However, medical secrecy, processes and laws prevent such things, even if they would save lives.
I don't see ChatGPT being any different.
In my view, while statistical models would probably be an improvement (assuming all confounding factors are measured), the ultimate solution is not to get better at educated guessing, but to remove the guessing completely, with diagnostic tests that measure the relevant bio-medical markers.
This becomes even more true when you consider there is risk to every test. Some tests have obvious risks (radiation from CT scans, chance of damage from a spinal fluid tap). For other tests the risk is less obvious (sending you for a blood test and awaiting the results might not be a good idea if that delays treatment for an ailment that is already pretty certain). In the bigger picture, any test that costs money harms the patient slightly, since someone must pay for it, and for many people the money spent on extra tests comes out of money they might otherwise spend on gym memberships, better food, or working fewer hours - it is well known that the poor have worse health than the rich.
> Similar possibilities existed in medicine for 50 years
It would've been like building the Tower of Babel with a bunch of Raspberry Pi Zeros. While theoretically possible, it was practically impossible, and not (just) because of laws, but because of structural limitations (vector DBs of the internet solve that).
> Patents and byzantine regulations will stunt its potential
That's the magic of this technology: it's like AWS for highly leveraged niche intelligence. This arms an entire generation of rebels (entrepreneurs & scientists) to wage a war against big pharma and the FDA.
As an aside, this is why I'm convinced AI & automation will unleash more jobs and productivity like nothing we've seen before. We are at the precipice of a Cambrian explosion! It's also why the Luddites need to be shunned.
Imagine for example that 'disease books' are published each month with tables of disease probabilities per city, per industry, per workplace, etc. They would also have aggregated stats grouped by age, gender, religion, wealth, etc.
Your GP would grab the page for the right city, industry, workplace, age, gender etc. That would then be combined with the pages for each of the symptoms you have presented with, and maybe further pages for things from your medical history, and test results.
All the pages would then be added up (perhaps with the use of overlaid cellophane sheets with transparency), and the most likely diseases and treatments read off.
When any disease is then diagnosed and treatment commenced (and found effective or ineffective), your GP would fill in a form to send to a central book-printer so that next month's edition can be updated with what has just been learned from your case.
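The "adding up pages" step maps naturally onto summing log-likelihood ratios onto a prior, naive-Bayes style. Here's a minimal sketch of that idea; every number, disease name, and "page" below is made up purely for illustration, not real epidemiological data:

```python
import math

# Hypothetical "pages": each maps disease -> evidence weight (log-likelihood ratio).
# All numbers are invented for the sketch.
demographics_page = {"flu": 0.4, "lyme": -1.2, "asthma": 0.1}
symptom_fever_page = {"flu": 1.5, "lyme": 0.8, "asthma": -0.9}
workplace_page = {"flu": 0.2, "lyme": 1.1, "asthma": 0.6}

# Prior log-odds per disease (again, made up).
prior_log_odds = {"flu": -2.0, "lyme": -5.0, "asthma": -3.0}

def combine(pages, priors):
    """'Add up the pages': sum each page's evidence weights onto the prior
    log-odds, like stacking transparent cellophane sheets."""
    totals = dict(priors)
    for page in pages:
        for disease, weight in page.items():
            totals[disease] += weight
    # Convert combined log-odds back to probabilities for reading off.
    return {d: 1 / (1 + math.exp(-lo)) for d, lo in totals.items()}

probs = combine([demographics_page, symptom_fever_page, workplace_page],
                prior_log_odds)
for disease, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{disease}: {p:.3f}")
```

Each new monthly "edition" would just be a re-estimated set of weights; the GP-side arithmetic stays the same.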
Can you, though? It's not scalably confirmable. What you can say in a British accent to another human person in the physical world is not necessarily what you can say in unaccented text on the internet.
Medical secrecy, processes and laws have indeed prevented SOME things, but a lot of things have gotten significantly better due to enhanced statistical models that have been implemented and widely used in real-life scenarios.
Example: my favourite team is X. So if I want to keep it a secret when I ask for the history of championships of X, my local agent should ask for 100 teams, get all the data, and then report back only X. Eventually the mothership could figure out what we like (a large Venn diagram), but this is not in anyone's interest, and thus will not happen.
Also, this way the local agent will be able to learn about and remember us, at a cost.
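The decoy-query idea is straightforward to sketch. Below is a toy version (the team names, the `fetch_history` stand-in for the remote API, and the batch size are all invented for illustration): the agent requests several teams, including the real one at a random position, and keeps only the target's data locally.

```python
import random

def fetch_history(team):
    # Stand-in for a call to the remote service; returns fake data here.
    return f"championship history of {team}"

def private_query(target_team, all_teams, k=5):
    """Hide the real interest by fetching k teams (target plus k-1 decoys),
    then returning only the target's data locally."""
    decoys = random.sample([t for t in all_teams if t != target_team], k - 1)
    batch = decoys + [target_team]
    random.shuffle(batch)  # don't leak the target by its position in the batch
    results = {team: fetch_history(team) for team in batch}
    return results[target_team]

print(private_query("X", ["X", "A", "B", "C", "D", "E", "F"]))
```

The server only ever sees a batch of k queries per request; over many requests it can still narrow down the intersection, which is the "eventually the mothership will figure it out" caveat above.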
The medical possibilities that will be unlocked by large generative deep multimodal models are on an entirely different scale from "statistical diagnoses." Imagine feeding in an MRI image, asking if this person has cancer, and then asking the model to point out why it thinks the person has cancer. That will be possible within a few years at most. The regulatory challenges will be surmounted eventually once it becomes exceedingly obvious in other countries how impactful this technology is.
Your deep multimodal models or the MRI imaging?
What you are essentially saying is the signal is so subtle that only a large NN can reliably extract it.
While that may well be the case, it would be better to have a scan/diagnostic that doesn't need that level of signal processing to interpret.
For example - you don't need a large generative deep multimodal model to read a Covid antigen or PCR test.