On the one hand, there are SO many reasons why using LLMs to help people make health decisions should be an utterly terrible idea, to the point of immorality:
- They hallucinate
- They can't do mathematical calculations
- They're incredibly good at being convincing, no matter what junk they are outputting
And yet, despite being very aware of these limitations, I've already found myself using them for medical advice (for pets so far, not yet for humans). And the advice I got seemed useful, and helped kick off additional research and useful conversations with veterinary staff.
Plenty of people have very limited access to useful medical advice.
There are plenty of medical topics which people find embarrassing, and for which they would prefer - at least initially - to talk to a chatbot rather than to their own doctor.
Do the benefits outweigh the risks? As with pretty much every ethical question involving LLMs, there are no obviously correct answers here.
I say this without snark - it is simply true. I should also mention that a good quarter of the medical care folks who have assisted me have gone above and beyond in exceptional ways. It is a field of extremes.
Tell me you never taught service courses for pre-meds without telling me you never taught service courses for pre-meds ;)
> They hallucinate; they're incredibly good at being convincing, no matter what junk they are outputting
Describes about a third of the doctors I've interacted with, tbh.
> And the advice I got seemed useful, and helped kick off additional research and useful conversations with veterinary staff.
It's similar to "Dr. Google". Possible to misuse. But also, there's nothing magical about the medical guild initiation process. Lots of people are smart enough to learn and understand the bits of knowledge they need to accurately self-diagnose and understand tradeoffs of treatment options, then use a medical professional as a consultant to fill in the gaps and validate mental models.
Unfortunately, most medical professionals aren't willing to engage with patients in that mode and would rather misdiagnose than work with an educated patient. (My brother-in-law - a medical doctor, and a fairly accomplished one at that - has been chided for using "Dr. Google" at an urgent care before.)
> Do the benefits outweigh the risks? As with pretty much every ethical question involving LLMs, there are no obviously correct answers here.
At the end of the day, it doesn't matter. At least in the US, you won't have access to any meaningful treatment without going through the guild anyways.
I don't think that using LLMs for medical diagnosis is a good idea, but it's important to admit when the status quo is so thoroughly hollowed out of any moral or practical justification that even terrible ideas are better than the alternative of leaving things as they are.
This is incredibly dangerous. Lots of people are smart enough to research questions about their condition/care to discuss with their medical professional, but they absolutely should not be self-diagnosing. It is very reasonable to ask "I read about X, what do you think?", but you should not be self-diagnosing anything (even physicians cannot reliably do this for themselves, by the way).
This is like saying lots of doctors are smart enough to learn and understand the bits of knowledge they need to accurately train LLMs and put them in charge of [life-threatening system].
> But also, there's nothing magical about the medical guild initiation process.
You're right, it's not magical. It's just 10+ years of medical training.
Case in point: I'm a big fan of Andrew Huberman (https://www.youtube.com/@hubermanlab). He's quite prolific and his presentations pack a lot of data. Just taking all of that in would require a lot of time. Being able to have it condensed and indexed would be wonderful.
There are plenty of others like him (e.g., Rhonda Patrick, Peter Attia). High-quality stuff, but there's literally not enough time to take it all in.
Summarizing academic research is almost entirely unrelated to the practice of medicine. Medical diagnosis and treatment are different from more typical uses of LLMs in lots of important ways.
LLMs also seem capable of anonymizing large chunks of medical data that we would not normally want to share. Who knows - perhaps such anonymized data could even become a means of payment.
> There are plenty of medical topics which people find embarrassing, and for which they would prefer - at least initially - to talk to a chatbot rather than to their own doctor.
I don't think you would trust an AI chatbot alone to tell you how many pills of a medication to take instead of going to a human doctor, especially when these AI models risk hallucinating terrible advice and their output is unexplainable, as opaque as a black box. The same goes for 'full self-driving'.
I don't think one would trust these deep-learning-based AI systems in very high-risk situations unless they are highly transparent and can thoroughly explain themselves rather than regurgitating what they have already been trained on.
It is like trusting an AI to pilot a Boeing 737 MAX end-to-end with zero human pilots on board. No one would board a plane piloted by a black-box AI. (Autopilot is not the same thing.)
Yes, I think people would indeed take pills prescribed by AI, just make it a robot wearing a lab coat.
Also pilots! I mean, pilots kill themselves and a planeload of people more often than you think. Of course people would take a black-box AI that works.
Take fine-tuning trainers to "conferences", perhaps?
Will they try to make their own?
What a next few years this is going to be...
As a physician, I would not be surprised if the medical use of these tools ends up having similar value.
I recently used ChatGPT because my Google searches were failing to help me remember the name of the standard for securely sharing passwords between systems. My searches kept turning up end-user password-management topics. ChatGPT got me to SCIM after one question and one correction.
I could absolutely see a doctor using something like ChatGPT to supplement their memory the way I did. I don't think anyone recommends that doctors just trust ChatGPT, but rather that they use it as a supplementary tool alongside their own expertise. Even if a question is outside their specific medical domain, it could help them build a basis for a conversation with one of their specialist colleagues.