Patients are guilted into allowing the doctors to use it. I have gotten pushback when I asked to have it turned off.
The messaging is that it all stays local. In reality it doesn't: when I last looked, it was running on Azure OpenAI in Australia.
I spoke to a practice nurse a few days ago to discuss this.
She said she didn’t think patients would care if they knew the data would be shipped off site. In her view, people's problems are not that confidential, and their health data is probably online anyway, so who cares.
If you're wondering why this guy doesn't just check the AI scribe notes: probably because, with the amount of detail it writes, he'd be better off writing a quick SOAP note himself.
That memo is how you make staff hide things instead of asking for help.
The scarier part though is that LLM-written clinical notes probably look fine. That's the whole problem. I built a system where one AI was scoring another AI's work, and it kept giving high marks because the output read well. I had to make the scorer blind to the original coaching text before it started catching real issues. Now imagine that "reads well, isn't right" failure mode in clinical documentation.
Nobody's re-reading the phrasing until a patient outcome goes wrong.
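The poster doesn't spell out their exact blinding setup, but one common way to stop a judge model from rewarding fluency is to never let it read the polished output end-to-end: decompose the note into bare claims and verify each one against the source separately. A minimal sketch of that idea, assuming an OpenAI-style chat API; the model name, prompts, and helper functions are all my invention, not the poster's system:

```python
# Sketch of a "blind" LLM-as-judge: the scorer never sees the fluent
# note as a whole, only stripped-out claims, so good prose can't carry
# the score. All names and prompts are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def extract_claims(note: str) -> list[str]:
    """Split the generated note into discrete factual claims."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   "List every discrete factual claim in this note, "
                   "one per line, with no commentary:\n\n" + note}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [ln.strip() for ln in lines if ln.strip()]

def claim_supported(claim: str, source: str) -> bool:
    """Check one bare claim against the source transcript."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   f"Source transcript:\n{source}\n\nClaim: {claim}\n\n"
                   "Is this claim directly supported by the transcript? "
                   "Answer YES or NO only."}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def blind_score(note: str, source: str) -> float:
    """Fraction of the note's claims the source actually supports."""
    claims = extract_claims(note)
    if not claims:
        return 0.0
    return sum(claim_supported(c, source) for c in claims) / len(claims)
```

The point of the structure is that the judge only ever sees material it can actually check, so "reads well" stops being a signal at all.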
I think there is a chance these systems will change things so the note is no longer the fundamental record of the encounter. Instead, separate artifacts get generated for each entity that needs one: billing gets its view, scheduling gets its view, and so on. Hopefully that gives practitioners a chance to get back to focusing on the patient instead of making sure the note captures one more billable code.

Of course, the negative is just as likely here. As practitioners spend less time on the note, they probably won't get that time back with individual patients; they'll be expected to see more patients instead. It will also likely mean higher bills as health systems squeeze more out of every encounter. There is no perfect outcome when profit is the driving motivator, but with this much change happening, I can only hope it shakes the industry up enough to land on a new, better optimum.
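To make the artifact-per-entity idea concrete, here's a toy sketch; every field and function name is invented, and no real EHR schema is implied. The encounter stays the single canonical record, and each downstream consumer gets a projection containing only what it needs:

```python
# Toy sketch of "one canonical record, many derived artifacts".
# Field and function names are invented; no real EHR schema is implied.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Encounter:
    patient_id: str
    clinician_id: str
    transcript: str                                       # raw record of the encounter
    diagnoses: list[str] = field(default_factory=list)    # e.g. ICD-10 codes
    procedures: list[str] = field(default_factory=list)   # e.g. CPT codes
    follow_up_weeks: Optional[int] = None

def billing_view(e: Encounter) -> dict:
    """Billing sees codes, never the narrative transcript."""
    return {"patient": e.patient_id,
            "diagnoses": e.diagnoses,
            "procedures": e.procedures}

def scheduling_view(e: Encounter) -> dict:
    """Scheduling sees who to rebook and when, nothing clinical."""
    return {"patient": e.patient_id,
            "clinician": e.clinician_id,
            "follow_up_weeks": e.follow_up_weeks}
```

Each view is a pure projection, so "note quality" becomes a separate question from what billing or scheduling actually consumes.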
We had to correct them at the end of the consultation.
This is just about not using free/public AI tools.
Heidi is frustratingly consistent at hallucinating stuff. I've seen it in almost all of the dozen or so summaries I've had from medical people recently (surgeon, physio, consultant). A GP I know tried it for a month and then was like "it's not worth the risk exposure to me or my patients".