This week I used ChatGPT to help “diagnose” a medical issue my senior dog has developed with his eye. We noticed he very suddenly started walking into furniture, and his left eye has become sunken and half covered by his third eyelid. Our small town’s farm vet wasn’t equipped to deal with eye issues, and the second vet we saw was understaffed so they had a traveling vet in for the day look at our dog. We weren’t impressed after he couldn’t figure out how to work his eye examination tool (he was looking through it backwards at first, shining the light into his own eye) and then gave up and just prescribed an antibiotic/ointment to our dog and told us to come back in a week.
Obviously we’d tried to google the symptoms, but I’d heard anecdotes of people feeding their own medical issues into ChatGPT and getting good feedback, so I figured I’d do the same for my dog. It gave me a lot of detailed information about five different things that could be causing the problem with his eye. I questioned it about each one, and it tried to rule out some of the causes as best it could whenever I could fill in the details it asked about. All the while it cautioned me that a vet would need to examine and test him to truly determine whether one of these things was the problem.
We’re heading back to the vet tomorrow for his recheck, ready to ask about a couple of these things. I’ve been very bearish on ChatGPT and LLMs, but it’s been genuinely useful to me in this situation.
I’m still a little cautious about the info it gave me, though, because I keep thinking about all the times I’ve played with it and had it give me broken Lua/F# code, or Kusto queries that call functions which simply don’t exist. This could easily be one of those situations: I’m not a veterinarian, so I can’t easily spot any incorrect or misleading information it gave me.
Edit: the five conditions it listed that could have caused the sudden eye problem for my dog are entropion, ectropion, enophthalmos, glaucoma, and trauma.