Then you start "digging deeper" on a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to keep assuming that the text you are getting is accurate.
This has happened so many times with the computing/programming topics I usually prompt about that there is no way I would trust a response from an LLM on health issues I am not already very familiar with.
Given that the LLM will give incorrect information (after lulling people into a false sense that it is accurate), who is going to be responsible when someone makes themselves worse off through self-diagnosis, even with a privacy-focused service?