The idea is that the canned message would be an attempt at persuading them. I really don't trust that an LLM prompted to persuade someone to seek therapy would yield better results.
Modern reasoning models can be trusted to follow nuanced safety instructions; models like GPT-4o can't, and will make bizarrely inaccurate statements from time to time.