Thus, unless one wanted to make a "ParentalLLM" through which one could submit questions to an ourhouse.internal/api/v1/prompt API and get back "chat" responses over SMS or some existing chat protocol, I'd carefully consider the risk versus reward of bringing LLMs into that process.
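For what it's worth, a minimal sketch of what that gateway might look like — the JSON body shape, the stubbed LLM call, and the stubbed SMS hand-off are all my assumptions here, not a real service:

```python
# Hypothetical "ParentalLLM" gateway sketch: a question arrives as a JSON
# body (imagined as a POST to the /api/v1/prompt endpoint mentioned above),
# gets answered by an LLM, and the answer goes back out over SMS.
# The LLM call and SMS sender are stubs, just to show the shape.

import json
from dataclasses import dataclass

@dataclass
class PromptRequest:
    child: str       # assumed field: which kid asked
    question: str    # assumed field: the question text

def call_llm(question: str) -> str:
    # Stub: a real deployment would call whatever model you've vetted.
    return f"(model answer to: {question})"

def send_sms(number: str, body: str) -> str:
    # Stub: a real version hands off to an SMS provider.
    # A single GSM-7 SMS segment is 160 chars, so truncate long answers.
    return body[:160]

def handle_prompt(raw_json: str, reply_number: str) -> str:
    req = PromptRequest(**json.loads(raw_json))
    answer = call_llm(req.question)
    return send_sms(reply_number, answer)

msg = handle_prompt('{"child": "sam", "question": "why is the sky blue?"}',
                    "+15555550100")
print(msg)
```

The point of funneling everything through one endpoint like this is that the parent keeps a single choke point for logging, filtering, and rate-limiting, rather than handing the kid an open-ended chat app.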
I also recognize that I came up in the era of autodidactic lessons, which predisposes me to "teach yourself how to fish" style approaches, but I just wanted to draw attention to the trade-offs that come with those LLM interactions.