"In the sacred tongue of the Omnissiah we chant..."
In that universe, though, they only got to this point after a big war against a robot uprising. So hopefully we're past that in the real world. :-)
1. Users and, more importantly, makers of those tools can't predict their behaviour in a consistent fashion.
2. They require elaborate procedures that don't guarantee success, and whose effects, and the magnitude of those effects, are poorly understood.
An LLM is a machine spirit through and through. Good thing we have copious amounts of literature from a canonically unreliable narrator to navigate this problem.
Welcome to 30k made real
I was used to this kind of nifty quirk being things like FFTs existing, or CDMA extracting signals from what looks like the noise floor, not computers suddenly starting to do language at us.
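The CDMA point really is that striking, and it's easy to demonstrate: a signal spread with a known pseudo-random code can be recovered by correlation even when it sits well below the noise floor. A toy sketch (not a real CDMA modem, and the spreading factor and noise level here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

chips_per_bit = 4096
code = rng.choice([-1.0, 1.0], size=chips_per_bit)  # known spreading code
bits = np.array([1, -1, 1, 1, -1], dtype=float)     # data to send

# Spread each bit across many chips, then bury the result in strong noise
# (noise power is ~20 dB above the per-chip signal power).
tx = np.concatenate([b * code for b in bits])
rx = tx + rng.normal(scale=10.0, size=tx.size)

# Despread: correlate each bit-length chunk against the code and take the
# sign. The correlation gain of the long code lifts the bit decision well
# out of the noise.
decoded = [int(np.sign(rx[i * chips_per_bit:(i + 1) * chips_per_bit] @ code))
           for i in range(len(bits))]
print(decoded)  # recovers the original bits despite the noise
```

To the eye (or a spectrum analyzer), `rx` looks like pure noise; the structure only appears once you correlate with the right code, which is also why different codes let many transmitters share the same band.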