I'm with you until this pair of sentences, because I believe you're confusing ontological subjectivity (which is fine, for our purposes) with epistemic subjectivity (which isn't).
Hallucination is an ontologically subjective phenomenon: it requires an experiencer to experience it. "Making shit up" similarly implies an "intentional stance" (Dennett), wherein the AI agent is constructing a world model as it interacts with the world. Neither is required to arrive at a "stochastic parrot" that spouts nonsense.
"Generating nonsense" is closer to what the AI is doing. It's generating text that we are unable to interpret, not revealing its errors of reasoning through its speech. It's not reasoning; it's generating tokens.
tl;dr: Ontological vs. epistemic subjectivity. There's no reason to affirm the AI is hallucinating, because there's no reason to affirm it's experiencing anything.
(Please forgive my multiple edits; it's a clumsy-words kind of day.)