You can think of science itself as lossy compression.
People do this for psychological reasons: natural language processing makes us prone to anthropomorphize. It's why people treat Alexa in human-like ways, or even ELIZA back in the day. You're making the same mistake in your description. You're not teaching ChatGPT anything; you're only ever querying a trained, static model. It remains in the same state. It isn't "scatterbrained": that's a human quality, and applying it here is incorrect. Ted Chiang points to this mistake in the article: mistaking lossiness in an AI model for the kind of error a human would make.
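(A minimal sketch of that point, using a toy PyTorch model rather than ChatGPT itself: running a query is a pure forward pass, and the weights are bit-for-bit identical afterwards.)

    import torch

    # Toy stand-in for a trained, static model: one linear layer.
    model = torch.nn.Linear(4, 2)
    model.eval()

    # Snapshot the parameters before "querying" the model.
    before = [p.detach().clone() for p in model.parameters()]

    # Inference only: no optimizer, no backward pass, no weight update.
    with torch.no_grad():
        _ = model(torch.randn(1, 4))

    # Querying taught it nothing; every parameter is unchanged.
    assert all(torch.equal(b, a) for b, a in zip(before, model.parameters()))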
A photocopier making bad copies is just a flawed machine, but because you don't treat ChatGPT like a machine, you read its performing worse as a sign that it's smarter. Ironically, if it reproduced your language 100%, you'd likely be more skeptical, even if that were due to real underlying intelligence.
When you tell me what I allegedly think and under what condition I'd be "more skeptical", it's kind of irritating. (Maybe I deserve it for starting this thread with a combative tone. By the time I came back meaning to edit that first comment, there was already a reply.)