The things they did to restrict the model don't demonstrate that it would otherwise actually have an opinion though. They just mean that it's being (arguably artificially) prevented from generating certain texts that appear to endorse a certain point of view.
A similar example that demonstrates my point, while perhaps being a little more clear-cut, is the Amazon recruiting AI that was shut down because it was unintentionally amplifying bias present in its training set.[1] I don't think we can conclude from that that the model actually held misogynistic opinions, even though the results it produced were misogynistic.
[1] https://www.reuters.com/article/us-amazon-com-jobs-automatio...
I think this distinction is neither useful nor interesting.
I started this thread simply by pointing out that the GP had applied a falsifiability standard to Chomsky that they weren't applying to their own reasoning when they said, more or less, that the model would have opinions were it not for artificial restrictions imposed by the programmers.[1] If whether the model has an opinion is purely a matter of definition, that claim seems inherently unfalsifiable. If we could establish a more objective basis, it could become falsifiable; I just don't know what that basis might be.
[1] Apologies if this is a mischaracterization - I genuinely don't intend to misrepresent your position if so.