So, hypothetically, a general-intelligence-capable architecture isn't allowed to specialize in a particular task without losing its GI status? I.e. a trained radiologist wouldn't be a general intelligence? After all, their ability to produce text is really just part of their radiologist function of outputting data, right?
It's impossible for humans to know a lot about everything, but LLMs can. So an LLM that sacrifices all that breadth for a specific application is no longer a general intelligence, since its shortcomings would show more obviously.
They're still very bounded systems (not some galaxy brain), and training them is expensive, so learning tradeoffs have to be made. The tradeoffs are just different from the ones humans make. Note that they're still able to interact via natural language!