Somewhat, yes. But I think it better fits a description somewhere between a sociopath and Harry G. Frankfurt's bullshitter: someone with utter disregard for the truth, who will say anything in the moment that deals with the question qua problem. It's essentially defensive, and uses language instrumentally only in that pursuit.
The problem here is not what LLM-AI is capable of, but what capabilities people ascribe to it. The danger comes from what we believe about it, or project and personify onto it. It has no essential concept of truth. But we want it to.