(Unless you're intentionally going on a tangent --)
The discussion is about whether LLMs have "knowledge, understanding, and reasoning ability" the way humans do.
Your reply suggests that a bullshitter has the same cognitive abilities as an LLM, which would actually put LLMs on par with some humans. The claim that "it simply does not know when to stop" is wrong: it does stop -- it has a token limit, which human bullshitters don't. The claim that "It has no conceptual model that guides its output" is a bare assertion. And "It parrots words but does not know things" just begs the question: whether it knows things is exactly what's in dispute.
That's a lot of assertions with nothing to back them up. Thanks for your opinion, I guess?