Whatever answer it gave you is not reliable.
Obviously, people find some value in some output of some LLMs. I've enjoyed the coding autocomplete stuff we have at work; it's helpful and fun. But "it's not qualified to answer my questions" is still true, even if it occasionally does something interesting or useful anyway.
*- This is a complicated term with a lot of baggage, but fortunately for the length of this comment, I don't think any sense of it applies here. An LLM doesn't understand its training set any more than the mnemonic "ETA ONIS"** understands the English language.
**- A vaguely name-shaped presentation of the most common letters in the English language, in descending order of frequency. Useful if you need to remember those for some reason, like guessing at a substitution cypher.
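For the curious, that guessing trick is simple enough to sketch. Here's a toy Python version (the ciphertext is made up, and real cracking needs far more text plus manual cleanup): it ranks the ciphertext's letters by frequency and pairs them with the mnemonic's order.

    from collections import Counter

    ETAONIS = "etaonis"  # the mnemonic: most common English letters first

    def frequency_guess(ciphertext):
        # Count only alphabetic characters, most frequent first.
        counts = Counter(c for c in ciphertext.lower() if c.isalpha())
        ranked = [letter for letter, _ in counts.most_common()]
        # Pair each frequent ciphertext letter with the next mnemonic letter.
        return dict(zip(ranked, ETAONIS))

    # Toy input ("The quick brown fox..." shifted by 3); far too short
    # for the frequencies to be trustworthy, but it shows the mechanics.
    print(frequency_guess("Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj"))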
Behavior indistinguishable from understanding is understanding. Sorry, but that's how it's going to turn out to work.
LLMs encode some level of understanding of their training set.
Whether that's sufficient for a specific purpose, or sufficiently comprehensive to generate side effects, is an open question.
* Caveat: with regard to introspection, this also assumes the model isn't specifically guarded against it, or opaquely lying.
Exactly like humans don't understand how their brains work.
Also, why are we comparing humans and LLMs when the latter doesn't come anywhere close to how we think, and operates under different limitations?
The 'knowledge' of an LLM sits in files on a filesystem and can be queried, studied, exported, etc. The knowledge of a human being is encoded in neurons and other wetware, with no simple binary circuitry doing dedicated, inspectable work. Decidedly less accessible than coreutils.
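To make "can be queried, studied, exported" concrete: a minimal Python sketch, assuming a PyTorch-format checkpoint at a made-up path ("model.pt"), that just lists the named weight tensors sitting on disk.

    import torch

    # Made-up path; any PyTorch checkpoint works the same way.
    # (Some checkpoints nest the weights under a key like "state_dict".)
    state = torch.load("model.pt", map_location="cpu")

    # The model's "knowledge" is just named tensors: listable,
    # measurable, and exportable like any other data.
    for name, tensor in state.items():
        print(name, tuple(tensor.shape), tensor.dtype)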