It doesn't seem to have any "meta" understanding; its processing is the equivalent of subconscious thought only.
If I asked a human to program in a language they didn't understand, they'd say they couldn't; or they'd ask for further instructions or a reference to the documentation; or they'd suggest asking someone else to do it; or they'd eventually figure out how to write in the language by experimenting with small programs and gradually working up to more complex ones.
GPT-4 and friends "just" produce an output that seems like it could plausibly answer the request. If that output is wrong, the model just has another go, using the same generative technique as before plus whatever extra direction the human decides to give it. It doesn't think about the problem.
("just" doing a lot of work in the above sentence: what it does is seriously impressive! But it still seems to be well behind humans in capability.)