We can expect a bot like this to miss context clues in natural language (though they seem to be getting better at that), but context is not necessary for a true and functional understanding of a programming language. Eliminating that kind of ambiguity was the point of creating such languages in the first place.
Using an API that doesn't exist, but logically should once the use cases are demonstrated, is not an example of lacking understanding; it is an example of advanced insight. A human might have implemented the necessary functions inline with the rest of the project, but if those functions express commonly applicable behavior, a common API is what humans would eventually converge on anyway, cleaning up the initial inline implementation and making the code more consistent and readable.
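The convergence described above can be sketched in miniature (the function names here are invented for illustration): an inline implementation comes first, then the recurring logic gets extracted behind a small named API.

```python
# Hypothetical example: the inline version a human might write first.
def report_inline(temps_c):
    # Unit conversion written inline, repeated wherever it is needed.
    temps_f = [t * 9 / 5 + 32 for t in temps_c]
    return max(temps_f)

# The common API the codebase would converge on once the use case
# recurs: the conversion gets a name and a single home.
def celsius_to_fahrenheit(t):
    return t * 9 / 5 + 32

def report_api(temps_c):
    return max(celsius_to_fahrenheit(t) for t in temps_c)
```

A model that emits a call to `celsius_to_fahrenheit` before anyone has written it is, on this view, jumping straight to the second version.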
On the other hand, one of the other posters asked it "to generate a parallax effect in Qt/QML. It simply used a QML Element with the name Parallax". Is this an insight, or is this answering "yes, I could" to "could you pass me the salt?" Maybe the line between the two is finer than I realized.
In general, it feels like copying part of the question ("write parallax code") into the answer is the easy part of the task…
To me there does seem to be some nuance here that's worth noticing. Some examples of this type of response really are too cheap and can be chalked up to a lack of training data or the like.
But in other cases it's not immediately obvious whether the answer the user got was their own fault for not specifying that they expected code that works without additional supporting libraries.
A language model can't reasonably be expected to understand an expectation of usability or fitness for purpose in a context the user didn't specify.
The user was implicitly expecting code that would function immediately, as written, with no additional supporting libraries included. That is different from code that would function correctly after the relevant existing packages have been downloaded, which is different again from code that would function only alongside supporting code from private libraries the user might not even have access to. Etc…
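These tiers can be made concrete with a small sketch (the module name `internal_tools` is invented for illustration): the same `import` statement means very different things depending on which tier the dependency sits in, and a script can probe that before failing.

```python
import importlib.util

def dependency_tier(module_name):
    """Classify whether a module is importable in this environment."""
    if importlib.util.find_spec(module_name) is not None:
        return "available"
    return "missing: needs installation or private code"

# Standard-library module: the script runs as written.
print(dependency_tier("csv"))
# Hypothetical private module the user may not have access to;
# in most environments this reports it as missing.
print(dependency_tier("internal_tools"))
```

A prompt that just says "create a script that does such and such" leaves the generated code free to land in any of these tiers.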
Yet any of those answers fits the same prompt, "create an R script that does such and such". The bot's lack of insight concerns the likely intention behind the prompt, not the requested language. I'd say that if it produces code fitting the syntax and grammatical structure of the requested language, that's enough to say it understands the language.