Good question.
It's important to understand that it doesn't know anything. It gives approximate answers based on statistical patterns in its training data.
For example, it has been noticed that ChatGPT can answer leetcode interview questions. That doesn't surprise me in the least; chances are its training data contained the algorithms associated with those questions, or slight variations of them.
It has been pretty good at generating boilerplate code. Those are repetitive tasks, so that is also expected.
Using it to explore the capabilities of frameworks, libraries and services has been great for the most part, but it sometimes tells me about things that don't exist or just plain don't work. The funniest one was when it told me I could use an annotation on a method to do something, but the annotation didn't exist. When I pointed this out to ChatGPT, it got stuck in a loop, producing variations of code that didn't work and claiming the previous answer was valid for version 3.0 of the library. The library doesn't have a version 3.0; it doesn't even have a 2.0.
Getting it to code is a wild ride. Sometimes it gets real close to giving me something workable, sometimes it gives me stuff that will never work.
However, it is awesome at explaining what code is doing when I paste in a snippet.