In limited, often zero- or one-shot probing of the model, yes. But run multiple generations and make recursive passes over the output, letting the model select and iterate on a target, and the utility goes way up. You can coax great output from small models, even the 125M-parameter GPT-Neo.
The process kinda goes like this:

1. Think of ten answers to this question: blah blah blah
2. From these ten answers, which are the best 3?
3. Of the three answers, which is the best?
4. Revise and edit the best answer to be simpler or more understandable.
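The steps above can be sketched as a small loop of chained prompts. Everything here is hypothetical naming: `generate` stands in for whatever completion call you use (an API client, a local GPT-Neo pipeline, etc.), and the prompt wording is just one way to phrase each stage.

```python
def iterate_answer(generate, question, n=10, k=3):
    """Chain the steps above: sample n answers, shortlist k, pick one, revise.

    `generate` is any callable that takes a prompt string and returns a
    completion string; it is deliberately left abstract here.
    """
    # Step 1: sample n candidate answers to the question.
    candidates = [generate(f"Think of an answer to this question: {question}")
                  for _ in range(n)]
    listing = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    # Step 2: have the model narrow the field to the best k.
    shortlist = generate(f"From these {n} answers, which are the best {k}?\n{listing}")
    # Step 3: have the model pick the single best answer.
    best = generate(f"Of these {k} answers, which is the best?\n{shortlist}")
    # Step 4: have the model revise the winner for clarity.
    return generate(f"Revise and edit this answer to be simpler "
                    f"and more understandable:\n{best}")
```

With n=10 this costs 13 model calls per final answer, which is the trade-off: you spend compute on sampling and selection instead of relying on a single lucky generation.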
Prompt engineering is a nascent field, and we haven't seen nuanced or sophisticated use of the tool yet. Most of the metrics reported in papers are barely better than a naive Turing test. It doesn't take much introspection to notice that even humans endlessly iterate and revise their output, and that the best extemporaneous speech doesn't match well-curated, edited material. It shouldn't surprise us that similar editing and revision processes benefit transformer output.