This is true for something like raw GPT. For the chat models that have been specifically optimized for second-person "you" prompts, it's false. See the discussion in the link I provided, along with the leaked Copilot/Bing prompts.
Or, in other words: use a model in a way that fully takes advantage of how it was optimized, given the massive amounts of compute time and money that were intentionally burned to get it that way.
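As a concrete sketch of what "playing to the tuning" looks like: instruction-tuned chat models are trained on the standard system/user message format, with a second-person system prompt. The wording below is illustrative, not one of the leaked prompts.

```python
# Minimal sketch of a second-person "you" system prompt in the
# chat-message format that instruction-tuned models expect.
# The system prompt text here is made up for illustration.

def build_messages(user_question: str) -> list[dict]:
    system_prompt = (
        "You are a helpful assistant. "
        "You answer concisely, and you say so when you are unsure."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("Why do chat models respond well to 'you' prompts?")
```

The point is simply that addressing the model as "you" in the system slot matches the distribution it was fine-tuned on, rather than fighting it with raw-completion-style prompting.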