For a company trying to avoid a PR disaster, wouldn't it be better to look inside the model itself? Perhaps observe the weights when the AI says something you want to avoid in the future, and have a safeguard trigger when the model is heading in that direction.
ChatGPT was created from GPT via prompt engineering? An inverse ChatGPT, where the user answers questions instead of the other way around, would also have applications.
Agreed, it should really be rolled into fine-tuning. If you're building a model for PR, for example, it should already be fine-tuned so it can't say anything disastrous. Prompt engineering is only really relevant to general-purpose models, which aren't that useful to begin with (other than "fun" chatting).
If a tip were something like "use XML tags to give the model clarity," it wouldn't be sustainable as models change.
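For what that kind of tip looks like in practice, here's a minimal sketch of wrapping prompt sections in XML-style tags so the instructions and the input text have unambiguous boundaries. The tag names and helper function are arbitrary illustrations, not part of any model's API:

```python
# Hypothetical sketch of the "use XML tags" tip: wrap each prompt
# section in tags so the model can tell instructions apart from data.
# Tag names (<instructions>, <document>) are arbitrary, not an official API.

def build_prompt(instructions: str, document: str) -> str:
    """Return a single prompt string with tagged sections."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<document>\n{document}\n</document>"
    )

prompt = build_prompt(
    "Summarize the document in one sentence.",
    "Prompt engineering tips circulate faster than models change.",
)
print(prompt)
```

Whether a given model actually benefits from this structure is exactly the part that may not hold across model versions.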
I thought this was a meme, but I have actually seen some job postings for "prompt engineer".