That said, if one's job is just to write prompts (hence "prompt engineer" without the "-ing")... maybe that's actually bad.
2% actually work on OpenAI systems.
MLE (machine learning engineer), just a different name for it.
What really defines an AI engineer:
A) Already used up their $5 of OpenAI API credit
B) Has llama.cpp/ollama etc. in their CLI with a library of models
C) Has 64GB+ VRAM, which makes the computer extremely loud (possibly gas-powered)
Basically, the researcher develops it on their laptop; the AI engineer helps deliver it to the customer's laptop.
In its simplest form (again, from a web app engineer's viewpoint), LLMs ingest text, organize that data (a very complicated process I don't fully understand, nor need to), and provide APIs that output text given some set of inputs. This resembles working on Elastic or other search-engine technology very closely. One caveat: the APIs being called likely maintain state, in the sense of keeping track of inputs, context, and outputs. I would classify someone working on this as more of an API/backend engineer. They need to understand the AI/LLM data model being used, which is very specific, and the use cases around it, but they did not engineer that data model themselves; that was likely some other R&D engineers.
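To make the "stateful API" point concrete: in practice the state is often just client-side bookkeeping, where the caller accumulates the conversation and resends it on every turn. A minimal sketch, assuming nothing about any particular vendor's API (`fake_llm` below is a hypothetical stand-in for a real model endpoint):

```python
# Sketch of the "stateful" chat pattern: the client keeps the conversation
# history and ships the whole thing to the model on every call.

def fake_llm(messages):
    # Hypothetical stand-in: a real call would POST `messages` to a model
    # API and return its generated reply.
    last = messages[-1]["content"]
    return f"echo: {last}"

class ChatSession:
    def __init__(self, system_prompt):
        # The "state" is just this growing list of messages.
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = fake_llm(self.messages)  # full context goes out every turn
        self.messages.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession("You are a search assistant.")
print(session.ask("find docs about sharding"))
print(len(session.messages))  # system + user + assistant
```

This is why the work feels like backend engineering: the hard parts are context management, truncation, and persistence around the model call, not the model itself.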
Edit to add: AI engineer to me is the R&D people I reference above - the ones building the data model that others use.
I'm a teacher who trains the kids you will be hiring in a few years. Tell me what to teach them.