“Enough” for what exactly?
Furthermore, OpenAI has to dig itself out of a massive hole of debt: they've already lost $9B and plan to lose another $75B over the next two years.
First of all, your numbers are off by at least an order of magnitude: even GPT-5 can generate 1,000 tokens for 1c, which is much more than a paragraph.
And again, that's why my entire argument revolved around OpenAI no longer needing to chase the technological edge. DeepSeek generates 25k tokens per cent, and it's still a gigantic model. If you use a model comparable in size to gpt-oss-120b, you can push that to 100-200k tokens per cent (going from 32GB worth of active parameters, 32B at q8 for DeepSeek, down to 4GB, 8B at MXFP4 for gpt-oss-120b). That would mean serving more than 100 answers per cent spent on inference.
If they can serve 0.1c worth of ads per request, that's a 90% gross margin for you.
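To make the arithmetic explicit, here's a quick back-of-envelope sketch of the claim above. Every figure in it is an assumption taken from the comment (DeepSeek's 25k tokens/cent, the 32GB vs 4GB active-parameter footprints, 0.1c of ad revenue per request, and a ~2k-token answer length), not a measured number:

```python
# Back-of-envelope check of the inference-cost argument.
# All inputs are assumptions from the comment, not measured data.

deepseek_tokens_per_cent = 25_000     # claimed DeepSeek throughput per cent
deepseek_active_gb = 32               # 32B active params at q8 ~= 32 GB
small_model_active_gb = 4             # 8B active params at MXFP4 ~= 4 GB

# Assume inference cost scales roughly with active-parameter memory traffic.
scale = deepseek_active_gb / small_model_active_gb        # 8x
small_model_tokens_per_cent = deepseek_tokens_per_cent * scale  # 200,000

tokens_per_answer = 2_000             # assumed generous answer length
answers_per_cent = small_model_tokens_per_cent / tokens_per_answer  # 100

cost_per_answer_cents = 1 / answers_per_cent              # 0.01c per answer
ad_revenue_per_answer_cents = 0.1     # hypothetical ad revenue per request

gross_margin = 1 - cost_per_answer_cents / ad_revenue_per_answer_cents
print(f"{answers_per_cent:.0f} answers/cent, {gross_margin:.0%} gross margin")
# -> 100 answers/cent, 90% gross margin
```

The margin is very sensitive to the answer-length assumption: at 1,000 tokens per answer the same inputs give 200 answers per cent and a 95% margin.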