> prompts with >272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.
which a long session hits quickly. So that's 2x, the first lever.
Then there is /fast mode, which they state costs 2x more (for a 1.5x speedup).
And then there is the model base price ($2.50 vs $1.75), which is about a 43% increase. Combined, that's 2 × 2 × (2.50 / 1.75) ≈ 5.7x total token cost in fast mode with a large context. (Sorry for the confusion, I thought it was 8x because I thought gpt-5.3-codex was $1.25.)
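The arithmetic above can be sketched quickly; the base prices and multipliers below are the figures quoted in this thread, not official pricing:

```python
# Input-token pricing figures as quoted in the thread (assumptions, not official docs).
BASE_OLD = 1.75        # $/M input tokens, previous model
BASE_NEW = 2.50        # $/M input tokens, newer model (per the post)
LONG_CONTEXT = 2.0     # prompts >272K input tokens: 2x input price
FAST_MODE = 2.0        # /fast mode: stated 2x cost

total = LONG_CONTEXT * FAST_MODE * (BASE_NEW / BASE_OLD)
print(f"total input-cost multiplier: {total:.1f}x")   # ~5.7x

# The earlier 8x guess assumed a $1.25 base price instead:
guessed = LONG_CONTEXT * FAST_MODE * (BASE_NEW / 1.25)
print(f"with an assumed $1.25 base: {guessed:.0f}x")  # 8x
```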