In this case, all LLMs perform a fixed amount of computation per generated token, but not all AI systems need to. An LLM on its own is useless. Current SoTA research includes inserting 'pause' tokens into the sequence; combined with a surrounding AI system that understands them, this would enable variable-time 'thinking'.
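
The pause-token idea can be sketched roughly like this (a hypothetical illustration, not an actual implementation; the token string, function names, and pause count are all made up for the example): the system appends extra pause tokens so the model gets additional forward passes before committing to an answer, then filters them out of the output.

```python
# Hypothetical sketch of the pause-token idea: extra <pause> tokens buy
# the model extra forward passes ("thinking" steps) before the answer.

PAUSE = "<pause>"  # assumed learnable special token

def insert_pauses(prompt_tokens, n_pauses):
    """Append n_pauses pause tokens after the prompt."""
    return prompt_tokens + [PAUSE] * n_pauses

def strip_pauses(output_tokens):
    """The surrounding system ignores pause tokens when reading output."""
    return [t for t in output_tokens if t != PAUSE]

tokens = insert_pauses(["What", "is", "2", "+", "2", "?"], n_pauses=4)
print(tokens)                      # prompt followed by four <pause> tokens
print(strip_pauses(tokens + ["4"]))  # pauses removed, answer kept
```

The point of the sketch is the division of labor: the model alone just sees more tokens; it is the system around it that chooses how many pauses to insert (i.e., how long to 'think') and discards them afterward.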