All three claims of gold-medal performance on IMO 2025 that I'm aware of solved the first five problems, which were designed to be solvable with standard techniques, but got stumped on the sixth, which was a bit more unusual. So it does seem like state-of-the-art models solve competition problems by recognizing what kind of problem they're looking at and applying a corresponding solution template. That's not too different from human competitors exploiting common question patterns, but humans seem to degrade more gracefully, falling back to a more exploratory mode when none of the standard tricks apply.