All of the benchmark game problems either prescribe using a fixed algorithm or have an obvious optimal asymptotic complexity. The remaining challenge is generally to reduce the constant factor as much as possible.
This is just not what a lot of computationally expensive problems look like in practice. As a simple example, any practical solution to an NP-hard problem will be full of tradeoffs; often you just want something that's good enough, and then you get to choose and tune an algorithm for your particular problem space to find the sweet spot between running time and solution quality.
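To make the tradeoff concrete, here is a minimal sketch (not from the original comment) of a classic case: minimum vertex cover is NP-hard, but a greedy 2-approximation runs in linear time. Whether "at most twice optimal" is good enough is exactly the kind of call you have to make per problem.

```python
def approx_vertex_cover(edges):
    """Greedy 2-approximation: take both endpoints of every edge that
    is not yet covered. The result is at most twice the optimal size,
    but the algorithm is linear in the number of edges."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Triangle plus a pendant edge: the optimal cover has 2 vertices,
# the greedy cover may use up to 4 -- fast, but not exact.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
cover = approx_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
```

An exact solver would need exponential time in the worst case; for many workloads the approximation is the better engineering choice.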
The benchmark games also have fairly simple and obvious data structures; most of them just deal with arrays and strings. Real-world problems often require you to make difficult choices about representation (where one is optimal in some situations, another in a different set of situations, but you have to handle all of them). Example: a graph's adjacency information can be represented as a bit matrix, an integer/float matrix (if edges can have weights), or an associative array of sets, to name just a few implementation options. Depending on how your graph is structured (sparse vs. dense, connectivity) and which algorithms you need, one or the other can be optimal. Clever choices can make orders of magnitude of difference in performance.
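A small sketch of two of those representations, assuming an undirected, unweighted graph: a dense matrix gives O(1) edge tests but costs O(n²) memory regardless of edge count, while an associative array of sets stays proportional to the number of edges and makes neighbour iteration cheap.

```python
edges = [(0, 1), (1, 2), (3, 4)]
n = 5  # number of vertices

# Dense representation: n x n boolean adjacency matrix.
matrix = [[False] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = True

# Sparse representation: associative array (dict) of neighbour sets.
adj = {v: set() for v in range(n)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

assert matrix[0][1] and not matrix[0][4]  # O(1) edge test
assert adj[1] == {0, 2}                   # cheap neighbour iteration
```

For a sparse graph with millions of vertices, the matrix is infeasible; for a dense graph with heavy edge-membership queries, it can be the fastest option by far.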
I'll offer you a concrete example: computing the factorial of large numbers has basically three well-known algorithms: (1) naive multiplication, (2) divide-and-conquer, (3) prime decomposition. Each subsequent approach improves performance over its predecessor, but is also increasingly difficult to implement.
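A sketch of the first two approaches (prime decomposition omitted for brevity). Divide-and-conquer helps for large n because it multiplies numbers of comparable size, which is where fast big-integer multiplication pays off; the naive loop keeps multiplying a huge accumulator by a tiny factor.

```python
def fact_naive(n):
    """Naive multiplication: accumulate 1 * 2 * ... * n left to right."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fact_dc(n):
    """Divide-and-conquer: multiply 1..n as a balanced product tree,
    so operands at each level are of similar magnitude."""
    def prod(lo, hi):
        if lo > hi:
            return 1
        if lo == hi:
            return lo
        mid = (lo + hi) // 2
        return prod(lo, mid) * prod(mid + 1, hi)
    return prod(1, n)

assert fact_naive(20) == fact_dc(20) == 2432902008176640000
```

Both are a handful of lines here, but the asymptotic gap only shows up with a big-integer library and large n; the prime-decomposition variant is faster still and noticeably harder to get right.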
Now, it so happens that this particular problem is well-researched, so you can look it up, but you encounter similar problems all the time where no ready-made solution exists on Stack Overflow or in the literature. And then you have to weigh the time spent researching and implementing better solutions against the time gained from the increased performance.