I don't think so; the scaling laws haven't failed so far. I fully expect that making the model bigger and training it on more data will make it better at logic.
For a nice example with image models, Scott Alexander bet that newer image models would be able to do the things DALL-E 2 got wrong. [1] (That post also discusses how GPT-3 could do many things that GPT-2 got wrong.) He won the three-year bet just three months later, when someone with Imagen access ran his prompts. [2]
[1]: https://astralcodexten.substack.com/p/my-bet-ai-size-solves-...

[2]: https://astralcodexten.substack.com/p/i-won-my-three-year-ai...