But sure, if you think AGI means solving polynomials, you've done it!
I never stated "AGI means solving polynomials". Based on how far LLMs have come, function approximation seems to play a role in it.
I see two possible problems.
The first is whether this method can express all the functions that a NN can express. High-order polynomials usually have huge spikes outside the region where they are fitted, whereas the functions learned by NNs usually give smoother interpolations. Those high exponents make me very worried.
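As a toy illustration of the spike problem (not the proposed method itself, just plain least-squares polynomial fitting), here's what happens when you fit a degree-9 polynomial to a smooth function on [-1, 1] and then evaluate it a bit outside that interval:

```python
import numpy as np

# Fit a degree-9 polynomial to sin(3x) sampled on [-1, 1].
x = np.linspace(-1.0, 1.0, 20)
y = np.sin(3 * x)
coeffs = np.polyfit(x, y, deg=9)

# Inside the fitted region the approximation is excellent...
inside = np.polyval(coeffs, 0.5)
# ...but outside it the high-order terms dominate and the fit blows up.
outside = np.polyval(coeffs, 3.0)

print(f"at x=0.5 (inside):  fit={inside:.4f}, true={np.sin(1.5):.4f}")
print(f"at x=3.0 (outside): fit={outside:.1f}, true={np.sin(9.0):.4f}")
```

The fit is nearly exact where the data lives, but the x^9 term grows so fast that a short extrapolation is already off by orders of magnitude more than the function's actual range.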
The second is whether it's possible to train them. I use Excel's solver to fit a lot of experimental data to theoretical formulas with few parameters. In my experience, it's important to guess initial values of the parameters that are close enough to the best values; otherwise gradient descent just gets stuck in a horrible local minimum that is completely unrelated to the solution you are looking for.
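To make the initial-guess problem concrete, here's a small sketch with a made-up model y = a*sin(b*x) (a stand-in for a theoretical formula with few parameters) fitted by plain gradient descent on the squared error, once from a nearby starting point and once from a bad one:

```python
import numpy as np

def fit(x, y, a0, b0, lr=1e-3, steps=20000):
    """Plain gradient descent on the mean squared error of a*sin(b*x) vs y."""
    a, b = a0, b0
    for _ in range(steps):
        r = a * np.sin(b * x) - y                    # residuals
        ga = 2 * np.mean(r * np.sin(b * x))          # d(MSE)/da
        gb = 2 * np.mean(r * a * x * np.cos(b * x))  # d(MSE)/db
        a -= lr * ga
        b -= lr * gb
    return a, b

x = np.linspace(0, 10, 200)
y = 2.0 * np.sin(1.5 * x)                  # "experimental" data: a=2, b=1.5

a_good, b_good = fit(x, y, a0=1.0, b0=1.4)  # frequency guess near the truth
a_bad, b_bad = fit(x, y, a0=1.0, b0=5.0)    # frequency guess far off

print(f"good start -> a={a_good:.3f}, b={b_good:.3f}")
print(f"bad start  -> a={a_bad:.3f}, b={b_bad:.3f}")
```

Starting near the truth recovers a≈2, b≈1.5; starting with the frequency far off, the oscillatory loss landscape traps gradient descent in a local minimum nowhere near the real parameters, exactly the failure mode you see with a bad initial guess in a solver.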
In conclusion, it's important to show that this newly proposed method works well in practice on a few non-trivial problems that NNs can solve, or at least that it can solve some problems that NNs can't.