I can propose an alternate view of things. Not that I'm going to argue it is the only true statement in the world, but I think that for thought to progress, an alternative hypothesis is necessary.
So the proposition is: formal symbolisms can deal only with those problems that were already solved in imprecise human languages.
To invent calculus and orbital mechanics, you first need to talk for several centuries (or thousands of years?) about what position and velocity are; you need to talk your way up to acceleration, and then you need to find a way to measure these quantities and define them in strict geometric terms. Ah, and infinity: it was a very counter-intuitive idea, and Zeno invented some of his paradoxes specifically to point at that counter-intuitiveness. By the time Newton came along, all these talks and debates had done most of the work for him.
> the ability to understand the human tongue is insignificant compared to the power of math.
But the funny thing is: you cannot tell whether someone understands math if they do not also understand human language. You cannot teach math to someone who cannot speak a human language.
Math is the cream on top, with limited applicability. What can math say about love? I don't like sounding like Dumbledore, but really, behind everything we do there are emotions motivating us. Math cannot deal with emotions, both because it was built that way and because non-mathematical talk about emotions hasn't yet produced a good model of them that math could express in a formal language.
> Dijkstra says
I wonder when he said it? Before logic-based expert systems were acknowledged to be a failure in AI, or after?
> To invent calculus and orbital mechanics, you first need to talk for several centuries (or thousands of years?) about what position and velocity are; you need to talk your way up to acceleration, and then you need to find a way to measure these quantities and define them in strict geometric terms. Ah, and infinity: it was a very counter-intuitive idea, and Zeno invented some of his paradoxes specifically to point at that counter-intuitiveness. By the time Newton came along, all these talks and debates had done most of the work for him.
For the sake of argument, let's grant your story about what you need to invent calculus.
But once you have invented calculus, you can then use it to solve all kinds of problems that you would never in a thousand years be able to handle with mere talk.
Not "all kinds of problems" but very specific kinds of problems: those that can be formalized in a mathematical language. How would you go about inventing thermodynamics if you didn't know the words "temperature" and "pressure"? You'd need to start from your senses, which can tell you "this surface is hot", or "this one is cold", or "this one is colder than that". You'd need to decide that "coldness" is "negative heat" (not the most obvious idea for an animal, because animals have receptors for cold as well as receptors for heat; you could feel hot and cold at the same time if you managed to stimulate both kinds of receptors at once). Then you'd need to notice that some materials change volume when heated, then come up with the idea of using measurements of volume to measure temperature, and only then could you try to invent pV = nRT, which becomes almost tautological at that point, because your operational definition of temperature makes it equivalent to a volume.
After that you really can use calculus and make all sorts of quantitative statements about thermodynamic systems. But before all that "mere talk" was finished, thermodynamics was not the kind of problem calculus could deal with.
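Once the words and operational definitions exist, the ideal gas relation really is a one-liner of calculation. A minimal sketch (the function name and the example figures are my own illustration, not anything from this thread):

```python
# Ideal gas law pV = nRT, solved for temperature.
R = 8.314  # gas constant, J/(mol*K)

def temperature(p, V, n):
    """Pressure p in Pa, volume V in m^3, amount n in mol; returns T in K."""
    return p * V / (n * R)

# One mole at atmospheric pressure (101325 Pa) in roughly 22.4 liters
# comes out near 273 K, i.e. around 0 degrees Celsius.
print(temperature(101325, 0.0224, 1.0))
```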
However, this paper is evidence that the field is figuring out how to build what's actually needed, which is a good thing.
If you want to measure its ability to do mindlessly repetitive tasks without diverging from instructions, you should compare it to humans doing the same, not expect it to act like a calculator.
If you want to measure its ability to solve problems that involve many such steps that are simple to express but tedious to carry out, ask it to write and evaluate code to do it instead.
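As a sketch of what "write and evaluate code" means here (a hypothetical illustration, reusing the multiplication example from this thread):

```python
# Instead of asking the model to multiply in its head, have it emit
# an expression and let the interpreter evaluate it deterministically.
expression = "69 * 94"
result = eval(expression)  # computed by Python, not predicted token-by-token
print(result)  # 6486
```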
If you include a large amount of properly solved math in its training text, it gets MUCH better at that kind of math.
It has a very deep set of intelligences that are alien to us, that allow it to predict and ACT LIKE us, when it comes to generating the next word. You're only seeing the output of those intelligences through a very lossy channel.
As a side note, there are structures in human language that apparently encode much more information than you might think at first glance. The fact that Word2Vec had such mathematical properties, despite its relative simplicity, astounds me to this day. That throwing a bunch of sine/cosine values on top of that to represent position in a sentence is enough to enable LLMs is also amazing.
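For the curious, the sine/cosine trick is the sinusoidal positional encoding from the original Transformer paper; a minimal sketch (the dimension count here is an arbitrary choice for illustration):

```python
import math

def positional_encoding(pos, d_model=8):
    # Even dimensions get sin, odd ones cos; the wavelengths form a
    # geometric progression, so each position gets a distinct pattern.
    pe = []
    for i in range(0, d_model, 2):
        angle = pos / (10000 ** (i / d_model))
        pe.append(math.sin(angle))
        pe.append(math.cos(angle))
    return pe

print(positional_encoding(0))  # position 0 -> alternating [0.0, 1.0, ...]
```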
- The result of 69*94 is 6466.