cf. https://en.wikipedia.org/wiki/Divine_Proportions:_Rational_T...
But it's because the sine of 60 degrees is said by modern tables to be equal to sqrt(3) / 2, which Wildberger doesn't "believe in"; he prefers to state that the square of the sine is actually 3 / 4, and that this is "more accurate".
The actual paper is at [1]:
The news from this paper (thanks for the link!) is that evidently the Babylonians preferred that, too. Surely Pythagoras would have.
But how do you actually do anything useful with this ratio ¾? Like, calculating the height of a ziggurat of a given size whose sides are 60° above the horizontal? Well, that one in particular is pretty obvious: it's just the Pythagorean theorem, which lets you do the math precisely, without any error, and then at the end you can approximate a linear result by looking up the square root of the "quadrance" in a table of square roots, which the Babylonians are already known for tabulating.
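To make that concrete, here's a minimal sketch in Python (the 20-cubit side length is made up for illustration): carry the squared quantities as exact rationals the whole way, and take a single approximate square root only at the very end, standing in for the Babylonian table lookup.

    from fractions import Fraction
    from math import sqrt

    # Hypothetical ziggurat: slant side of 20 cubits, rising at 60 degrees.
    # The square of sin(60 deg) is exactly 3/4, so the squared height
    # ("quadrance") is (3/4) * side**2, in pure rational arithmetic, no error.
    side_quadrance = Fraction(20) ** 2                  # 400, exact
    height_quadrance = Fraction(3, 4) * side_quadrance  # 300, exact

    # Only the final step approximates, standing in for the Babylonian
    # table of square roots: height = sqrt(300) = 17.3205... cubits.
    print(height_quadrance, sqrt(height_quadrance))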
For more elaborate problems, well, Wildberger wrote the book on that. Presumably the Babylonians had books on it too.
Personally I don't believe in either value. I prefer to state that the sine of 60 degrees is 2.7773. I believe that is more accurate.
Re: rationals, I mean there are infinitely many rationals available arbitrarily near any other rational; that has to mean they are good enough for all practical purposes, right?
For practical purposes, they’re bad. Denominators tend to explode when you do a few operations (for example 11/123 + 3/17 = 556/2091, already in lowest terms), and it’s not easy to spot whether you can simplify results: 12/123 + 3/17 = 191/697, for example, where the naive sum 573/2091 happens to reduce by 3.
You can counteract this by ‘rounding’ to fractions with denominators below a given limit (say 1000), but then you are likely better off reckoning with a fixed denominator, which you then do not have to store with each number, allowing you to increase the maximal denominator.
For example (https://en.wikipedia.org/wiki/Farey_sequence), there are 965 rational fractions in [0,1] with denominator at most 56 (https://oeis.org/A005728/list), so storing one requires just under 10 bits. If you use the fractions n/964 for 0 ≤ n ≤ 964 as your representable numbers, arithmetic becomes easier.
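Both effects are easy to play with in Python: the stdlib's Fraction.limit_denominator does exactly the "round to a bounded denominator" step, and the fixed-denominator scheme reduces to plain integer arithmetic (to_grid below is just an illustrative helper, not a standard API).

    from fractions import Fraction

    # Exact rational arithmetic: denominators explode quickly.
    print(Fraction(11, 123) + Fraction(3, 17))   # 556/2091
    print(Fraction(12, 123) + Fraction(3, 17))   # 191/697

    # Rounding to a bounded denominator: the nearest fraction with
    # denominator <= 56, i.e. the nearest member of the Farey sequence F_56.
    x = Fraction(556, 2091)
    print(x.limit_denominator(56))

    # Fixed-denominator alternative: represent each number by n in n/964,
    # so the denominator never has to be stored and addition is integer
    # addition on the numerators.
    def to_grid(f, d=964):
        return round(f * d)

    print(to_grid(x), to_grid(Fraction(3, 17)))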
After all, the Cayley-Dickson construction is not an infinite affair.
To defend Wildberger a bit (because I am an ultrafinitist) I'd like to state first that Wildberger has poor personal PR ability.
Now, as programmers here, you are all natural ultrafinitists as you work with finite quantities (computer systems) and use numerical methods to accurately approximate real numbers.
An ultrafinitist says that that's really all there is to it. The extra axiomatic fluff about infinities existing is logically unnecessary to do all the heavy lifting of the math that we are familiar with. Wildberger's point (and the point of all ultrafinitist claims) is that it's an intellectual and pedagogical disservice to teach and speak of, e.g., Real Numbers as if they actually involve infinite quantities that you can never fully specify. We are always going to have to confront the numerical-methods part, so it's better to make teaching about numbers methodologically aligned with how we actually measure and use them.
I have personally been working on building various finite equivalents to familiar math. I recommend that anyone read Radically Elementary Probability Theory by Nelson to get a better sense of how to do finite math, at least at the theoretical level. Once again, on a practical level, when it comes to directly computing quantities we've only ever done finite math.
As long as someone isn't a crank (e.g. they aren't creating false proofs) I enjoy the occasional outsider.
The standard way of setting up calculus involves continuous magnitudes, hence irrational quantities, and obviously that's used all over physics and there doesn't seem to be a problem with it.
I think to make a compelling case for a finitist foundation for maths you would at the least have to construct all of the physically useful maths on a finitist basis.
Even if you did that, you would still need to show somewhere that this finitist foundation disagrees with the results obtained by the standard foundation; otherwise there's no reason to think the standard foundation is in error.
Well, these are probably easy to find even now? E.g. the Banach–Tarski paradox is unlikely to be provable in finitist math, which is somewhat of an improvement.
This is so true but it can be good if you're flexible enough to try it either way.
With massive tables of physical properties officially produced by pages of 32-bit Fortran, floating point really did look ideal at first, because it worked great.
The algorithm had been stored as a direct mathematical equation, plain as day, exactly as deduced, with constants and operations in 32-bit floating point.
But when the only user-owned computers were still just 8-bit machines, there was no way, using floating point, to reproduce the exact results across the entire table to the same number of significant figures.
Since it's a table it is of course not infinite, and a matrix to boot. A matrix of real numbers across an entire working spectrum.
The algorithm takes a set of input values, calculates results as defined, and rounds them off repeatably in the subsequent logic before output, so everyone can get agreement. The software, OTOH, takes a range of input values and outputs a matrix. And/or retains a matrix in "imaginary" spreadsheet form for later use :)
Every single value in the matrix is a floating-point representation of a real number, but they are rounded off as precisely as possible to the "exact" degree of usefulness, making them functionally all finite values in the end. This took a lot of work from top mathematicians, computer scientists, and engineers. And as designed, the matrix then carries the algorithm on its own without reference to the fundamental equation.
The solution turned out to involve working backward from the matrix, iteratively, until an alternate algorithm was found using only integers for values and operations, up until the final rounding and fixed-point representation at the end. It was a dramatically unrecognizable algorithm, but it worked, and it took only 0.5 kilobytes of 8-bit BASIC code, a fraction of the original Fortran.
This time, the feature that showed up without any extra effort was that precision scaled directly with the bitness of the computer, with no need for floating point at all. Of course the Fortran code accomplished this too, by wise use of floating point, but it took much bigger iron to do so, and that wasn't going to be battery powered any time soon way back then.
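For flavor, here's a toy in Python of the same integer-only trick (not the actual algorithm from the story, which isn't given): keep every intermediate an exact integer and do one conversion at the end, so any machine with exact integer arithmetic reproduces the table bit-for-bit, and precision grows just by raising the scale factor.

    from math import isqrt

    SCALE = 10_000                      # 4 fixed decimal places

    def fixed_sqrt(num, den):
        """floor(sqrt(num/den) * SCALE), computed with integers only."""
        return isqrt(num * SCALE * SCALE // den)

    # sin(60 deg) = sqrt(3/4): every machine prints the same 0.8660,
    # with no floating point anywhere.
    v = fixed_sqrt(3, 4)
    print(f"{v // SCALE}.{v % SCALE:04d}")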
>somewhere this finitist foundation disagrees with the results obtained by the standard foundation,
>there's no reason to think the standard foundation is in error.
This is "exactly" how it was. There were disagreements all over the place, but they were in further decimal places not representable by the table. The standard was an international one, with carefully agreed-upon accuracy & precision as defined by the Fortran, which really worked and was then written in stone; any unmatched output was a notable failure.
We think we study the real numbers, but it seems we can't even have a system to express them. And indeed, that's not even a limitation of algebraic systems: any notation over a finite alphabet can only express a countable set of distinct objects, which amounts to nothing where the real numbers are concerned.
I'm not a finitist, but I do find it curious that we approach mathematics by inventing an uncountable set of objects that's impossible to fully grasp. I don't see it as a bad thing, though; I also love complex analysis, and many people (even some mathematicians) denounce the complex numbers for being imaginary. My impression is that transcendental numbers are just as imaginary as imaginary numbers; it's just that we don't notice. And they're obviously still useful, as are the complex numbers.
I can tell you that it is the output of a function, not a distinct entity that exists on its own independently of the computation.
The whole point is that as a theory for the foundations of mathematics, you do not need to assume numbers with infinitely long decimal expansions in order to do math.
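One way to make the "output of a function" reading concrete (a sketch of the constructive-reals idea, not Wildberger's own formalism): a real number is just a rule that, given n, returns a rational within 10^-n of the value. You never hold an infinite expansion, only the program.

    from fractions import Fraction

    def sqrt2(n):
        """A rational within 10**-n of sqrt(2), by integer Newton iteration."""
        target = 2 * 10 ** (2 * n)      # we want floor(sqrt(target))
        x = target
        while x * x > target:
            x = (x + target // x) // 2  # classic integer Newton step
        return Fraction(x, 10 ** n)

    print(sqrt2(10))                    # 14142135623/10000000000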
Topology, i.e. the analysis of connectivity, is built upon the notion of continuity and infinite divisibility, which seems to be difficult to handle in an ultrafinitist way.
Topology is an exceedingly important branch of mathematics, not only theoretically (I consider some of the results of topology very beautiful) but also practically, as a great part of engineering design work consists of solving problems where only the topology matters, not the geometry, e.g. in electronic schematic design.
So I would consider any framework for mathematics that does not handle topology well to be incomplete and unusable.
Ultrafinitist theories may be interesting to study as an alternative, but the reality is that infinitesimal calculus in its modern rigorous form does not need any alternatives: it works well enough, and so far I have not seen alternatives that are simpler, only alternatives that are more complicated, without benefits sufficient to justify that.
I also wonder what ultrafinitists do about projective geometry and inversive geometry.
I consider projective geometry one of the most beautiful parts of mathematics. When I encountered it for the first time, when very young, it was quite a revelation, due to the unification it allows between various concepts that are distinct in classical geometry. Projective geometry is based on completing the affine spaces with various kinds of subspaces located at an "infinite" distance.
Without handling infinities, and without visualizing what various curves located at infinity look like (as parts of surfaces that can be seen at finite distances), projective geometry would become very hard to understand, even if one duplicated its algorithms while avoiding the names related to "infinity".
Similarly for inversive geometry, where the affine spaces are completed with points located at "infinity".
Such geometries are beautiful and very useful, so I would not consider as usable a variant of mathematics where they are not included.
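That said, the points at infinity do have a perfectly finite representation: homogeneous coordinates. A small sketch in Python of how an ultrafinitist might still get projective geometry, with "infinity" reduced to a coordinate being zero:

    # A projective point or line is an integer triple (x : y : w); the
    # affine point (x, y) is (x, y, 1), and w = 0 marks a point at infinity.
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    # The line through two points and the meet of two lines are both
    # cross products; parallel lines then meet at a w = 0 point.
    line1 = cross((0, 0, 1), (1, 1, 1))  # the line y = x
    line2 = cross((0, 1, 1), (1, 2, 1))  # the line y = x + 1
    print(cross(line1, line2))           # (-1, -1, 0): a point at infinity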
We use numbers in compact decimal approximations for convenience. Repeated rational series are cumbersome without an electronic computer and useless for everyday life.
The point is to not confuse the notational convenience with the underlying concept that makes such numbers comprehensible in the first place.
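Reading "repeated rational series" as the repeating decimal expansions of rationals: the repetend is completely mechanical, falling straight out of long division, as in this sketch (cumbersome by hand, trivial for the computer):

    def repeating_decimal(p, q):
        """Expand p/q as (integer part, non-repeating digits, repetend)."""
        whole, r = divmod(p, q)
        digits, seen = [], {}
        while r and r not in seen:      # a remainder must repeat within q steps
            seen[r] = len(digits)
            r *= 10
            digits.append(str(r // q))
            r %= q
        if not r:                       # expansion terminates
            return whole, "".join(digits), ""
        i = seen[r]                     # where the cycle began
        return whole, "".join(digits[:i]), "".join(digits[i:])

    print(repeating_decimal(1, 7))      # (0, '', '142857')
    print(repeating_decimal(3, 4))      # (0, '75', '')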
https://scispace.com/pdf/words-and-pictures-new-light-on-pli...
Robson's argument is that it isn't a trig table in the modern sense and was probably constructed as a teacher's aid for the completing-the-square problems that show up in Babylonian mathematics. Other examples of teaching-related tablets are known to exist.
On a quick scan, it looks like the Wildberger paper cites Robson's and accepts the relation to the completing-the-square problem, but argues that the tablet's numbers are too complex to have been practical for teaching.
A little off-topic, but as a non native English speaker this sentence in the article made me look up whether there’s scientific consensus that Noah’s Ark has been found and I’d just never heard about it. Turns out there isn’t, and the end of the sentence actually refers to the tablet. Was still a fun rabbit hole to go down.