We know from the functional busy beaver [1] that Graham behaviour can come surprisingly early; a 49-bit lambda term suffices. There are only 77519927606 closed lambda terms of at most that size [2], compared to 4^12*23836540=399910780272640 unique 6-state Turing Machines [3].
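Trivial, but for anyone who wants to check the multiplication above (the two factors themselves are taken from the cited sources, not derived here), a one-liner:

```python
# Quick check of the arithmetic quoted above.
print(4**12)              # 16777216
print(4**12 * 23836540)   # 399910780272640 -- matches the figure quoted
```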
With the achievement of pentation in only 6 states, several people now believe that 7 states should suffice to surpass Graham's number. I would still find that rather surprising. A few days ago, I made a large bet with one of them on whether we would see a proof of BB(7) > Graham's number within the next 10 years.
What do people here think?
BB has to grow faster than any computable sequence. What exactly that means concretely for BB(7) is nothing other than handwaving, but it roughly means it needs to walk up the "operator strength" ladder very quickly: it eventually needs to grow faster than any computable operator we define (including, for example, up-arrow^n, and up-arrow^f(n) for any computable f).
My gut feeling is that the growth between 47 million and 2^^2^^2^^9 is qualitatively larger than the growth between 2^^2^^2^^9 and Graham's number in terms of how strong an operator we need (with Graham's number being g_64, and g here being roughly one step "above" up-arrow^n). So probably we should have BB(7) > Graham's number.
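For anyone who hasn't internalized the up-arrow ladder being discussed, here's a minimal Python sketch of Knuth's up-arrows and the g-sequence (g_64 = Graham's number). It's purely illustrative: only the tiny sanity checks at the end actually terminate.

```python
def up(a, n, b):
    """Knuth's up-arrow: a ↑^n b, defined recursively.
    Anything beyond tiny inputs will not finish (or will blow the stack)."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

def graham(steps=64):
    """g_1 = 3 ↑^4 3, g_{k+1} = 3 ↑^{g_k} 3; Graham's number is g_64.
    Only here to show how the arrow count itself grows -- calling this just blows the stack."""
    g = up(3, 4, 3)
    for _ in range(steps - 1):
        g = up(3, g, 3)
    return g

# Small sanity checks that do terminate:
print(up(2, 1, 3))  # 2^3 = 8
print(up(2, 2, 3))  # 2↑↑3 = 2^(2^2) = 16
print(up(3, 2, 3))  # 3↑↑3 = 3^27 = 7625597484987
```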
Your proof rests primarily on this assertion:
> BB has to grow faster than any computable sequence.
This is almost true! BB(n) has to grow faster than any computable sequence _defined by an n-state Turing machine_. That last part is really important. (Note that my restatement is probably incorrect too, it is just correct enough to point out the important flaw I saw in your statement). This means that up-arrow^f(n) _can_ be larger than BB(n) — up-arrow^f(n) is not restricted by a Turing machine at all. As an easy example, consider f(n) = BB(n)^2.
You may still be right about BB(7) being bigger than Graham's number, even if your proof is not bulletproof.
Thus, any proof that BB(748) = N must either show that TM_ZFC_INC halts within N steps or never halts. By Gödel's famous results, neither of those cases is possible if ZFC is assumed to be consistent.
BB(748) is by definition a finite number, and it has some value - we just don't know what it is. If an oracle told us the number, and we ran TM_ZFC_INC that many steps, we would know for sure whether ZFC was consistent or not based on whether it terminated.
The execution of the Turing machine can be encoded in ZFC, so it really is the value of BB(748) that is the magic ingredient. Somehow even knowledge of the value of this finite number is a more potent axiomatic system than any we've developed.
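Here's a hedged Python sketch of that argument, just to make the "oracle" step concrete. The three helper functions are hypothetical stand-ins (enumerating ZFC proofs, checking them, bounding a run) and are not real machinery.

```python
def enumerate_zfc_proofs():
    """Hypothetical: yield every ZFC proof in some fixed effective order."""
    raise NotImplementedError

def derives_contradiction(proof):
    """Hypothetical: check whether `proof` derives '0 = 1' from the ZFC axioms."""
    raise NotImplementedError

def run_at_most(machine, steps):
    """Hypothetical: run `machine` for at most `steps` steps; return True iff it halted."""
    raise NotImplementedError

def tm_zfc_inc():
    """The 748-state machine: halts if and only if ZFC is inconsistent."""
    for proof in enumerate_zfc_proofs():
        if derives_contradiction(proof):
            return proof  # found a contradiction, so halt
    # if ZFC is consistent, this loop never ends

def settle_con_zfc(bb_748_from_oracle):
    """Given BB(748) from an oracle, decide Con(ZFC) in finitely many steps."""
    halted = run_at_most(tm_zfc_inc, bb_748_from_oracle)
    return not halted  # no halt within BB(748) steps => it never halts => ZFC is consistent
```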
Isn't it more accurate to say that any proof that BB(748) = N in ZFC must either show that TM_ZFC_INC halts within N steps, or never halts?
Meaning, it's totally possible to prove that BB(748) = N, it just can't be done within the axioms of ZFC?
I guess it's also hard when we have an arbitrary Turing machine and have to prove that what it's doing isn't equivalent to trying to prove an undecidable statement.
It's BB(n) that is uncomputable (that is, there's no algorithm that outputs the value of BB(n) for arbitrary n).
BB(748) is computable. It's, by definition, the number of ones written by some Turing machine with 748 states. That is, this machine computes BB(748).
> It feels like a category error or something.
The number itself is just a literally unimaginably large number. Independence of ZFC comes in when we try to prove that this number is the number we seek. And to do that you need a theory more powerful than ZFC to capture the properties of a Turing machine with 748 states.
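To ground "the number of ones written by some Turing machine", here's a tiny self-contained simulator (my own sketch, not anything from the thread) running the well-known 2-state champion, which halts after 6 steps with 4 ones on the tape, i.e. Σ(2) = 4. The same loop, pointed at the right 748-state table, would in principle compute BB(748); the catch is that ZFC can't prove which table is the champion.

```python
def run_tm(table, max_steps=10**6):
    """Simulate a 2-symbol Turing machine given as {(state, symbol): (write, move, next_state)}.
    Returns (steps, ones_on_tape) if it halts within max_steps, else None."""
    tape, pos, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        if state == 'H':                      # halting state
            return step, sum(tape.values())
    return None                               # did not halt within the budget

# The 2-state busy beaver champion: halts after 6 steps with 4 ones (Σ(2) = 4).
bb2 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}
print(run_tm(bb2))  # (6, 4)
```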
As soon as Gödel published his first incompleteness theorem, I would have thought the entire field of mathematics would have gone full throttle on trying to find more axioms. Instead, over the almost century since then, Gödel’s work has been treated more as an odd fact largely confined to niche foundational studies rather than any sort of mainstream program (I’m aware of Feferman, Friedman, etc., but my point is there is significantly less research in this area compared to most other topics in mathematics).
Statements that are independent of ZFC are a dime a dozen when doing foundations of mathematics, but they're not so common in many other areas of math. Harvey Friedman has done interesting work on finding "natural" statements that are independent of ZFC, but there's dispute about how natural they are. https://mathoverflow.net/questions/1924/what-are-some-reason...
In fact, it turns out that a huge amount of mathematics does not even require set theory, it is just a habit for mathematicians to work in set theory. https://en.wikipedia.org/wiki/Reverse_mathematics.
But why? Gödel's theorem does not depend on the number of axioms but on them being recursively enumerable.
ZFC is way overpowered for that. https://mathoverflow.net/questions/39452/status-of-harvey-fr...
Two lenses for trying to understand this are potentially Chaitin's limits on the output of a Lisp program being more complex than the program itself [1], or Markov's proof that you can't classify manifolds in d ≥ 4.
If you try the latter and need/want to figure out how the Russian school is so different, this is helpful [2].
IMHO the former gives an intuition for the why, and the latter actually explains it.
In ZFC, C actually ends up implying PEM, which is why using constructivism as a form of reverse math helped it click for me.
This is because, in the presence of excluded middle, every sequentially complete metric space is a complete space, and we tend to care about useful things. For me, just how huge the search space grows was hidden by the typical (and useful) a priori assumption of PEM.
If you dislike the constructive approach, or don't want/have to invest in learning an obscure school of it, this recent paper [3] on the limits for finding a quantum theory of everything is another lens.
Yet another path is through Type 2 TMs and the Borel hierarchy, where you can have an uncomputable number on the input tape but your algorithms themselves cannot use it, while you can still produce uncomputable numbers by randomly selecting and/or changing an infinite sequence.
Really it is the difference between expressibility and algorithms working within what you can express.
Hopefully someone else can provide more accessible resources. I think a partial understanding of the limits of algorithms and computation will become more important in this new era.
[1] https://arxiv.org/abs/chao-dyn/9407003 [2] https://arxiv.org/abs/1804.05495 [3] https://arxiv.org/abs/2505.11773
Related: It's incorrect to claim that each machine either halts or doesn't halt. To know that that dichotomy holds would require having a halting problem algorithm.
BB(748) is very similar, in that I'd call it a 'definition' independent of ZF rather than a 'number' independent of ZF.
* ZFC is a set of axioms. A "model" is a structure that respects the axioms.
* By Gödel's completeness theorem, we know that ZFC proves a statement if and only if the statement is true in all models of ZFC.
* Therefore, the statement "BB(748) is independent of ZFC" is the same as the statement "There are two different models of ZFC in which BB(748) is two different numbers."
* We can take one of these to be the "standard model"[1] that we all think of when we picture a Turing Machine. However, the other would be a strange "non-standard" model that includes finite "natural numbers" that are not in the set {0,1,2,3,...} and it includes Turing Machines that halt in "finite" time that we would not say halt at all in the standard model.
* So BB(748) is indeed a number as far as the standard model is concerned, the problem only comes from non-standard models.
TL;DR this is more about the fact that ZFC axioms allow weird models of Turing Machines that don't match how we think Turing Machines usually work.
[1] https://en.wikipedia.org/wiki/Non-standard_model_of_arithmet...
But there is also a function g for which you cannot prove whether g() = n.
Important distinction.
This means that somebody could claim that the value of BB(748) = n but you cannot be sure if they are correct (but you might be able to show they are wrong).
If you are referring to the idea that nothing that can't exist in the real universe "really exists", then the "Busy Beaver" portion of that idea is extraneous, as 100% of integers can't exist in the real universe, and therefore, 100% of integers are equally just "mathematical concepts". That one of them is identified by BB(748) isn't a particularly important aspect. But certainly, a very specific number is identified by that designation, though nothing in this universe is going to know what it is in any meaningful sense.
We just can't prove which number it is, we don't know which of the machines halt.
With this definition, we can say that "ZFC is inconsistent" is semidecidable: you run a program that searches for a contradiction.
The question BB(748) ≠ 1000 is similarly semidecidable. You can run a program that will rule out 1000 if it is not BB(748).
So they are in the same "category", at least regarding their undecidability.
Also, if you turn "ZFC is consistent" into a number: {1 if ZFC is consistent; 0 if ZFC is inconsistent}, you will see that BB(748) is not very different: both are defined (well, equivalently) using the halting of Turing machines, or the result of an infinite search.
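A hedged sketch of the kind of search being described, covering the direction that is genuinely semidecidable: if BB(748) > 1000, dovetailing over step budgets will eventually exhibit a 748-state machine that halts having written more than 1000 ones, ruling 1000 out. The two helpers are hypothetical stand-ins, not real code.

```python
from itertools import count

def all_748_state_machines():
    """Hypothetical: yield every 748-state, 2-symbol machine (a finite, if astronomical, list)."""
    raise NotImplementedError

def ones_if_halted(tm, steps):
    """Hypothetical: run `tm` for at most `steps` steps; return ones written if it halted, else None."""
    raise NotImplementedError

def rule_out_1000():
    """Returns a witness machine iff some 748-state machine halts with > 1000 ones
    (i.e. iff BB(748) > 1000); otherwise searches forever."""
    for budget in count(1):                    # dovetail over ever-larger step budgets
        for tm in all_748_state_machines():
            ones = ones_if_halted(tm, budget)
            if ones is not None and ones > 1000:
                return tm                      # witness that BB(748) != 1000
```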
I'm well aware that BB(748) is an integer definable in classical logic. My claim is that "integer definable in classical logic" does not actually correspond well to what people mean by "number" in almost any other setting when pushed to extremes such as this.
I thought it was a typo. First time I've encountered tetration.
When arithmetic is introduced just as a way to, for example, count money, it's more directly practical in the moment, but you're not seeing the larger pattern.
I don't get this part. Is it really rounding away the volume of the observable universe divided by the average volume of a grain of sand? That is many more orders of magnitude than the amount of mass in the universe, which is a more usual comparison.
10↑↑10,000,000 / (sand grains per universe) is vastly larger than, say, 10↑↑9,999,999
So in the system we're using to write these numbers, there's really no better way to write (very big) / (only universally big) than by writing exactly that, and in the notation for very big, it pretty much rounds to just (very big).
In significant figures, 1.0 billion minus 1.0 million equals 1.0 billion.
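To make the "rounds away" point concrete one rung down the tower (my own toy numbers, not anything from the thread): even dividing 10^(10^10) by a generous 10^120 barely dents the exponent.

```python
# 10^(10^10) / 10^120 = 10^(10^10 - 120); look at what happens to the exponent.
tower_exponent = 10**10      # the exponent of 10^(10^10)
divisor_exponent = 120       # generous stand-in for "everything in the universe"

print(tower_exponent - divisor_exponent)                      # 9999999880
print((tower_exponent - divisor_exponent) / tower_exponent)   # 0.999999988 -- still ~10^(10^10)
```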
However many universes are in question, there is a qualitative difference between that many empty universes (with 1 grain each) and that many completely packed with grain.
Ask anybody who lives in one!
Recently on HN (couple of months ago): https://news.ycombinator.com/item?id=43776477
I've pondered that version of the question a bit, but I couldn't get very far due to my lack of expertise in first-order logic. What I do know is that Skelet #17 [0] is one of the toughest machines to prove non-halting on a mathematical level [1], so any theory sufficient to prove that Skelet #17 doesn't halt is likely sufficient to decide the rest of the 5-state machines.
[0] https://bbchallenge.org/1RB---_0LC1RE_0LD1LC_1RA1LB_0RB0RA
Oh! Of course! That sure clears things up for this non-expert. This is clearly a hardcore blog for people who have been doing this kind of research for decades. Kind of awesome to stumble upon something so unapologetically dense and jargony and written for a very specific audience!
Is it niche jargon? Absolutely. But to say it's only accessible to people who have put in decades is selling yourself short.
People can't visualize numbers that big. There are more ways to express numbers than just counting them. For example, a single grain of sand has infinitely many states it can be in (there are infinitely many real numbers), so you could say a single grain of sand could represent BB(6). Combinations can grow exponentially, so that may be a useful way to try to express it.
I.e., how well can a system fake being consistent before that fact is discovered? An inconsistent system faking consistency via BB(3) will be “found out” much quicker than a system faking consistency via BB(6). (What I mean by faking consistency is claiming that all programs that run longer than BB(n) steps for some n never halt.)
Using infinite precision to make things seem tractable is sleight of hand in my book. Stick with integers when you're describing scale.
The mass-energy includes ordinary matter, dark matter, and dark energy. Current estimates suggest the observable universe contains roughly 10^53 kg of mass-energy equivalent.
Plugging these into S ≤ 2πER/ℏc gives something on the order of 10^120 bits of maximum information content.
S ≤ 2πER/ℏc
S ≤ (2 × 3.141593 × 3.036e+71 × 4.399e+26)/(1.055e-34 × 299792458)
S ≤ 2.654135e+124
S ≤ 10^120
So, no.
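For anyone who wants to redo the plug-in, a short sketch using the same figures quoted above (E in joules, R in metres):

```python
import math

E = 3.036e71        # mass-energy of the observable universe in joules (figure used above)
R = 4.399e26        # radius of the observable universe in metres (figure used above)
hbar = 1.055e-34    # reduced Planck constant, J*s
c = 299792458       # speed of light, m/s

S = 2 * math.pi * E * R / (hbar * c)   # Bekenstein-style bound
print(f"S <= {S:.6e}")                  # ~2.65e+124, matching the figure above
```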
Look at 3 sub 10, which is 10^(10^10). So that is 10 to the power of 10 billion. In regular decimal notation, that is a "1" with 10 billion "0"s following it. It takes 10 gigabytes of RAM to represent the number in decimal notation, naively.
The number of atoms in the universe is only 10^80, or 1,000...000 (80 zeroes). 10-million sub 10 is so huge, imagine how much RAM it would take to represent it.
This example is from https://www.statisticshowto.com/tetration-function-simple-de...
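A small sketch of the tetration being described (my own illustration; "n sub 10" above is a tower of n tens). Only the first couple of levels are materializable; level 3 is already the 10-gigabytes-of-digits number from the comment.

```python
def tetrate(base, height):
    """base ↑↑ height: a power tower of `height` copies of `base`."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tetrate(10, 2))      # 10^10 = 10,000,000,000
# tetrate(10, 3) is 10^(10^10): its decimal expansion has 10^10 + 1 digits,
# i.e. roughly 10 GB just to write the digits down, as the comment says.
digits_of_level_3 = 10**10 + 1
print(digits_of_level_3)   # ~1e10 digits, ~10 GB at one byte per digit
```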
You can convert every atom of the observable universe into a substrate for a supercomputer, you can harness the energies of supermassive black holes to power it, but running a humble BB(6) machine to its halting state would be forever out of its reach.
Unlike Aaronson, he actually is at the forefront of Busy Beaver research, and is one of the people behind the https://bbchallenge.org website.
Extremely bad ad hominem; I enjoyed Aaronson's read, nothing wrong with it.
Colloquially, I understand it's easy to think it means "saying something about someone that could be interpreted negatively" because that's the context it is read in when it is used.
The meaning is saying a logical argument is incorrect because of who wrote the argument.
I spent 5 minutes trying to verify that any link in the post above links to Scott Aaronson, or mentions him, and found nothing. :\ (both the sligocki site and, when I found nothing there, the busy beaver site)