[0] https://en.wikipedia.org/wiki/Computational_irreducibility
I am a computational person, not a scientist, and my impression is that scientists consider what he says to be nonsense. That seems like a correct assessment to me: his model of the world, from a physics perspective, seems wrong. Nevertheless, I personally find his computational lens/bias to be useful.
[0] https://www.kaggle.com/c/conways-reverse-game-of-life-2020/o...
"And indeed, what we’ll often see is that the more optimized a structure is, the less modular it tends to be. If we’re going to construct something “by hand” we usually need to assemble it in parts, because that’s what allows us to “understand what we’re doing”. But if, for example, we just find a structure in a search, there’s no reason for it to be “understandable”, and there’s no reason for it to be particularly modular."
If I were to sum up what I will politely call Wolfram's Conjecture, it is this: there is some other way to start from an understanding of cellular automata and derive from it the ability to model systems, and presumably to build systems in cellular automata, that are not characterized by the modularity shown in human constructions. Something like chaos mathematics, but for cellular automata; and presumably something that goes beyond statistical characterization into a deeper understanding, somehow analogous to our ability to deeply understand modular systems.
For the purposes of what I'm saying here, I'm virtually equating "understanding" with "ability to meaningfully manipulate". You or I may not be able to manipulate one of the human-constructed systems in Life, but there is a path to learning how that is observably human-comprehensible and capable of being conveyed through human communication, because it has been. By contrast, the article shows many cases of "humans constructed this system with property X, and by random search we found a much more efficient system with that property, but the result is not something a human could ever have reasonably designed".
What I may less charitably call "Wolfram's Mistake" is that there is at the moment frankly no evidence that I can see that such a thing is true, despite him having been searching for it for a very long time.
There are two angles on that: One, is there any such understanding that is accessible to human cognition? And two, is there any such understanding at all, without that limitation, but requiring some finite level of intelligence significantly smaller than the level necessary to simply brute-force the problem?
Or, to put it another way, it is not hard to hypothesize intelligences that could just glance at a description of Life and then simply simulate systems internally, directly looking for whatever you like. You may consider them something like "what if a human had a modern computer built directly into their brain?" They would have vastly, vastly more raw computational power than a human for this purpose, but is there any sort of understanding of cellular automata in general, or even Life in particular, that is meaningfully more compact than simply running the automaton?
If you want to dig more into what it means to "understand" something, I refer you to "Why Philosophers Should Care About Computational Complexity" https://www.scottaaronson.com/papers/philos.pdf rather than trying to elaborate further in this post. An "understanding" of this sort of cellular automaton, whatever it may be, and no matter how alien the system that holds it, should be meaningfully more computationally compact than the exponential complexity of simply having a table of all possibilities in memory.
From this perspective, "Wolfram's Mistake" is that there is no such understanding: in the general case, there is fundamentally no way to create a computational model for the phenomena in a cellular automaton that is simpler than the cellular automaton itself.
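To make the "just run it" baseline concrete, here is a minimal sketch of stepping the Game of Life directly (the representation and names are my own choices, not from any particular library). The point is that this loop *is* the most compact predictive model we know of for the system:

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Game of Life (rule B3/S23).

    `live` is a set of (x, y) coordinates of live cells; returns the next
    generation. Per computational irreducibility, there may be no
    meaningfully more compact way to predict the system than running this.
    """
    # Count live neighbors for every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```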
Which means that rather than being the key to understanding, they're actually a really bad lens for us finite beings to view the world through. As incomplete and simplified as our many other lenses may be, at least they work in some cases. Cellular automata don't seem amenable to compact theories of any utility. (And by "any utility" I mean any utility: scientific, mathematical, engineering, practical, all of them.) There are a few places where they're useful, but not very many.
An interesting aspect is that while all Turing-complete systems ultimately exhibit this behavior, they do not all seem to do so equally. Lambda calculus, for instance, has all the same chaos in some sense, if for no other reason than that you can always implement the Game of Life directly in lambda calculus. Yet lambda calculus also admits human understanding; I'd single it out as one of the most human-friendly Turing-complete formalisms, and turning it into an actually useful programming language is on the order of a "useful learning exercise". Turing machines are somewhere in between: we can work with them, but they do tend to explode into chaos relatively easily. Cellular automata, by contrast, require extreme human effort just to get halfway tame. There's an interesting study to be done here about why that is, which is currently well beyond me.
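As one illustration of how human-tractable lambda calculus is, Church numerals and their arithmetic can be written out and checked in a few lines. This is the standard textbook encoding, sketched here in Python's lambda syntax:

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: n(m(f))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

one   = succ(zero)
two   = succ(one)
three = add(one)(two)
assert to_int(three) == 3
assert to_int(mul(two)(three)) == 6
```

Each definition is short, compositional, and checkable in isolation, which is exactly the kind of modular, human-scale reasoning that cellular automata resist.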
Again, looking at it another way, Wolfram's Conjecture is that this is somehow not correct, that we're just not looking at CAs correctly, and that if we did, they might become more useful than Turing machines or perhaps even lambda calculus. And, again, I just don't see the evidence for this.
I think Wolfram could contribute a lot if he shifted his focus towards these domains.
https://the-decoder.com/edge-of-chaos-yale-study-finds-sweet...
Innovation in the real world is often driven by the usual incentives of capitalism, like the basic need to out-compete rivals by improving quality or lowering costs. I don't really think the Game of Life serves as a model for innovation in the real world; it might serve as a model of the pure phenomenon of innovation. In the real world, even things like pure math research are motivated by applied math and by monetary factors like NSF grants, etc.
This was enacted via specific processes (the human brain) using resources and an environment within their constraints, sometimes with surplus. It involved local and global phenomena, some dependent and some independent.
In short, it's nothing like cellular automata or most simplistic models of the world. We'd have to model the above within this world's laws to know what drives innovation among humans in this world.
How would we have gotten to the point in human history when capitalism became predominant if innovation were driven by capitalism?
Also, humanity is hardly a capitalist economy any more; it's more similar to oligarchical capital feudalism.
Thousands of texts written by historians and economists who dedicate their lives to understanding our past are totally wrong.
I spent around 15 years working on stochastic lattice models. They can be amazing. They can also fail to capture the essence of the problem. Same with cellular automata.
They’re definitely not _new_. They weren’t new in 2002. I’ve always viewed Conway’s Game of Life as an interesting deterministic variant of an Ising model, and Ising models date back to the 1920s.
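To make the family resemblance concrete, here is a toy Metropolis sweep for a 2D Ising model. This is my own minimal sketch, not from any paper; note that this Ising sum runs over 4 nearest neighbors while Life counts 8, and the Ising update is stochastic where Life's is a deterministic lookup on the neighbor count:

```python
import math
import random

def metropolis_sweep(spins, beta, rng=random.Random(0)):
    """One Metropolis sweep of a 2D Ising model with periodic boundaries.

    As in Life, each site's update depends only on a sum over its nearest
    neighbors; unlike Life, the flip happens stochastically (controlled by
    the inverse temperature beta) rather than by a fixed deterministic rule.
    """
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins

# At low temperature (high beta), an aligned lattice stays mostly aligned.
spins = metropolis_sweep([[1] * 8 for _ in range(8)], beta=2.0)
assert sum(sum(row) for row in spins) > 0
```

Setting beta to 0 makes every flip accepted (pure noise), and a deterministic threshold on `nb` in place of the Boltzmann test gets you into Life-like territory.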
Most physicists I knew at the time looked at the book, shrugged, and kept working on what they were doing. I love lattice models, cellular automata, lattice fluid models, and the like. But they’re just one class of useful model.
And yet because Wolfram has a perpetual money machine called Mathematica, he’s got a huge megaphone to advocate for himself.
I’ll keep rolling my eyes. I don’t hate the guy, but he is just a little too into self-promotion for my taste.