"just a hobby, won't be big and professional like gnu"
Essentially, he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that runtime behavior is consistent and deterministic.
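To make "a DAG of logical binary operations" concrete, here's a toy sketch (my own illustration, not tinygrad's actual IR): a 1-bit full adder expressed as a graph of AND/XOR/OR nodes, evaluated purely and deterministically.

```python
from dataclasses import dataclass

# Each node is either an input or a logical binary op over other nodes.
# Evaluation is a pure function of the inputs: same inputs, same bits out.
@dataclass(frozen=True)
class Node:
    op: str          # "in", "and", "xor", "or"
    args: tuple = ()

def evaluate(node, inputs, cache=None):
    cache = {} if cache is None else cache
    if node in cache:                      # shared subgraphs evaluated once
        return cache[node]
    if node.op == "in":
        val = inputs[node.args[0]]
    else:
        a = evaluate(node.args[0], inputs, cache)
        b = evaluate(node.args[1], inputs, cache)
        val = {"and": a & b, "xor": a ^ b, "or": a | b}[node.op]
    cache[node] = val
    return val

# A 1-bit full adder as a DAG: sum = a^b^cin, cout = a&b | (a^b)&cin.
a, b, cin = Node("in", ("a",)), Node("in", ("b",)), Node("in", ("cin",))
axb = Node("xor", (a, b))
s = Node("xor", (axb, cin))
cout = Node("or", (Node("and", (a, b)), Node("and", (axb, cin))))

print(evaluate(s,    {"a": 1, "b": 1, "cin": 0}))  # → 0
print(evaluate(cout, {"a": 1, "b": 1, "cin": 0}))  # → 1
```

Scale that graph up far enough and you have, in principle, any fixed-function program; the open question is whether it can be compiled and scheduled efficiently.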
The bit about LLMs is a distraction, in my opinion.
So how is this different from digital logic synthesis for CPLD/FPGA or chip design we have been doing over the last decades?
The idea is to be able to compile and run quickly, the way you can now on your von Neumann machine.
FPGA compile runs can sometimes take days! And of course, chips take months and quite a bit of money for each try through the loop.
No, I do not think future devices will be "boot to neural network." Traditional algorithms still have a place: your robot vacuum cleaner (his example) may still use A* to plan routes, and Quicksort to sort your cleaning runs by energy usage.
> Without CPUs, we can be freed from the tyranny of the halting problem.
Not sure what this means, but I think it still makes sense to have a CPU directing things, as in current architectures. You don't just have your neural engine; you also have your GPU, audio system, input devices, etc., and those need a controller. Something needs to coordinate.
Can someone please explain to me what this even means in this context?
Serious question.
https://news.ycombinator.com/item?id=36074287
You could say he had a history of using big words to talk shit.
He's wrong about using an LLM for general-purpose compute. Replacing exact logic with approximate math isn't a good thing for many use cases. You don't want a database, or the FFT in a radar system, to hallucinate, for example.
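To make the determinism point concrete, here's a toy pure-Python radix-2 FFT (my illustration, not from the article). Run it twice on the same input and you get the same bits every time; that bit-for-bit repeatability is exactly what a radar pipeline or a database index relies on, and what a probabilistic model doesn't guarantee.

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

sig = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
# Deterministic: two runs are bit-for-bit identical, every time.
assert fft(sig) == fft(sig)
# Exact (up to float rounding): the DC bin is the plain sum of the input.
assert abs(fft(sig)[0] - sum(sig)) < 1e-9
```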
My personal focus is on homogeneous, clocked, bit-level systolic arrays.[2] I'm starting to get the feeling the idea is really close to being a born secret,[1] though, as it might enable anyone to make genuinely high-performance chips on any fab node.
If I understand him correctly, if everything becomes a neural network then he expects most neural networks to use Tinygrad
Doesn't every ML framework have that?
It's definitely going to happen, but I don't think it will replace CPUs, much as human brains can't quite replace CPUs at what they are optimised for.
Trying to make out that TinyGrad is leading the charge in this is quite self-indulgent.
In the same way that we can be freed of the tyranny of being able to write a for loop.
https://github.com/tinygrad/tinygrad/blob/master/extra/hip_g...
it is definitely not shippable
We wrote entire NVIDIA, AMD, and QCOM drivers in that style.
https://github.com/tinygrad/tinygrad/blob/master/tinygrad/ru...
https://github.com/tinygrad/tinygrad/blob/master/tinygrad/ru...
https://github.com/tinygrad/tinygrad/blob/master/tinygrad/ru...
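For context on what writing a driver "in that style" involves: talking to a kernel driver from userspace Python means constructing Linux ioctl request numbers yourself. This sketch is my own illustration, not code from those files; the helper functions mirror the kernel's `<asm-generic/ioctl.h>` macros, and `FakeArgs` and the request below are entirely made up.

```python
import ctypes

# Linux ioctl request numbers pack direction, type, number, and argument
# size into one 32-bit word, per <asm-generic/ioctl.h>.
_IOC_NRSHIFT, _IOC_TYPESHIFT, _IOC_SIZESHIFT, _IOC_DIRSHIFT = 0, 8, 16, 30
_IOC_WRITE, _IOC_READ = 1, 2

def _IOC(direction, ioc_type, nr, size):
    return ((direction << _IOC_DIRSHIFT) | (ord(ioc_type) << _IOC_TYPESHIFT)
            | (nr << _IOC_NRSHIFT) | (size << _IOC_SIZESHIFT))

def _IOWR(ioc_type, nr, struct):
    return _IOC(_IOC_READ | _IOC_WRITE, ioc_type, nr, ctypes.sizeof(struct))

class FakeArgs(ctypes.Structure):   # hypothetical ioctl argument struct
    _fields_ = [("handle", ctypes.c_uint64), ("size", ctypes.c_uint64)]

req = _IOWR('d', 0x01, FakeArgs)    # a made-up request for illustration
print(hex(req))                     # → 0xc0106401
# A real driver would then call: fcntl.ioctl(fd, req, bytearray(FakeArgs()))
```

Once the request numbers and argument structs are transcribed from the kernel headers, the rest of the driver is ordinary Python over `open`, `ioctl`, and `mmap`, which is what makes the approach surprisingly compact.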