Why is determinism good for inference? If you're clever, you can run computations distributed across chips without waiting on synchronization. I can't tell from their marketing materials, but it's also possible they went for the gold ring and built something latch-free on the silicon side.
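To make the "no waiting for sync" idea concrete, here's a toy sketch (my own illustration, not Groq's actual compiler): if every op has a statically known cycle count, a compiler can assign exact start cycles to every op on every chip, so a consumer just starts at the right cycle instead of handshaking with the producer. The op names and latencies are made up.

```python
# Hypothetical per-op latencies in cycles -- in a fully deterministic
# architecture these would be known exactly at compile time.
OP_LATENCY = {"matmul": 4, "send": 2, "add": 1}

def schedule(ops):
    """Assign each op a start cycle from its dependencies' statically
    known finish times. No runtime barriers or handshakes needed."""
    start, finish = {}, {}
    for name, kind, deps in ops:  # ops listed in topological order
        t = max((finish[d] for d in deps), default=0)
        start[name] = t
        finish[name] = t + OP_LATENCY[kind]
    return start

# A tiny two-chip program (labels are illustrative):
program = [
    ("a", "matmul", []),          # chip 0
    ("xfer", "send", ["a"]),      # chip 0 -> chip 1 transfer
    ("b", "matmul", []),          # chip 1, runs concurrently with "a"
    ("c", "add", ["xfer", "b"]),  # chip 1 consumes both inputs
]
print(schedule(program))  # {'a': 0, 'xfer': 4, 'b': 0, 'c': 6}
```

Chip 1 can simply begin "c" at cycle 6: it knows "a" finishes at cycle 4 and the transfer lands at cycle 6, so no barrier or ready-flag is ever exchanged. That's the kind of win determinism buys you.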
Groq seems to have been able to use their architecture to deliver some insanely high token/s numbers; groqchat is by far the fastest inference API I’ve seen.
All this to say that I'm curious what a Dojo architecture designed around training could do. Presuming training was a key use case in the architecture design. Knowing the long-game thinking at Tesla, I imagine it was.
TSMC got so far out in front of everyone that their competitors had to get creative and solve other issues.
Why is this on 7nm? Because I don't think you could do this on 3nm. It's my understanding that everything down at that scale is double shot/imaged (double patterned) to get the right-sized components, and with that comes a higher defect rate.
Look at what Intel is doing, holding out for single-shot processes. Their push for double-sided chips (power on one side and data on the other) would be impossible with the 3nm double shot (I can't see flipping the die as a good way to get reliable alignment across four imagings..)
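A back-of-envelope illustration of why extra exposures hurt yield, using the standard Poisson yield model Y = exp(-A*D). The assumption (mine, for illustration only) is that each exposure contributes roughly the same independent defect density, so double patterning sees roughly twice the effective defect density; the die area and defect numbers are made up.

```python
import math

def poisson_yield(area_cm2, defect_density_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defect_density_per_cm2)

die_area = 6.0  # cm^2 -- a large AI die (illustrative number)
d0 = 0.05       # defects/cm^2 contributed per exposure (illustrative)

single = poisson_yield(die_area, d0)      # one exposure per critical layer
double = poisson_yield(die_area, 2 * d0)  # double patterning ~doubles D

print(f"single-exposure yield ~{single:.2f}, double-patterned ~{double:.2f}")
# single-exposure yield ~0.74, double-patterned ~0.55
```

Even with these made-up numbers, doubling the exposure count on a big die takes a real bite out of yield, which is the intuition behind staying on a single-shot node for something wafer-scale.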
I suspect that we're getting to the end of size (shrink) scaling and we're going to get into process and design scaling. It's going to be interesting to see what happens to cost and capacity if we're at that point. Process flexibility would be the new king!
The actual processors sound like they’re being made on other wafers in a standard way and then cut out and attached to the interconnect wafer.
So it’s not like the entire part was built only on one wafer and a single mistake would cause the whole thing to have to be thrown out.
At least that’s how I understood the article + educated guessing.
This feels a little like a “you were so preoccupied with whether or not you could” thing.
Wow.
I can’t imagine how much one of those wafers must cost. I’d love to know.
Hope it accomplishes what they want because they’ve certainly had to spend a fortune to get to this point.
My guess is that if they did this, they thought it would be a huge improvement over buying off-the-shelf hardware. The reason I say this is that it's expensive to design a wafer computer and get it manufactured.
Because it doesn't seem to make much sense.