Economics and costs are hard to predict. For example, Groq doesn't use HBM, so its cards are probably a lot easier to source.
It's not clear what the capacity of these systems is in terms of total users, or even tokens per second. Then you factor in cost. Then you realize all vendors will match a competitor's pricing. Then you realize Groq doesn't sell chips.
¯\_(ツ)_/¯
The only thing you have is the public API to benchmark against: https://artificialanalysis.ai/
- SambaNova has real revenue from big customers
- SambaNova can run any model on a single node at the speed Groq requires
- SambaNova can do low latency inference just like Groq, but can also run large batches and host hundreds of models on a single deployment
- SambaNova does not quantize models unless explicitly stated
- SambaNova can run training at perf competitive with Nvidia, as well as the fastest inference in the world at full precision
It really isn't a competition. Groq has done great at garnering hype in recent months, but it is a house of cards.
So every clock cycle you're doing useful work rather than waiting to load people into batches. And that's why the architecture will probably win for inference; for training you're basically competing with the software ecosystem and silicon density, i.e. NVIDIA can give TSMC more money to get more ALUs on the die.
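That batching tradeoff can be sketched with a toy queueing model (my own illustration, not either vendor's actual scheduler): if the server waits for a batch of B requests to fill before launching compute, every request in the batch eats the arrival delay of the slowest member, while a fully pipelined dataflow design (B = 1) adds no fill-up wait at all.

```python
import random

def batching_wait(arrival_rate, batch_size, n_requests=100_000, seed=0):
    """Average time a request spends waiting for its batch to fill.

    Toy model: requests arrive via a Poisson process; the server only
    launches once `batch_size` requests have queued. Compute time is
    ignored -- this isolates the fill-up delay that batching adds.
    """
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)  # exponential inter-arrival gaps
        arrivals.append(t)

    total_wait = 0.0
    for i in range(0, n_requests - batch_size + 1, batch_size):
        batch = arrivals[i:i + batch_size]
        dispatch = batch[-1]  # batch fires when the last slot fills
        total_wait += sum(dispatch - a for a in batch)

    n_served = (n_requests // batch_size) * batch_size
    return total_wait / n_served

# batch_size=1 is the "dataflow" limit: no request waits for any other.
print(batching_wait(arrival_rate=100.0, batch_size=1))
print(batching_wait(arrival_rate=100.0, batch_size=32))
```

At 100 requests/s, batching 32-deep adds roughly (B-1)/2 inter-arrival times of latency per request, which is exactly the cost the dataflow pitch claims to avoid.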
I think other places have attempted dataflow (FPGAs etc.), but they all basically had buffers (due to non-determinism in the network stack and even RAM). SambaNova seems indistinguishable from an FPGA with a few clock cycles of difference. I think they blew their shot with a Series D ($600 million???) where they made more of the same old. Maybe Intel will buy them to augment Altera? Looks like chasing parity with existing strategies.
I buy the Groq hype because it's something different, and the public demo certainly helped. HN is about the future.
[1] https://www.semianalysis.com/p/groq-inference-tokenomics-spe...