“Insurance underwriters are now seriously trying to remove coverage in policies where AI is applied and there's no clear chain of responsibility.”
I see a future coming where everyone uses AI but nobody admits it.
“It passed all the unit tests; the shape of the code looks right,” he said. “It's 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All the money you spent on it is worthless.”
“This magic that literally didn’t exist two years ago in more than a toy state is moving at such a rapid rate that it couldn’t even reproduce SQLite three months ago and only got good enough in those months to produce a bad version of SQLite! Clearly useless! It has no value, no one is using it to do any work, and it won’t get better over the next three months or three years!”
An amazing take.
> "The other way to look at this is like there's no free lunch here," said Smiley. "We know what the limitations of the model are. It's hard to teach them new facts. It's hard to reliably retrieve facts. The forward pass through the neural nets is non-deterministic, especially when you have reasoning models that engage an internal monologue to increase the efficiency of next token prediction, meaning you're going to get a different answer every time, right? That monologue is going to be different.
> "And they have no inductive reasoning capabilities. A model cannot check its own work. It doesn't know if the answer it gave you is right. Those are foundational problems no one has solved in LLM technology. And you want to tell me that's not going to manifest in code quality problems? Of course it's going to manifest."
You can argue with the specifics in there, but they made their case.
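Smiley's point about getting "a different answer every time" comes down to sampled decoding: LLMs draw each token from a probability distribution rather than always picking the top one. A minimal sketch of that mechanism, using an invented toy three-token distribution (the logits are made-up numbers, not from any real model):

```python
import math
import random

# Hypothetical toy next-token logits, invented for illustration only.
LOGITS = {"yes": 2.0, "no": 1.5, "maybe": 1.2}

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def sample(logits, temperature, rng):
    """Draw one token from the tempered distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Greedy decoding (always take the argmax) is deterministic.
greedy = max(LOGITS, key=LOGITS.get)

# Sampled decoding with an unseeded RNG varies from run to run;
# 50 draws will almost surely produce more than one distinct token.
rng = random.Random()
samples = {sample(LOGITS, 1.0, rng) for _ in range(50)}
```

This is only the sampling half of the story: production systems add further variance from batching effects and floating-point non-associativity, and reasoning models compound it because a different hidden monologue steers each run down a different path.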