I just hope we don't all start relying on current[1] AI so much that we lose the ability to solve novel problems ourselves.
[1] (I say "current" AI because some new paradigm may well surpass us completely, but that's a whole different future to contemplate)
I just don't think there was a great way to make solved problems accessible before LLMs. I mean, these things were already on GitHub, and still got reimplemented over and over again.
Even high-traffic libraries that solve some super common problem often have rough edges, or do something that breaks them for your specific use case. So even when the code is accessible, it doesn't always get used as much as it could.
With LLMs, you can find it, learn it, and tailor it to your needs with one tool.
I'm not sure people wrote emulators, of all things, because they were trying to solve a problem in the commercial sense, or that they were unaware of existing GitHub projects and simply forgot to search for them.
It seems much more like a labour-of-love kind of thing to work on. For something that holds that kind of appeal to you, you don't always want to take the shortcut. It's like solving a puzzle game by reading all the hints on the internet: you got through it, but you also ruined it for yourself.
What kranner said. There was never an accessibility problem for emulators. The reason there are a lot of emulators on github is that a lot of people wanted to write an emulator, not that a lot of people wanted to run an emulator and just couldn't find it.
And now people seem to automate reimplementations by paying some corporation for shoving previous reimplementations into a weird database.
As both a professional and a hobbyist I've taken a lot from public git repos. If there are no relevant examples in the project I'm in, I'll sniff out some public ones and crib what I need from those, usually not by copying but rather 'transpiling', because it is likely I'll be looking at Python or Golang or whatever, and that's not what I've been paid to use. Typically there are also adaptations to the current environment that are needed, like particular patterns in naming, use of local libraries or modules, and so on.
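The kind of 'transpiling' described above might look like this: a one-liner cribbed from a hypothetical public Python repo, rewritten by hand in the project's language (Rust here, as an assumption for illustration), with the local naming conventions applied.

```rust
// Cribbed from a public Python repo (hypothetical):
//   def chunk(xs, n): return [xs[i:i+n] for i in range(0, len(xs), n)]
// Hand-'transpiled' to Rust, renamed to match local conventions.

fn chunk_items<T: Clone>(items: &[T], size: usize) -> Vec<Vec<T>> {
    // `chunks` yields slices of up to `size` elements; we clone each
    // into an owned Vec so callers aren't tied to the input's lifetime.
    items.chunks(size).map(|c| c.to_vec()).collect()
}

fn main() {
    let groups = chunk_items(&[1, 2, 3, 4, 5], 2);
    println!("{:?}", groups); // [[1, 2], [3, 4], [5]]
}
```

The point is that the port is never mechanical: the Rust version has to take a stance on ownership and lifetimes that the Python original never had to express.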
I don't really feel that it has made it hard for me to do because I've used a variety of tools to achieve it rather than some SaaS chat shell automation.
That isn't why people made emulators. It's because an emulator is an easy-to-state problem that is tricky to get right, and it offers as much testable surface as you're willing to spend time working on.
To some extent AI is an entirely different approach. Screw elegance. Programmers won’t adhere to an elegant paradigm anyway. So just automate the process of generating spaghetti. The modularity and reuse is emergent from the latent knowledge in the model.
It’s much easier to get an LLM to adhere, especially when you throw tooling into the loop to enforce constraints and style. Even better when you use Rust with its amazing type system, and compilation serves as proof.
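A minimal sketch of what "compilation serves as proof" can mean in Rust: the typestate pattern (my example, not something from the thread) encodes a protocol's valid states in the type system, so code generated by an LLM that calls things out of order simply won't compile.

```rust
use std::marker::PhantomData;

// Zero-sized marker types for the two states of a connection.
struct Closed;
struct Open;

struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Closed> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }
    // Consumes the closed connection and returns an open one,
    // so the old handle can't be reused by mistake.
    fn open(self) -> Conn<Open> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Open> {
    // `send` exists only on Conn<Open>; calling it on a Conn<Closed>
    // is a compile error, not a runtime check.
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for real I/O
    }
}

fn main() {
    let conn = Conn::new().open();
    println!("sent {} bytes", conn.send("hello"));
    // Conn::new().send("hi"); // does not compile: no `send` on Conn<Closed>
}
```

Tooling in the loop then reduces to "run `cargo check` and feed the errors back": the constraint is enforced by the compiler, not by the prompt.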
I wonder if you could design a language that is even more precise and designed specifically around use by LLMs. We will probably see this.
Now that I'm here, I'll say I'm actually very impressed with Grok's ability to output video content in the context of simulating the real world. They seemingly have the edge on this dimension vs other model providers. But again, this doesn't mean much unless it's in the hands of someone with taste etc. You can't one-shot great content. You actually have to do it frame-by-frame and then stitch it together.
…If every time you looked at the dictionary it gave you a slightly different definition, and sometimes it gave you the wrong definition!
Reproducibility is a separate issue.