Wow, an actual open-source language model (maybe even the first of its kind from a larger company?) that includes everything you need to recreate it from scratch. Thanks AMD!
Available under this funky GitHub organization it seems: https://github.com/AMD-AIG-AIMA/AMD-LLM
Apple research has previously released another example of a model with open training code, data, and weights, but their model was sized for running inference workloads on mobile devices.
However, Apple has a mobile device line of business and AMD has an enterprise AI accelerator line of business, so they are both doing work relevant to their bottom line.
Maybe some other heavy hitter out there can explain what all this whatchamacallit newfangled synergy producing matrix algebra does after you have it running?
I find it funny that the AI field has somehow normalised moving the goalposts from capabilities all the way to definitions of open source. And people seem really tribal about it...
There absolutely are open source LLMs already. Phi3.5 (MIT), various Mistral models (Apache 2.0), various Qwen2 models (Apache 2.0) and so on. Llamas are not open source, nor are Gemmas. But to say this is "an actual open source model" is weird nitpicking for the sake of nitpicking, IMO.
Requiring the methods and datasets that someone used to create some piece of IP is in no way a requirement for open sourcing said IP. It never has been!
Imagine this analogy:
A dev comes up with a way to generate source code that solves a real problem. This dev uses a secret seed that only they know. The dev also uses thousands of hours of compute, and an algorithm that they created. At the end of the exercise they release the results on GitHub, as follows:
- here is a project that takes in a piece of text in english, and translates it into french.
- the resulting source code is massive. 10 billion LOC. The lines of code are just if statements, all the way down, with some hardcoded integer values.
- source code licensed under Apache 2.0, written in, let's say, Python.
- users can see the source code
- users can run the source code
- users can modify the source code and re-release the code
Now, would anyone pre-LLMs say "this isn't true open source" because it's too complicated? Because no one can reasonably understand the source code? Because it uses hardcoded int values? Because it's 10B LOC? Because the dev never shared how they got those values?
Of course not. The resulting code would have been open source because Apache 2.0 is open source.
It's the same with model weights. Just because they're not source code, and just because you don't know how they were created, it does not mean the weights are not open source.
You can see the weights. You can change the weights. You can re-distribute the weights. It's open source. The definition of something being open source does not cover you understanding why the weights are the way they are. Nor does it require you to have access to the methods of creating those weights. Or the datasets. Or whatever the devs had for breakfast.
The problem is that Facebook and others are trying to move the goalpost, while others like me would like the goalpost to remain where it is, namely that we call projects "open source" when the parts required to build them on our own machines are sufficiently accessible.
As I probably wouldn't be a developer in the first place if it wasn't for FOSS, and I spend literally all day contributing to others' FOSS projects and working on my own, it's kind of scary seeing these large companies trying to change what FOSS means.
I think you're forgetting about the intent and purpose of open source. The goal is that people can run software for whatever purpose they want, and they can modify it for whatever purpose. This is the intent behind the licenses we use when we "create FOSS".
This means, in practice, that the source code has to be accessible somehow, so the compiler I have on my computer can build a binary similar to the one the project itself offers (if it does). The source code has to be accessible so I can build the project, but also modify it for myself.
Taking this idea, which before mostly applied only to software (FOSS), and applying it to ML instead, it's clear what we need in order to 1) be able to use it as we want and 2) be able to modify it as we want.
> You can see the weights. You can change the weights. You can re-distribute the weights. It's open source.
Right. If I upload a binary to some website, you can see the binary, you can change the binary and you can re-distribute it. Would you say the binary is open source?
The weights are the binary in ML contexts. It's OK for projects to publish those weights, but it's not OK to suddenly change the definition and meaning of open source because companies want to look like they're doing FOSS, when in reality they're publishing binaries without any ways of building those binaries with your own changes.
Imagine if the Linux kernel was just a big binary blob. Yes, you could change it, re-distribute it and whatnot, but only in binary-blob shape. You'd be kind of out there if you insisted on calling this binary-blob kernel FOSS. I'm sure you'd be able to convince some Facebook engineers of it, seeing as they're rolling with that idea already, but the rest of us who exist in the FOSS ecosystem? We'd still have the same goalpost in the exact same spot it's been for the two decades (at least) I've been involved.
Can't believe it's the second time today I've ended up in the very same argument on HN about what open source is.
can we stick to years as a unit of measure and not spread Sam Altman's phrase :)
Twenty two thousand days
It's not a lot, it's all we got
Twenty two thousand days
- Sam Altman?
Anyone know the recommended cloud provider and equivalent rental price?
[1] https://www.wiredzone.com/shop/product/10025451-supermicro-g...
Actually, AMD has excellent reasons to make this kind of development and I hope they continue.
Does anyone know if the "several orders of magnitude speed improvement" is accurate? I'm doubtful.
Very interesting though! I'll be playing around with this on the weekend!
[1] https://www.reddit.com/r/LocalLLaMA/comments/17h4rqz/specula...
- 1.75x-2.80x on MI250
- 2.83x-2.98x on NPU
- 3.57x-3.88x on CPU
Note they were testing AMD-Llama-135m-code as the draft model for CodeLlama-7b, both of which do similarly badly on HumanEval Pass@1 (~30%). If they used a similarly trained 135M model as a speculative-decoding draft for, say, Qwen2.5-Coder (88.4% on HumanEval), the perf gains would probably be much worse.
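For context on why draft/target agreement matters: in speculative decoding a small draft model proposes a window of tokens and the big target model verifies the whole window in one forward pass, so you only win when the draft usually agrees with the target. Here's a toy sketch of the accept/reject loop; the "models" are made-up next-token rules standing in for real networks, purely for illustration:

```python
def draft_next(ctx):
    # toy draft model: always predicts last token + 1
    return ctx[-1] + 1

def target_next(ctx):
    # toy target model: same rule, but never predicts above 5
    return min(ctx[-1] + 1, 5)

def verify_window(seq, proposal):
    """One 'target model call' checks a whole draft window at once."""
    accepted, ctx = [], list(seq)
    for tok in proposal:
        t = target_next(ctx)
        if t == tok:             # draft agreed: keep its token for free
            accepted.append(tok)
            ctx.append(tok)
        else:                    # mismatch: take the target's token, stop
            accepted.append(t)
            break
    return accepted

def speculative_decode(prompt, window=4, target_len=10):
    seq, target_calls = list(prompt), 0
    while len(seq) < target_len:
        proposal, ctx = [], list(seq)
        for _ in range(window):  # cheap draft pass proposes a window
            nxt = draft_next(ctx)
            proposal.append(nxt)
            ctx.append(nxt)
        accepted = verify_window(seq, proposal)
        target_calls += 1        # one expensive verify pass per window
        seq.extend(accepted)
    return seq, target_calls

seq, calls = speculative_decode([0])
# While draft and target agree, each target call yields several tokens;
# once they diverge (at the cap of 5), it degrades to one token per call.
```

That degradation is exactly the concern above: pair a weak draft with a much stronger target and the acceptance rate (and thus the multiplier) drops.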
For example, the C++ model is really good at writing both OpenGL+GLFW and Raylib.
https://machinelearning.apple.com/research/introducing-apple... (see Model Adaptation)
That's already very much a thing. Codestral, Phind, Starcoder etc.
Fine-tuning models on whatever you want is quite accessible if you have a good dataset and a hundred bucks of budget
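Mechanically, fine-tuning is just continuing gradient descent from pretrained weights on your own data. A toy sketch in plain Python (a single scalar weight stands in for a real network; real runs would use a training framework, and nothing here is any library's actual API):

```python
# Toy "fine-tuning": a pretrained scalar weight w (imagine it was
# trained for y = 3x) is adapted to a new dataset where y = 2x.
pretrained_w = 3.0
data = [(x, 2.0 * x) for x in range(1, 6)]  # new task: y = 2x

def mse(w):
    # mean squared error of the linear model y_hat = w * x on `data`
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = pretrained_w, 0.01
initial_loss = mse(w)
for _ in range(200):
    # gradient of the MSE with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
final_loss = mse(w)
# w has moved from the pretrained value toward the new task's optimum (2.0)
```

The expensive parts in practice are the dataset quality and the GPU hours, not the loop itself, which is why a small budget goes a long way.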
* https://github.com/amd/RyzenAI-SW - has a list of demos and how to use it directly (including apparently w/ PyTorch and LLMs)
* https://github.com/huggingface/optimum-amd - can use RyzenAI to use the NPU for HF transformers
There's now even a Linux driver, https://github.com/amd/xdna-driver, although it looks like enough of a PITA that I haven't bothered to try it (my 7940HS only has like 10 TOPS anyway, so not much point even if it worked perfectly).
I thought PyTorch didn't work well on AMD hardware, and read that many people use JAX instead?