I would argue this is inherent to the training and compute cost of all large language models.
There are also providers now that will let you upload low-rank adapters you have trained on top of open foundation models, so that you can use their efficient serverless infrastructure with your fine-tuned models. This requires even less capital.
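Part of why this is cheap to host is that a low-rank adapter is just a small pair of matrices patched onto the frozen base weights, so a provider stores one copy of the base model and a tiny delta per customer. A minimal numpy sketch of the idea (the shapes and names here are illustrative, not any particular provider's API):

```python
import numpy as np

# Frozen base weight from the open foundation model (d_out x d_in).
d_out, d_in, r = 8, 16, 2           # rank r << min(d_out, d_in)
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))

# The LoRA adapter: two small matrices, trained while W stays frozen.
A = rng.standard_normal((r, d_in))  # "down" projection
B = np.zeros((d_out, r))            # "up" projection, zero-initialized
alpha = 16                          # common scaling hyperparameter

def forward(x):
    # Adapted forward pass: base output plus a low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized the adapter starts as a no-op,
# i.e. serving exactly the unmodified base model:
assert np.allclose(forward(x), W @ x)

# The provider only stores A and B per customer, not a full model copy:
base_params = W.size
adapter_params = A.size + B.size
print(adapter_params / base_params)  # → 0.375 here; tiny for real models
```

In a real transformer the base weight is hundreds of millions to billions of parameters per layer while r is typically 8-64, so the per-customer adapter is a fraction of a percent of the base model, which is what makes multi-tenant serverless serving of many fine-tunes economical.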
None of this would exist had OpenAI’s vision of centralized, locked-down API access become the reality.
(As Microsoft explained years ago)
Unfortunately it's past the deadline for you to add your own comments on this, but I'm sure there will be future RFCs if you have thoughts.
Here's a bit of help for them: development happens with our help or not. For every regulated repository that's removed, there's a mirror or fork.
I don't really have a point... just having a little fun at the expense of the latest thing in the harbor. I'll honestly stay apprised.
Open weights vs. training code is not a distinction worth drawing for the average person.
If it is open, then everyone has access. Any restrictions would be akin to demanding that Linux not use strong encryption or that Firefox implement content censorship.
The license really does nothing to protect your project from regulation; it's just that the government doesn't care about open source yet.
A lot of open source efforts depend on big companies training million-dollar models and giving them away. These companies will often apply some censoring adjustment to the weights, which the open source community then undoes through fine tuning.
But perhaps in the future new methods of censorship will be developed, which are radically harder to undo?
And of course, there are always heavy-handed options available: if we can require hairdressers to hold professional licenses, we could require the same of anyone who wants to upload to huggingface or civitai.
Apple is similarly partnering with OpenAI.
Google has released nerfed versions of its Gemini models (Gemma 2B and Gemma 7B).
It seems the only company truly setting an example for open weights is Meta. Hopefully the other companies realize that it’s in their best interest to do the same (as it seems Google is begrudgingly realizing with its nerfed releases).
Eventually open weights will win, and if OpenAI (the most ironically named company) continues to rely on closed models as its moat, it will lose.
If the government wants to regulate commercial AI, it can go to the companies responsible for building that AI and say "do X, Y and Z or else we will punish you with A, B or C". But what do you say to a collective of developers that could be scattered all around the globe? What stops that group from just saying "F* you" and continuing on?
Updated link.