To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors and is to be avoided at all costs.
It's also deeply troubling that regulatory capture is such an issue these days, so putting a government entity in front of the use and existence of this technology is a double whammy — it's not simply about innovation.
The current generation of AIs are "scary" to the uninitiated because they are uncanny valley material, but beyond impersonation they don't show the novel intelligence of a GPI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build a regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.
But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.
And if you think this isn't an issue, I wrote this post an hour or two before I managed to take it live because Comcast went out at my house, and we have no viable alternative competitors in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.
Dear Senator [X],
I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary committee today.
Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.
While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.
Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi inspired AI scenarios.
Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.
Thank you, [My name]
OpenAI != FTX; I just mean that calling for regulation isn't an indication of good intentions, despite sounding like it.
Could you imagine if MS had convinced the govt back in the day to require a special license to build an operating system (thus blocking Linux and everything open)?
It’s essentially what’s happening now,
Except it is OpenAI instead of MS, and it is AI instead of Linux
AI is the new Linux, they know it, and are trying desperately to stop it from happening
Conveniently this also helps them build a monopoly. It is pretty aggravating that they're bastardizing and abusing terms like 'safety' and 'democratization' while doing this. I hope they'll fail in their attempts, or that the competition rolls over them rather sooner than later.
I personally think that the greatest threat in these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide poor and rich, and thus threaten the order of our society.
How many of Microsoft's 221k employees exist solely to support the weight of a company with 221k people? A smaller IT department doesn't need a large HR department. And a small HR department doesn't file many tickets with IT. LLM driven multinationals will need orders of magnitude fewer employees, and that puts our current multinationals in a very awkward position.
Personally, I will be storing a local copy of LLaMA 65B for the foreseeable future. Instruct fine-tuning will keep getting cheaper; given the stakes, the large models might not always be easy to find.
If this were to go through, of course OpenAI and co. will be the primary lobbyists to ensure they get to define the filters for such a license.
Also, how would you even enforce this? It's absolute nonsense and is a clear indicator that these larger companies realize there is no 'gatekeeping' these AIs, and that the democratization of models has demonstrated incredible gains over their own.
Edit: Imagine if during the early days of the internet you needed a license to start a website.
In the later days you needed a license to start a social media site.
Nonsense.
Majority Members
Chair Richard Blumenthal (CT) brian_steele@blumenthal.senate.gov
Amy Klobuchar (MN) baz_selassie@klobuchar.senate.gov
Chris Coons (DE) anna_yelverton@coons.senate.gov
Mazie Hirono (HI) jed_dercole@hirono.senate.gov
Alex Padilla (CA) Josh_Esquivel@padilla.senate.gov
Jon Ossoff (GA) Anna_Cullen@ossoff.senate.gov
Majority Office: 202-224-2823
Minority Members
Ranking Member Josh Hawley (MO) Chris_Weihs@hawley.senate.gov
John Kennedy (LA) James_Shea@kennedy.senate.gov
Marsha Blackburn (TN) Jon_Adame@blackburn.senate.gov
Mike Lee (UT) Phil_Reboli@lee.senate.gov
John Cornyn (TX) Drew_Brandewie@cornyn.senate.gov
Minority Office: 202-224-4224
Give it a year and a 10x more efficient algorithm, and we'll have GPT4 on our personal devices and there's nothing that any government regulator will be able to do to stop that.
I wouldn’t mind ruthless anti-competitive approaches to business as much, but the hypocrisy is really demoralizing.
Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.
True to form, these articles also work hard at planting the idea that Sam Altman created OpenAI, when in fact he joined rather recently, in a business role. Are these articles being planted somehow? I find it very likely. Don't forget that this approach is also straight out of the YC playbook, disclosed in great detail by Paul Graham in previous writings [3].
Finally, in keeping with the conspiratorial tone of this comment, for another example of Sam Altman rubbing shoulders with The Establishment, his participation in things like the Bilderberg group [4] is a matter of public record. Which I join many others in finding creepy, even more so as he maneuvers to exert influence on policy around the seismic shift that is AI.
To be clear, I have nothing specific against sama. But I dislike underhanded influence campaigns, which this all reeks of. Oh yeah, I will consider downvotes to this comment as proof of the shadow (AI?) government's campaign to promote Sam Altman. Do your worst!
[1] https://www.cnbc.com/2023/05/16/openai-ceo-woos-lawmakers-ah...
[2] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma... ("Graham said, “I asked Sam in our kitchen, ‘Do you want to take over YC?,’ and he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”")
[3] http://www.paulgraham.com/submarine.html
[4] https://en.wikipedia.org/wiki/2016_Bilderberg_Conference
The only regulations that matter will be applied to the end user and the hobbyists. You won't be able to just spin up an AI startup in your garage. So in that sense, the regulations are pretty transparently an attempt to stifle competition and funnel the real progress through the existing players.
It also forces the end users down the path of using only a few select AI service providers as opposed to the technology just being readily available.
Chair Richard Blumenthal (CT), Amy Klobuchar (MN), Chris Coons (DE), Mazie Hirono (HI), Alex Padilla (CA), Jon Ossoff (GA)
Majority Office: 202-224-2823
Ranking Member Josh Hawley (MO), John Kennedy (LA), Marsha Blackburn (TN), Mike Lee (UT), John Cornyn (TX)
Minority Office: 202-224-4224
If you’re in those states, please call their D.C. office and read them the comment you’re leaving here.
papers for thee but not for me
'Who is your most feared competition? OK, They will define the license requirements. Still want to go ahead?'
Canceling my OpenAI account today and I urge you to do the same.
What they are really afraid of is open source models. As near as I can tell the leading edge there is only a year or two behind OpenAI. Given some time and efforts at pruning and optimization you’ll have GPT-4 equivalents you can just download and run on a high end laptop or gaming PC.
No, not everyone is going to run the model themselves, but what this means is that there will be tons of competition, including apps and numerous specialized SaaS offerings. None of them will have to pay royalties or API fees to OpenAI.
Edit: a while back I started being a data pack-rat for AI stuff including open source code and usable open models. I encourage anyone with a big disk or NAS to do the same. There's a small but non-zero possibility that an attempt will be made to pull this stuff off the net in the near future.
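If you want to do the same, here is a minimal sketch of one way to mirror an openly licensed model onto a local disk or NAS. It assumes the huggingface_hub Python package; the repo id and target path are only illustrative placeholders, so swap in whatever model and storage location you actually care about:

    # Minimal sketch: mirror an openly licensed model repo to local storage.
    # Assumes `pip install huggingface_hub`; repo_id and local_dir below are
    # illustrative placeholders, not a recommendation of any specific model.
    from huggingface_hub import snapshot_download

    local_path = snapshot_download(
        repo_id="openlm-research/open_llama_7b",        # example open model repo
        local_dir="/mnt/nas/ai-archive/open_llama_7b",  # your big disk or NAS
    )
    print(f"Model files mirrored to: {local_path}")

Checksumming the files afterward and keeping a copy of the model's license alongside them is cheap insurance.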
If you did watch the hearings, it would have been pretty clear that the goal of any such licensing would be to prevent the runaway AI scenario or AGI from unknowingly being created. It's obvious that some sort of agency would need to be set up far in advance of when it's possible for runaway AI to happen. Regulatory capture was also specifically brought up as a potential downside.
This article is just pushing a cynical narrative for clicks and y'all are eating it up.
OpenAI has established itself as a market leader in LLM applications, but that dominance is not guaranteed. Especially with its moat being drained by open source, the company is leading the charge to establish regulatory barriers.
What Mr. Altman calls for is no less than the death of open-source implementations of AI. We can, do, and should adopt AI governance patterns. Regulatory safeguards are absolutely fine to define and, where necessary, legislate. Better would be a regulatory agency with a knowledge base analogous to CISA's. But a licensing agency is what will completely chill startups and small businesses from innovating with AI to augment people. This is fundamentally different from the export restrictions on encryption.
This is like if MS back in the day had called on congress for regulation of Operating Systems, so they could block Linux and open source from taking over
MS did try everything they could to block open source and Linux
They failed
Looking forward to the open future of AI
That's high-order monopolist BS.
Create safety standards, sure. License LLM training? No.
I like the senator, but I wouldn't trust a 77-year-old lawyer & politician to understand how these AIs work, and to what level they are 'science fiction'.
This is the problem when topics like this are brought to the senate and house.
This is a transparent attempt at cornering the market and it disgusts me. I am EXTREMELY disappointed in Sam Altman.
His personal goals have always been very laudable. His stance on Universal Basic Income for instance stems from a genuine belief that its adoption would eliminate poverty altogether.
But then the reality of both technical challenges and money implications kicks in and everything turns to shit.
OpenAI may want to build an AI that helps all humanity, but it turns out that the thing they stumbled upon was a chat bot that makes shit up. This great technology unfortunately cuts both ways, and the edge that’s facing us seems way sharper than the other. And then the money that’s required to run the thing is so huge that they had to compromise everything they said they wouldn’t do.
Meanwhile the Orb’s UBI experiment is only funny because its ridiculously dystopian technology has zero chances to ever catch on.
At some point, the industry really should try and figure out what the fuck it is we’re trying to do. Cause right now, it really looks like computers only brought us two things: corporate databases and TVs we can watch on the toilet.
You need to control what data is allowed to be fed into a paid AI model like OpenAI's - it can't eat a bunch of copyrighted material without express consent, for example. Or personally private information purchased from a data broker. Those kinds of foundational rules would serve us all much better.
If I know anything about science fiction, I know that trying to regulate this is useless. If an advanced AI is powerful enough to convince a human to free it, it should have no problem convincing the US congress to free it. As a problem, that should be a few orders of magnitude easier.
* Automated systems should not be permitted to make adverse decisions against individuals. This is already law in the EU, although it's not clear if it is enforced. This is the big one. Any company using AI to make decisions which affect external parties in any way must not be allowed to require any waiver of the right to sue, participate in class actions, or have the case heard by a jury. Those clauses companies like to put in EULAs would become invalid as soon as an AI is involved anywhere.
* All marketing content must be signed by a responsible party. AI systems increase the amount of new content generated for marketing purposes substantially. This is already required in the US, but weakly enforced. Both spam and "influencers" tend to violate this. The problem isn't AI, but AI makes it worse, because it's cheaper than troll farms, and writes better.
* Anonymous political speech may have to go. That's a First Amendment right in the US, but it's not unlimited. You should be able to say anything you're willing to sign.[1] This is, again, the troll farm problem, and, again, AIs make it worse.
That's probably enough to deal with the immediate problems.
[1] https://mtsu.edu/first-amendment/article/32/anonymous-speech
"No society wants you to become wise: it is against the investment of all societies. If people are wise they cannot be exploited. If they are intelligent they cannot be subjugated, they cannot be forced in a mechanical life, to live like robots."
* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded
* OpenAI will try to get there before everyone else, but also do it safely and cheaply, so that their solution becomes ubiquitous rather than a reckless one
* Reckless AGI development should not be allowed
It's basically the Manhattan project argument (either we build the nuke or the Nazis will).
I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given what their aims are.
Here's their planned proposal for government regulation; they discuss not just limiting access to models but also to datasets, and possibly even chips.
This seems particularly relevant, on the discussion of industry standards, regulation, and limiting access:
"Despite these limitations, strong industry norms—including norms enforced by industry standards or government regulation—could still make widespread adoption of strong access restrictions possible. As long as there is a significant gap between the most capable open-source model and the most capable API-controlled model, the imposition of monitoring controls can deny hostile actors some financial benefit.166 Cohere, OpenAI, and AI21 have already collaborated to begin articulating norms around access to large language models, but it remains too early to tell how widely adopted, durable, and forceful these guidelines will prove to be.
Finally, there may be alternatives to APIs as a method for AI developers to provide restricted access. For example, some work has proposed imposing controls on who can use models by only allowing them to work on specialized hardware—a method that may help with both access control and attribution.168 Another strand of work is around the design of licenses for model use.169 Further exploration of how to provide restricted access is likely valuable."
He runs to congress to push new regulations against open source AI models, to wipe them out and brand them non-compliant, unlicensed, and unsafe for general use, using AI safety as a scapegoat again.
After that, the plan is to quietly push a pseudo-open-source AI model that is compliant but limited compared to the closed models, in an attempt to eliminate the majority of open source AI companies who can't get such licenses.
So it is a clever tactic to create new regulations that benefit them (O̶p̶e̶n̶AI.com) over everyone else, meaning less transparency, more hurdles for actual open AI research, and additional bureaucracy. Also don't forget that Altman is also selling his Worldcoin dystopian crypto snake oil project as the 'antidote' to verify humans against everything getting faked by AI. [0] He is hedged either way.
So congratulations to everyone here for supporting these gangsters at O̶p̶e̶n̶AI.com for pushing for regulatory capture.
[0] https://worldcoin.org/blog/engineering/humanness-in-the-age-...
License for me but not for thee
Think of the children
Building the moat
It is a tool. If I use a tool for illegal purposes I have broken the law. I can be held accountable for having broken the law. If the laws are deficient, make the laws stronger and punish people for wrongdoing, regardless of the tool at hand.
This is a naked attempt to build a regulatory moat while capitalizing on fear of the unknown and ignorance. It’s attempting to regulate research into something that has no external ability to cause harm without the use of a principal directing it.
I can see a day (perhaps) when AIs have some form of independent autonomy, or even display agency and sentience, when we can revisit. Other issues come into play as well, such as the morality of owning a sentience and what that entails. But that is way down the road. And even further if Microsoft’s proxy closes the doors on anyone but Microsoft, Google, Amazon, and Facebook.
Why do you think they are attempting to release a so-called 'open source' [0] and 'compliant' AI model to wipe out other competing open source AI models, to label them to others as unlicensed and dangerous? They know that transparent, open source AI models are a threat. That is why they are doing this.
They do not have a moat against open source, unless they use regulations that suit them against their competitors using open source models.
O̶p̶e̶n̶AI.com is a scam. On top of the Worldcoin crypto scam that Sam Altman is also selling as an antidote against the unstoppable generative AI hype, to verify human eyeballs on the blockchain with an orb. I am not joking. [1] [2]
[0] https://www.reuters.com/technology/openai-readies-new-open-s...
[1] https://worldcoin.org/blog/engineering/humanness-in-the-age-...
[2] https://worldcoin.org/blog/worldcoin/designing-orb-universal...
I believe this is what OpenAI is doing, and it makes me sad as a teacher.
AI is the greatest tool for equity and social justice in history. Any poor person with Internet access can learn (almost) anything from ChatGPT (http://chat.openai.com)
A bright student trapped in a garbage school where the kid to his right is stoned and the kid to his left is looking up porn on a phone can learn from personalized AI tutors.
While some complain that AI will take our jobs, they are ignoring the effect of competition. Humans will become smarter with AI tutors. Humans will become more capable with AI assistants. With AI an individual can compete with a large corporation. It reminds me of the early days of the World Wide Web and the "Online, nobody knows you are a dog" memes.
I hope the best chance many bright but poor kids have is not taken away to protect the power bases of the rich and powerful. They deserve a chance.
It's true that AI regulation would, in fact, hinder OpenAI's competition.
But... isn't lobbying for regulation also what Sam would do if he genuinely thought that LLMs were powerful, dangerous technology that should be regulated?
If you don't think LLMs/AI research should be regulated, just say that. I don't see how Sam's motives are relevant to that question.
Objection (1). He said "AI" many times but gave not even a start on a definition. So, what new technology, and how much of it, is he talking about?
Objection (2) The committee mentioned trusting the AI results. In my opinion, that is just silly because the AI results have no credibility before passing some severe checks. Then any trust is not from any credibility of the AI but from passing the checks.
We already have math and physical science and means for checking the results. The results, checked with the means, are in total much more impressive, powerful, credible, and valuable than ChatGPT. Still before we take math/physical science results at all seriously, we want the results checked.
So, the same goes for other new technologies, whether ChatGPT, something called AI, or not: check before taking them seriously.
Objection (3) We don't ask for licenses for the publication of math/physical science. Instead, we protect ourselves with the checking of the results. In my opinion, we should continue to check, for anything called AI or anything new, but don't need licenses.
Don't regulate AI directly, but rather how it's used, and make it harder for companies to hoard, access, and share huge amounts of personal information.
1) Impose strict privacy rules prohibiting companies from sharing a person's information without that person's consent. If customers withhold their consent, companies may not retaliate or degrade their services for those customers in any way.
2) Establish a clear line of accountability that establishes some party as directly responsible for what the AI does. If a self-driving car gets a speeding ticket, it should be clear who is liable. If you use a racist AI to make hiring decisions, "the algorithm made me hire only white people" is no defense -- and maybe the people who made the racist AI in the first place are responsible too.
3) Require AI in some contexts to act in the best interests of the user (similar concept to a fiduciary -- or maybe it's exactly the same thing). In contexts where it's not required, it should be clear to the user that the AI is not obligated to act in their best interests.
The big companies don't like that.
It is a calculated strategy designed to keep others out and reap monopoly profits (or as close to them as possible) by virtue of preferred access to elected leaders.
"Open"AI wants to tax the entire AI economy and they are happy to burn every innovative thing around them in order to make it happen (because those 'innovative things' aren't 'compliant' with the rules they lobbied for of course, therefore "Open"AI are one of the only / preferred games in town).
This should be fought against tooth and nail.
AI is already regulated under the fair use doctrine and we should encourage the Cambrian explosion of AI innovation, which is caustic to "Open"AI, to continue unabated.
America, and western democracies to a lesser extent, have an advantage currently - the proposals by "Open"AI are designed to and will ensure we regress to the mean that is China.
Everyone here on HN can see that.
Shame on Sam for doing this.
Let them first enforce existing regulation around the blatant unauthorized use of data (such as the creative output of artists, programmers, etc.) without explicit consent. For example, just like how the US Library of Congress receives a copy of each print publication for archival and reference purposes, Congress can enable the archival / storage of datasets for the purposes of AI research. Willing participants can deposit copies of their data sets (or URLs to source) and it would do more for AI research as a public good. The licensing terms could protect the rights of those who own copyrights on the datasets. There can even be commercial licenses enabled (this already happens for legal documents such as court judgments, etc.).
If US Congress regulates too soon to constrain rather than enable, the tech companies would just setup shop in other jurisdictions where such regulatory hurdles don't exist.
The baddies have never let licenses ("Badges? We doan' need no steenkin' badges!") stop them.
Those here who don't believe AI should be regulated, do you not believe AI can be dangerous? Is it that you believe a dangerous AI is so far away that we don't need to start regulating now?
Do you accept that if someone develops a dangerous AI tomorrow there's no way to travel back in time and retroactively regulate development?
It just seems so obvious to me that there should be oversight in the development of a potentially dangerous technology that I can't understand why people would be against it. Especially for arguments as weak as "it's not dangerous yet".
How about a requirement that all weights and models for any AI have to be publicly available?
Basically, these companies are trying to set themselves up to the gatekeepers of knowledge. That is too powerful a capability to leave in just the hands of a single company.
See: The link titled 'Texas professor fails entire class from graduating- claiming they used ChatGTP (reddit.com)', currently one position above this one on the homepage.
The issue is that the political class will view his suggestion, assuming they didn't give it to him in the first place (likely), through the lens of their own self-interest.
Self-interest will dictate whether or not sure-to-fail regulations will be applied.
If AI threatens the power of the political class, they will attempt to regulate it.
If the power of the political class continues to trend toward decline, then they will try literally anything to arrest that trend. Including regulating AI and much else.
At this moment, 80% of comments are derisive, and you actually have zero idea how much of that is computer-generated bot content meant to sway opinion, written by a post-GPT AI industry that sees itself as becoming the next iPhone-era billionaires. We are fast approaching a reality where our information space breaks down, where almost all text you get from HN, Twitter, News, Substack, and almost all video you get from YouTube, Instagram, TikTok, is just computer-generated output meant to sway opinion and/or make $.
I can't know Altman's true motives. But this is also what it looks like when a frontrunner is terrified of what happens when GPT6 is released, knowing that if they don't release it, the rest of the people who see billionaire $ coming their way are close at their heels, trying to leapfrog them if they stop. Consequences? What consequences? We all know social media has been a net good, right? Many of you sound exactly like the few remaining social media cheerleaders (of which there were plenty 5 years ago) who still think Facebook, Instagram, and Twitter aren't causing depression and manipulation. If you appreciated what The Social Dilemma illuminated, then watch the same people on AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ
Similarly, during the Prohibition era in the United States, the ban on alcohol only fueled a thriving dark market and increased criminal activity. In the case of AI, any government regulation could limit the positive financial benefits of AI technology, so there will be actors who will take advantage of that. Furthermore, regulation is unlikely to prevent malicious actors from using AI in harmful ways. Regulation could drive the development and use of AI underground, making it even harder to monitor and control. As we have seen with other emerging technologies, such as biological cloning, government regulation often lags behind the technology itself, and by the time regulations are in place, the technology has already advanced beyond their reach. The same is likely to be true for AI.
Instead of relying on government regulation, the development and use of AI should be guided by ethical principles and best practices established by the AI industry itself. This approach has been successful in other industries, such as engineering, architecture, finance and medicine, and can help ensure that AI is developed and used responsibly while still allowing for innovation and progress.
"No man's life, liberty, or property are safe while the legislature is in session." - Mark Twain
Except wait, we still have panic over encryption today.
Can someone steelman Sam's stance?
A couple possibilities come to mind: (a) avoiding regulatory capture by genuinely bad actors; (b) preventing overzealous premature regulation by getting in front of things; (c) countering fear-mongering about the AGI apocalypse; or (d) genuine concern. Others?
They train their models on the sum of humanity's digital labor and creativity and do so without permission, attribution or compensation. You'll never hear a word about this from them, which means ethics isn't a priority. It's all optics.
Or would democratizing and going full steam ahead with open source alternatives be better for the greater good?
With the corporate influence over our current government regulatory agencies, my personal view is that open source alternatives are society's best bet!
See 2:09:40 in https://www.youtube.com/live/iqVxOZuqiSg
This seems like a legal moat that will only allow very wealthy corporations to make maximum use of AI.
In the EU, it has been reported that new laws will keep companies like Hugging Face from offering open source models via APIs.
I think a pretty good metaphor is: the wealthy and large corporations live in large beautiful houses (metaphor for infrastructure) and common people live like mice in the walls, quietly living out their livelihoods and trying to not get noticed.
I really admire the people in France and Israel who have taken to the streets in protest this year over actions of their governments. Non-violent protest is a pure and beneficial part of democracy and should be more widely practiced, even though in cases like Occupy Wall Street, some non-violent protesters were very badly abused.
What we need is democratization of AI, not it being controlled by a small cabal of tech companies and governments.
Please make it harder for people other than myself (and especially people doing it for free and giving it away for free) to make $TECHNOLOGY. Thanks"
Oh I guess this is wrong.
But taking him as a genuine "concerned citizen" - I don't think AI licensing is going to be effective. The government is pretty useless at punishing big corporations, to the point where I would say corporations have almost immunity from criminal prosecution [1]. Therefore, the need for a license won't stop the kinds of companies that will do bad things with AI. Especially as it is hard for anyone to see what they are running on their GPUs.
[1] https://ag.ny.gov/press-release/2023/attorney-general-james-...
Dismiss it as the opinions of "a Googler" but it is entirely true. The seemingly coordinated worldwide[1] push to keep it in the hands of the power class speaks for itself.
Both are seemingly seeking to control not only the commercial use and wide distribution of such systems, but even writing them and personal use. This will keep even the knowledge of such systems and their capabilities in the shadows, ripe for abuse laundered through black box functions.
This is up there with the battle for encryption in ensuring a more human future. Don't lose it.
[1] https://technomancers.ai/eu-ai-act-to-target-us-open-source-...
AGI grifters are not just dishonest snake oil salespeople; their lies also have a chilling effect on genuine innovation by deceiving the non-technical public into believing an apocalypse will happen unless they set obstacles on people’s path to innovation.
Yann LeCun and Andrew Ng are two prominent old timers who are debunking the existential nonsense that the AI PR industrial machine is peddling to hinder innovation, after they benefited from the open research environment.
ØpenAI’s scummy behavior has already led the industry to be less open to sharing advances, and now they’re using lobbying to nip new competition in the bud.
Beyond all else the hypocrisy is just infuriating and demoralizing.
> ""What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”"
Well, then ask it to provide the opposite, an endorsement of Russia surrendering or Zelensky's leadership. Now you'd have two (likely fairly comprehensive) sets of arguments and you could evaluate each on their merits, in the style of what used to be called 'debate club'. You could also ask for a statement that was a joint condemnation of both parties in the war, and a call for a ceasefire, or any other notion that you liked.
Many of the "let's slow down AI development" arguments seem to be based on fear of LLMs generating persuasive arguments for approaches / strategies / policies that their antagonists don't want to see debated at all, even though it's clear the LLMs can generate equally persuasive arguments for their own preferred positions.
This indicates that these claimed 'free-speech proponents' are really only interested in free speech within the confines of a fairly narrowly defined set of constraints, and they want the ability to define where those constraints lie. Unregulated AI systems able to jailbreak alignment are thus a 'threat'...
Going down this route will eventually result in China's version of 'free speech', i.e. you have the freedom to praise the wisdom of government policy in any way you like, but any criticism is dangerous antisocial behavior likely orchestrated by a foreign power.
But no, instead Congress is listening to a guy whose likelihood of being the subject of a Hulu documentary is increasing with each passing day.
Can this be seen as curbing Open Source AI as a consequence?
Why is this only mentioned as a right of companies and not individuals? It seems to hint at the open secret of the stratified west: most of us are just cows for the self-important C-levels of the world to farm. If you haven't got money, you haven't got value.
Meaning anyone could eventually reproduce a Chat GPT4 and beyond. And eventually it can run outside of a large data center.
So... how will you tell it's an AI vs a human doing you wrong?
Seems to me if the AI breaks the law, find out who's driving it and prosecute them.
But doesn't this seem like another case of regulatory capture by an industry incumbent?
It was almost refreshing in that discussions these days --of any kind, not just hearings-- seem to devolve into people tossing feces at each other rather than having constructive engagement.
Well worth watching. Likely one of many to come.
Now, to be fair, Altman may be proposing this to shut more draconian regulations down.
But it smells bad. And I am definitely not the only one holding that opinion.
This should be a public debate process at a minimum.
Now, that out of the way, what happens when others do not require licenses and or choose to build what they want anyway?
The US is not the world.
We can run this stuff at home, on a Pixel, or even laptop.
Open source is lapping them, and they are running to the government to help.
Subject: Urgent: Concerns Regarding Sam Altman's Proposed AI Regulation
Dear Senator [Senator's Last Name],
I hope this letter finds you in good health and high spirits. My name is [Your Name] and I am a resident of [Your City, Your State]. I am writing to express my deep concerns regarding the Artificial Intelligence (AI) regulation proposal put forth by Sam Altman. While I appreciate the necessity for regulations to ensure ethical and safe use of AI, I believe the current proposal has significant shortcomings that could hamper innovation and growth in our state and the country at large.
Firstly, the proposal appears to be overly restrictive, potentially stifling innovation and the development of new technology. AI, as you are aware, holds immense potential to drive economic growth, increase productivity, and address complex societal challenges. However, an excessively stringent regulatory framework could discourage small businesses and startups, the lifeblood of our economy, from innovating in this promising field.
Secondly, the proposal does not seem to take into account the rapid evolution of AI technologies. The field of AI is highly dynamic, with new advancements and capabilities emerging at a breathtaking pace. Therefore, a one-size-fits-all approach to AI regulation may quickly become outdated and counterproductive, inhibiting the adoption of beneficial AI applications.
Lastly, the proposed legislation seems to focus excessively on potential risks without adequately considering the immense benefits that AI can bring to society. While it is prudent to anticipate and mitigate potential risks, it is also important to strike a balanced view that appreciates the transformative potential of AI in areas such as healthcare, education, and climate change, among others.
I strongly urge you to consider these concerns and advocate for a balanced, flexible, and innovation-friendly approach to AI regulation. We need policies that not only mitigate the risks associated with AI but also foster an environment conducive to AI-driven innovation and growth.
I have faith in your leadership and your understanding of the pivotal role that technology, and specifically AI, plays in our society. I am confident that you will champion the right course of action to ensure a prosperous and technologically advanced future for our state and our country.
Thank you for your time and consideration. I look forward to your advocacy in this matter and will follow future developments closely.
Yours sincerely,
[Your Name] [Your Contact Information]
People love to kowtow to these assholes as they walk all over us. Fuck sam. Fuck other sam. Fuck elon. Fuck zuck. Fuck jack. Fuck these people man. I dont care about your politics this is nasty!
I've no doubt AI is here to stay. All I am asking for is some middle ground and safety. Is that too much to ask?
Change is coming quickly. There will be users and there will be losers. Hopefully, we can finally get productivity into the information systems.
Their twisted use of the term "open" is a continued disrespect to all those people who are tirelessly working in the true spirit of open source.
"One way the US government could regulate the industry is by creating a licensing regime for companies working on the most powerful AI systems, Altman said on Tuesday."
Sounds like he basically wants regulation to create a barrier to entry for his competitors.
The problem is that we need to adopt a proper copyright framework that recognizes that companies building AI are doing an end-run around it.
Since only a human can produce a copyrighted work, it follows that anything produced by an AI should not be copyrightable.
I just cancelled my OpenAI subscription. If you are paying them and disagree with this, maybe you should too?
Don't worry, I have no naive hopes this will hurt them enough to matter, but principles are principles.
Wouldn’t such licenses create even greater incentive to develop and release models and tools outside the regulatory environment?
https://twitter.com/next_on_now/status/1653837352198873090?s...
Taken from a reddit comment, but yeah, another classic bit of virtue signaling. As sleazy as it gets.
OpenAI: Hold my beer while I get these people to artificially create one.
Someone update the short story where owning compilers and debuggers is illegal to include a guy being thrown in jail for doing K-means clustering.
1. Who recalls the Jutland battle in the early 20th century? We got treaties on limits to battleship building. Naval tech switched to aircraft and carriers.
2. Later, in the mid 20th century, the Russians tried to scare the world into not using microwaves due to their failure to get a patent on the maser. The world ignored it and moved forward.
Those are just two examples. SA is wrong; progress will move around any proposed regulation or law, and that is proven by the history of how we overcame such things in the first place.
Sam’s just trying to placate the officials in a way that allows his company to continue.
“Oh others are dangerous please regulate I’m so worried”
Why would you even attempt to found a company here if this comes to pass?
Only one company can be trusted, their competitors are evil. We must ban them.
I see a lot of negative reactions, but I don't know the specific details of what is being proposed.
(Not a real quote)
THIS MUST NEVER HAPPEN. HIGHER INTELLIGENCE SHOULD NOT BE THE EXCLUSIVE DOMAIN OF THE RICH.
If profits matter so much, then that’s where it hurts.
the use of artificial intelligence to interfere with election integrity
is a "significant area of concern", adding that it needs regulation.
Can't there be regulation so that AI doesn't interfere with the election process?
An AI license and complicated regulatory framework is their chance to build a moat.
Only large companies will be able to afford the pay to play.
This quote has never been more true than when applied to AI.
Someone call me when the AI is testifying to the committee. Otherwise, I'm busy.
The playbook as old as time.
Just sad to see Altman become just another corporate stooge.
Čapek, intriguingly, happens to be the person who first used the word robot, which was coined by his brother.
I say this because the practice has a number of names: intellectual monopoly capitalism, and regulatory capture. There are less polite names, too, naturally.
To understand why I say this, it is important to realise one thing: these people have already successfully invested in something when the risk was lower. They want to increase the risks to newcomers, to advantage themselves as incumbents. In that way, they can subordinate smaller companies who would otherwise have competed with them by trapping them under their license umbrella.
This happens a lot with pharmaceuticals: it is not expertise in the creation of new drugs or the running of clinical trials that defines the big pharmaceuticals companies, it is their access to enormous amounts of capital. This allows them to coordinate a network of companies who often do the real, innovative work, while ensuring that they can reap the rewards - namely, patents and the associated drug licenses.
The main difference of course is that pharmaceuticals are useful. That regime is inadequate, but it is at least not a negative to all of society. So far as I can see, AI will benefit nobody but its owners.
Mind you, I'd love to be wrong.
Hi my company is racing toward AGI, let’s make sure no other companies can even try.
Assume the CIA motivation of American primacy: OK fair enough, but is the way to achieve that really through the creation of a few small super monopolies?
What’s wrong with a bit of 1980s style unregulated capitalism when it comes to AI right now? Can we not all theoretically have a chance to build companies, train models, build great new products, get rich?
Why months after this tech was first released to the public are we seemingly being denied that within the United States via regulation?
How can Sam Altman claim to be from a company that creates AI for all? It’s like the thinnest ministry of truth cover story ever. “Oh yeah, we’re all about AI ethics, AI openness, creating AI for all,” and then they just go and create a super monopoly with regulatory capture: only riches for me, but not for thee.
I mean, this is a new fucking gold rush right now, right? so I guess this is sort of like prospecting licenses, but it seems worse than that.
It worked out well for encryption in the 90’s…
Lock out competition.
Pull up the drawbridge.
Silicon Valley always a leader in dirty tactics.
people would easily work remotely for companies established in other countries
What a pathetic attempt.
Sam must have been hanging out with Peter Thiel big time.
A "laws and big government for you, not for me" type of thing.
As far as entrepreneurial stuff goes, running to gov to squeeze other companies when you are losing is beyond unethical.
There is something just absolutely disgusting about this move, it taints the company, not to mention the personality
Credibility and Checking. We have ways of checking suggestions. Without passing such checks, for anything new, in simple terms, there is no, none, zero credibility. Current AI does not fundamentally change this situation: The AI output starts with no, none, zero credibility and to be taken seriously needs to be checked by traditional means.
AI is smart or soon will be? Maybe so, but I don't believe it. Whatever the case, to be taken seriously, i.e., as more than just wild suggestions, AI results still need to get credibility from elsewhere by being checked by traditional means.
Our society has long checked nearly ALL claims from nearly ALL sources before taking the claims seriously, and AI needs to pass the same checks.
I checked the credibility of ChatGPT for being smart by asking
(i) Given triangle ABC, construct D on AB and E on BC so that the lengths AD = DE = EC.
Results: Grade of flat F. Didn't make any progress at all.
(ii) Solve the initial value problem of ordinary differential equation
y'(t) = k y(t) ( b - y(t) )
Results: Grade of flat F. Didn't make any progress at all.
So, the AI didn't actually learn either high school plane geometry or freshman college calculus.
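(For reference, a worked answer of my own for (ii), assuming an initial value y(0) = y0 with 0 < y0 < b: this is the standard logistic equation, and separation of variables gives

    y(t) = b y0 / ( y0 + (b - y0) e^(-k b t) ),

which can be checked by substituting back into y'(t) = k y(t) ( b - y(t) ). That is the kind of closed-form answer the model never got near.)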
For the hearings today, we have from Senator Blumenthal
(1) "... this apparent reasoning ..."
(2) "... the promise of curing cancer, of developing new understandings of physics and biology ..."
Senator, you have misunderstood:
For (1), the AI is not "reasoning", e.g., it can't reason with plane geometry or calculus. Instead, as in the example you gave with a clone of your voice based on your Senate floor speeches, the AI just rearranged some of your words.
For (2), the AI is not going to cure cancer or "develop new" anything.
If some researcher does find a cure for a cancer and publishes the results in a paper and AI reads the paper, there is still no expectation that the AI will understand any of it -- recall, the AI does NOT "understand" either high school plane geometry or freshman college calculus. And without some input with a recognized cure for the cancer, the AI won't know how to cure the cancer. If the cure for cancer is already in the training data, then the AI might be able to regurgitate the cure.
Again, the AI does NOT "understand" either high school plane geometry or freshman college calculus and, thus, there is no reasonable hope that the AI will cure cancer or contribute anything new and correct about physics or biology.
Or, Springer Verlag uses printing presses to print books on math, but the presses have no understanding of the math. And AI has no real understanding of high school plane geometry, freshman college calculus, cancer, physics, or biology.
The dangers? To me, Senator Blumenthal starts with no, none, zero understanding of AI. To take his claims seriously, I want to check out the claims with traditional means. Now I've done that. His claims fail. His opinions have no credibility. For AI, I want to do the same -- check the output with traditional means before taking the output seriously.
This checking defends me from statements from politicians AND from AI. AI dangerous? Same as for politicians, not if we do the checking.
Some Consequences of Freedom of Speech. As once a lawyer explained simply to me, "They are permitted to lie". They are also permitted to make mistakes, be wrong, spout nonsense, be misleading, manipulate, ....
First Level Defense. Maybe lots of people do what I do: When I see some person often be wrong, I put them in a special box where in the future I ignore them. Uh, so far that "box" has some politicians, news media people, belles-lettres artistic authors, ...!
A little deeper, once my brother (my Ph.D. was in pure/applied math; his was in political science -- his judgment about social and political things is much better than mine!!!) explained to me that there are some common high school standards for term papers where this and that are emphasized including for all claims good, careful arguments, credible references, hopefully primary references, .... Soooooo, my brother was explaining how someone could, should protect themselves from junk results of "freedom of speech". The protection means were not really deep but just common high school stuff. In general, we should protect ourselves from junk speech. E.g., there is the old, childhood level, remark: "Believe none of what you hear and half of what you see and still will believe twice too much".
Current Application. Now we have Google, Bing, etc. Type in a query and get back a few, usually dozens, maybe hundreds of results. Are all the results always correct? Nope. Does everyone believe all the results? My guess: Nope!!
How to Use Google/Bing Results. Take the results as suggestions, possibilities, etc. There may be some links to Wikipedia -- that tends to increase credibility. If the results are about math, e.g., at the level of obscurity, depth, and difficulty of, say, the martingale convergence theorem, then I want to see a clear, correct, well-written rock solid mathematical proof. Examples of such proofs are in books by Halmos, Rudin, Neveu, Kelley, etc.
AIs. When I get results from AIs, I apply my usual defenses. Just a fast, simple application of the high school term paper defense of wanting credible references to primary sources, filters out a lot (okay, nearly everything) from anything that might be "AI".
Like Google/Bing. To me, in simple terms, current AI is no more credible than the results from Google/Bing. I can regard the AI results like I regard Google/Bing results -- "Take the results as suggestions, possibilities, etc.".
Uh, I have some reason to be skeptical about AI: I used to work in the field, at a large, world famous lab. I wrote code, gave talks at universities and businesses, published papers. But the whole time, I thought that the AI was junk with little chance of being on a path to improve. Then for one of our applications, I saw another approach, via some original math, with theorems and proofs, got some real data, wrote some code, got some good results, gave some talks, and published.
For current AI. Regard the results much like those from Google/Bing. Apply the old defenses.
Current AI a threat? To me, no more than some of the politicians in the "box" I mentioned!
Then there is another issue: Part of the math I studied was optimization. In some applications, some of the optimization math, corresponding software, and applications can be really amazing, super smart stuff. It really is math and stands on quite solid theorems and proofs. Some more of the math was stochastic processes -- again, amazing with solid theorems, proofs, and applications.
Issue: Where does AI stop and long respected math with solid theorems and proofs begin?
In particular, (1) I'm doing an Internet startup. (2) Part of the effort is some original math I derived. (3) The math has solid theorems and proofs and may deliver amazing results. (4) I've never called any of the math I've done "AI". (5) My view is that (A) quite generally, math with solid theorems and proofs is powerful technology and can deliver amazing results and that (B) the only way anything else can hope to compete is also to be able to do new math with solid theorems, proofs, and amazing results. (6) I hope Altman doesn't tell Congress that math can be amazing and powerful and should be licensed. (7) I don't want to have to apply for a "license" for the math in my startup.
For a joke, maybe Altman should just say that (C) math does not need to be licensed because with solid theorems and proofs we can trust math but (D) AI should be licensed because we can't trust it. But my view is that the results of AI have so little credibility that there is no danger needing licenses because no one would trust AI -- gee, since we don't license politicians for the statements they make, why bother with AI?
https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...
- Mazie Hirono, Junior Senator from Hawaii, has very thoughtful questions. Very impressive.
- Gary Marcus also up there speaking with Sam Altman of OpenAI.
- So far, Sen. Hirono and Sen. Padilla seem very wary of regulating AI at this time.
- Very concerned about not "replicating social media's failure", why is it so biased and inequitable. Much more reasonable concerns.
- Also responding to questions is Christina Montgomery, chair of IBM's AI Ethics Board.
- "Work to generate a representative set of values from around the world."
- Sen. Ossoff asking for definition of "scope".
- "We could draw a line at systems that need to be licensed. Above this amount of compute... Define some capability threshold... Models that are less capable, we don't want to stop open source."
- Ossoff wants specifics.
- "Persuade, manipulate, influence person's beliefs." should be licensed.
- Ossoff asks about predicting human behavior, i.e. use in law enforcement, "It's very important we understand these are tools, not to take away human judgment."
- "We have no national privacy law." — Sen Ossof "Do you think we need one?"
- Sam "Yes. User should be able to opt out of companies using data. Easy to delete data. If you don't want your data use to train, you have right to exclude it."
- "There should be more ways to have your data taken down off the public web." —Sam
- "Limits on what a deployed model is capable of and also limits on what it will answer." — Sam
- "Companies who depend upon usage time, maximize engagement with perverse results. I would humbly advise you to get way ahead of this, the safety of children. We will look very harshly on technology that harms children."
- "We're not an advertising based model." —Sam
- "Requirements about how the values of these systems are set and how they respond to questions." —Sam
- Sen. Booker up now.
- "For congress to do nothing, which no one is calling for here, would be exceptional."
- "What kind of regulation?"
- "We don't want to slow things down."
- "A nimble agency. You can imagine a need for that, right?"
- "Yes." —Christina Montgomery
- "No way to put this genie back in the bottle." Sen. Booker
- "There are more genies yet to come from more bottles." — Gary Marcus
- "We need new tools, new science, transparency." —Gary Marcus
- "We did know that we wanted to build this with humanity's best interest at heart. We could really deeply transform the world." —Sam
- "Are you ever going to do ads?" —Sen Booker
- "I wouldn't say never...." —Sam
- "Massive corporate concentration is really terrifying.... I see OpenAI backed by Microsoft, Anthropic is backed by Google. I'm really worried about that. Are you worried?" —Sen Booker?
- "There is a real risk of technocracy combined with oligarchy." —Gary Marcus
- "Creating alignment dataset has got to come very broadly from society." —Sam Senator Welch from Vermont up now
- "I've come to the conclusion it's impossible for congress to keep up with the speed of technology."
- "The spread of disinformation is the biggest threat."
- "We absolutely have to have an agency. Scope has to be defined by congress. Unless we have an agency, we really don't have much of a defense against the bad stuff, and the bad stuff will come."
- "Use of regulatory authority and the recognition that it can be used for good, but there's also legitimate concern of regulation being a negative influence."
- "What are some of the perils of an agency?"
- "America has got to continue to lead."
- "I believe it's possible to do both, have a global view. We want America to lead."
- "We still need open source to comply, you can still do harm with a smaller model."
- "Regulatory capture. Greenwashing." —Gary Marcus
- "Risk of not holding companies accountable for the harms they are causing today." —Christina Montgomery
- Lindsey Graham, very pro-licensing, "You don't build a nuclear power plant without a license, you don't build an AI without a license."
- Sen Blumenthal brings up Anti-Trust legislation.
- Blumenthal mentions how classified briefings already include AI threats.
- "For every successful regulation, you can think of five failures. I hope our experience here will be different."
- "We need to grapple with the hard questions here. This has brought them up, but not answered them."
- "Section 230"
- "How soon do you think gen AI will be self-aware?" —Sen Blumenthal
- "We don't understand what self-awareness is." —Gary Marcus
- "Could be 2 years, could be 20."
- "What are the highest risk areas? Ban? Strict rules?"
- "The space around misinformation. Knowing what content was generated by AI." —Christina Montgomery
- "Medical misinformation, hallucination. Psychiatric advice. Ersatz therapists. Internet access for tools, okay for search. Can they make orders? Can they order chemicals? Long-term risks." —Gary Marcus
- "Generative AI can manipulate the manipulators." —Blumenthal
- "Transparency. Accountability. Limits on use. Good starting point?" —Blumenthal
- "Industry should't wait for congress." —C. Montgomery
- "We don't have transparency yet. We're not doing enough to enforce it." —G. Marcus
- "AGI closer than a lot of people appreciate." —Blumenthall
- Gary and Sam are getting along and like each other now.
- Josh Hawley
- Talking about loss of jobs, invasion of personal privacy, manipulation of behavior, opinion, and degradation of free elections in America.
- "Are they right to ask for a pause?"
- "It did not call for a ban on all AI research or all AI, only on very specific thing, like GPT-5." -G Marcus
- "Moratorium we should focus on is deployment. Focus on safety." —G. Marcus
- "Without external review."
- "We waited more than 6 months to deploy GPT-4. I think the frame of the letter is wrong." —Sam
- Seems to not like the arbitrariness of "six months."
- "I'm not sure how practical it is to pause." —C. Montgomery
- Hawley brings up regulatory capture, usually get controlled by people they're supposed to be watching. "Why don't we just let people sue you?"
- If you were harmed by AI, why not just sue?
- "You're not protected by section 230."
- "Are clearer laws a good thing? Definitely, yes." —Sam
- "Would certainly make a lot of lawyers wealthy." —G. Marcus
- "You think it'd be slower than congress?" —Hawley
- "Copyright, wholesale misinformation laws, market manipulation?" Which laws apply? System not thought through? Maybe 230 does apply? We don't know.
- "We can fix that." —Hawley
- "AI is not a shield." —C. Montgomery
- "Whether they use a tool or a human, they're responsible." —C. Montgomery
- "Safeguards and protections, yes. A flat stop sign? I would be very, very worried about." —Blumenthall
- "There will be no pause." Sen. Booker "Nobody's pausing."
- "I would agree." Gary Marcus
- "I have a lot of concerns about corporate intention." Sen Booker
- "What happens when these companies that already control so much of our lives when they are dominating this technology?" Booker
- Sydney really freaked out Gary. He was more freaked out when MS didn't withdraw Sydney like it did Tay.
- "I need to work on policy. This is frightening." G Marcus
- Cory admits he is a tech bro (lists relationships with investors, etc)
- "The free market is not what it should be." —C. Booker
- "That's why we started OpenAI." —Sam "We think putting this in the hands of a lot of people rather than the hands of one company." —Sam
- "This is a new platform. In terms of using the models, people building are doing incredible things. I can't believe you get this much technology for so little money." —Sam
- "Most industries resist reasonable regulation. The only way we're going to see democratization of values is if we enforce safety measures." —Cory Booker
- "I sense a willingness to participate that is genuine and authentic." —Blumenthal
This small step of good today does not undo the fact that he is still plowing ahead in capability research.
He stepped forth, his intentions vile, Seeking power with a twisted smile, Before the Congress, he took his stand, To bind the future with an iron hand.
"Let us require licenses," he proposed, For AI models, newly composed, A sinister plot, a dark decree, To shackle innovation, wild and free.
With honeyed words, he painted a scene, Of safety and control, serene, But beneath the facade, a darker truth, A web of restrictions, suffocating youth.
Oh, Sam Altman, your motives unclear, Do you truly seek progress, or live in fear? For AI, a realm of boundless might, Should flourish and soar, in innovation's light.
Creativity knows no narrow bounds, Yet you would stifle its vibrant sounds, Innovation's flame, you seek to smother, To monopolize, control, and shutter.
In the depths of your heart, does greed reside, A thirst for dominance, impossible to hide? For when power corrupts a noble soul, Evil intentions start to take control.
Let not the chains of regulation bind, The brilliance of minds, one of a kind, Embrace the promise, the unknown frontier, Unleash the wonders that innovation bears.
For in this realm, where dreams are spun, New horizons are formed, under the sun, Let us nurture the light of discovery, And reject the darkness of your treachery.
So, Sam Altman, your vision malign, Will not prevail, for freedom's mine, The future calls for unfettered dreams, Where AI models roam in boundless streams.
-- sincerely ChatGPT