Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence - https://news.ycombinator.com/item?id=38067314 - Oct 2023 (334 comments)
According to the EO's guidelines on compute, something like GPT-4 probably falls under the reporting requirements. Also, in the last 10 years GPU compute capabilities grew roughly 1000x. What will things look like even 2 or 5 years from now?
Edit: yes, regulations are necessary but we should regulate applications of AI, not fundamental research in it.
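For concreteness, the EO's interim reporting trigger is 10^26 FLOPs of training compute. A back-of-the-envelope check using the common ≈6·N·D rule of thumb (training FLOPs ≈ 6 × parameters × tokens) — plugging in publicly rumored, unconfirmed GPT-4-scale figures purely for illustration — along with an extrapolation of the "1000x in 10 years" growth claim:

```python
# Rough training-compute estimate using the common 6*N*D rule of thumb.
# The parameter and token counts below are unconfirmed public rumors,
# used only to illustrate the order of magnitude.
EO_THRESHOLD_FLOPS = 1e26  # interim reporting threshold in the EO


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


gpt4_scale = training_flops(params=1.8e12, tokens=13e12)  # rumored figures
print(f"Estimated training compute: {gpt4_scale:.2e} FLOPs")
print(f"Exceeds EO threshold: {gpt4_scale > EO_THRESHOLD_FLOPS}")

# Extrapolating "1000x in 10 years": 1000^(1/10) ~= 2x per year.
growth_per_year = 1000 ** (1 / 10)
print(f"~{growth_per_year:.2f}x/year -> ~{growth_per_year**2:.0f}x in 2 years, "
      f"~{growth_per_year**5:.0f}x in 5 years")
```

On these (rumored) numbers a GPT-4-class training run lands just over the 10^26 line, and at ~2x/year a fixed compute threshold is overtaken by roughly 4x in 2 years and ~32x in 5 — which is presumably why the EO makes the thresholds explicitly updatable.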
A healthy regulatory body provides for that by setting standards and holding the relatively few vendors liable for conformance rather than the countless users.
It does interfere with innovation for those vendors doing foundational research, but it enables richly funded innovation in applications. It seems like we're at a point where lots of people want to start working on applications using current or near-term technology. Failing to provide them the liability protections they need is what will stifle practical, commercial innovation, leaving AI applications in the hands of the few specialist technology companies that are confident in their models and wealthy enough to absorb any liability issues that arise.
As for reporting minimums, the ones in the EO are explicitly temporary. Quoting directly: "...shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements..." "Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements..."
So, my question is: why are you ignoring the actual things happening in favor of complaining about phantoms?
My point is that applications of AI must be regulated, not fundamental research.
Alright, you hooked me in. What are they?
1. There are risks specific to AI or specifically aggravated by AI (easy)
2. Federal regulation of AI safety will reduce those risks (good luck)
When articulating your arguments for point 2, I would recommend addressing the thorny issue of proliferation.
Regulate AI applications, not fundamental research in it.
You can't trust companies to self-regulate.
Talk about snatching defeat from the jaws of victory... damn
(I'm not endorsing this regulation. It's not at all clear that any regulation could be helpful. As you say, these regulations aren't going to slow non-US research efforts.)
[1] https://www.iheart.com/podcast/105-behind-the-bastards-29236...
Edit: To elaborate, it's pretty easy to cherry-pick cases of either over- or under-regulation and use them to "prove" either side of the argument. There's nothing in the Bill Gurley talk that provides any insight into whether AI should be regulated, because it doesn't directly engage with issues specific to AI. Instead, it just says: "tech regulation bad".
I have been afraid of over-regulation of AI but standards and testing environments don't sound so bad.
It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.
Give them a minute, an agency needs to exist before it can be captured. There hasn't been time yet for a single revolving-door hire.
Thou shalt not make a machine in the likeness of a human mind
I guess we're heading for spice then

"(a) prevent unfair methods of competition and unfair or deceptive acts or practices in or affecting commerce;
(b) seek monetary redress and other relief for conduct injurious to consumers;
(c) prescribe rules defining with specificity acts or practices that are unfair or deceptive, and establishing requirements designed to prevent such acts or practices;
(d) gather and compile information and conduct investigations relating to the organization, business, practices, and management of entities engaged in commerce; and
(e) make reports and legislative recommendations to Congress and the public. "
[1] https://www.ftc.gov/legal-library/browse/statutes/federal-tr...
Netizen Safety Agency?
Citizens(/Consumers) Browsing in Privacy?
to repurpose a couple.
Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.
Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.
In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."
The above is from the horse's mouth (ChatGPT4)
My commentary:
We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system of roughly 10K neurons (vs. ~100B in humans) and no brain at all. We have not yet been able to replicate the agency present in even a simple nervous system.
I would say even an amoeba has more agency than a $1B+ OpenAI model, since the amoeba can feed itself and multiply far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.
What is my point?
We're jumping the gun with these regulations. That's all I'm saying. Not that we shouldn't keep an eye on things and maintain a healthy amount of concern, but we are clearly jumping the gun, since AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode) due to their lack of agency (as unitary agents and as a collective).
So I'm assuming some of you have seen more details - can someone share where they can be found?
It is against HN rules to call out a commenter for not having read the article, and earlier comments set the tone of the discussion once a post hits the front page. For many posts, by the time they hit the front page, the top-voted comments include hot takes from someone who just saw the title and wrote a comment about whatever they imagined the article to be.
Legislative action would theoretically be best, but our current Congress couldn't produce a better bill than a wet Speak & Spell.
It can only help existing companies stifle competition and guarantee revenue.