1. There are risks specific to AI or specifically aggravated by AI (easy)
2. Federal regulation of AI safety will reduce those risks (good luck)
When articulating your arguments for point 2, I would recommend addressing the thorny issue of proliferation.
But don't you agree that at least some legal questions should be asked about this AI overhype? Because I don't see any being asked so far.
Edit: this is the kind of legal question I was talking about; I just learned of it: https://news.ycombinator.com/item?id=38102760
I have trouble answering that question as you've asked it. It seems like we agree on several things, namely:
1. that any technology is subject to worst-case analysis; and,
2. that it is appropriate in principle for law to govern the use of technology.
Here's what I'm having trouble unpacking in your question:
1. What are the exact legal questions you think should be asked, and aren't? (N.B. Your link is paywalled, and doesn't seem to refer to a specific legal question)
2. What exactly about AI do you think is overhyped, and why do you seem to think I disagree?
I don't have a lot of context to go on, so some of my questions may also contain unwarranted assumptions. I hope you'll point them out :)
1. Have you thought about the difficulties involved in legislating around AI? Specifically, I've found it very difficult to articulate what is and isn't an appropriate use of AI with any real precision. Let me give an example. I think we can all agree that "nudifying" photographs of minors is at least in poor taste, if not outright dangerous, and that it is fair game to make this particular use of the technology illegal. However, where do you stand on the idea that regulators should disallow the "nudification" capability altogether? I can think of several legitimate (if somewhat niche) uses, ranging from the creation of medical diagrams and teaching materials to filming love scenes in mainstream cinema with clothes on and removing them in post-production. Do you think it's fair game to disallow these uses as well? If so, should this be absolute liability, or should there be a notion of intent? If you think, as I do, that the technical capability should be unrestricted except insofar as it is employed to illegal ends, then we don't need any new laws. We simply apply the existing laws against, say, involuntary pornography and the sexual exploitation of minors, and the problem is solved from a legal perspective; enforcement is then a job for the executive branch.
2. I would appreciate it if you could speak to the risk of misclassification. Many of the proposed regulations involve training AI systems to monitor other AI systems (or themselves, as in the case of prompt engineering). What happens when the black box makes mistakes? Do we accept that a small number of innocent people will be labeled X by the AI? How should the law take this possibility into account? And again, do we accept that legitimate uses will be de facto crippled or disabled entirely? That's an outcome I would very much like to avoid.
3. On a macro-scale, how do we deal with the fact that other (perhaps less scrupulous) nations will have access to unrestricted AI?
Point 3 is particularly troubling from a regulatory perspective, because software's penchant for proliferation is astronomically greater than that of, say, nuclear weapons. This feels like the '90s crypto export controls all over again, which were at minimum a gigantic waste of resources and at maximum a crippling economic vulnerability.
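To make the misclassification worry in point 2 concrete, here is a back-of-the-envelope base-rate calculation. All the numbers (population size, base rate, true/false-positive rates) are assumptions I picked for illustration, not measurements of any real monitoring system, but the structural point holds for any rare-event classifier:

```python
# Base-rate sketch for an AI monitor flagging an "illegal use".
# Every number below is an assumed figure for illustration only.

population = 1_000_000   # users screened
base_rate  = 0.001       # fraction actually engaged in the illegal use
tpr        = 0.99        # assumed true-positive rate of the monitor
fpr        = 0.01        # assumed false-positive rate of the monitor

guilty   = population * base_rate        # 1,000 actual offenders
innocent = population - guilty           # 999,000 innocent users

true_positives  = guilty * tpr           # 990 offenders correctly flagged
false_positives = innocent * fpr         # 9,990 innocents flagged

precision = true_positives / (true_positives + false_positives)
print(f"Flagged users who are actually guilty: {precision:.1%}")  # ~9.0%
```

Even with a monitor that is 99% accurate in both directions, roughly nine out of ten flagged users are innocent, simply because the targeted behavior is rare. Any law leaning on automated classification has to decide what happens to that majority.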
P.S.: My friend, it is exactly your job to argue your case when speaking about public issues. The term for this is "civic duty".