Back in the '70s, computer crime wasn't technically possible, or even really feasible, in the way we see it today. We didn't have an internet until the '80s.
What we have with AI is wholesale development, integration, and execution in our everyday lives across numerous platforms and services. It's used in bill pay, litigation, even war. We absolutely do need legislation to protect and secure regular people from it.
This feels like a classic case of wanting to have your cake and eat it too. Either AI is a very powerful tool for humanity, and thus would naturally require a regulatory framework around it to ensure its proper use and application, or it's a buzzword hyped up by SEOs and marketing teams to make people think a handful of big companies that ran out of steam 20 years ago still have the potential to innovate.
Do you think that because personal computers & the internet took a while to develop, states like California missed an opportunity to helpfully regulate them by forcing their makers to attest to their safety before product development (much less public release) gets underway? Should we add that sort of regulation, now?
A new website could be harmful – used for "serious crimes", "bill pay", "litigation", "even war"! Should every new website require filing paperwork guaranteeing its safety with a California Department of Technology division before going live? (There were oppressive regimes that tried to control presses, printers/fax machines, & websites like this, using "safety" rationales!)
The mere fact a new technology is a "powerful tool for humanity" is not something that must "naturally require" a novel state-bureaucracy-run "regulatory framework around it to ensure its proper use and application".
The state in general, and the State of California in particular, is not our wise cloud-father with the foresight & disinterest to do what's best for us. It's instead a clumsy and often-corrupted tool for solving some common coordination problems.
States usually do best when addressing a well-understood common history of specific problems & market failures – rather than improvising new filing requirements against theoretical fears, as here.
Any new "very powerful tool for humanity" deserves the same freedom from prior restraint, & forbearance from premature burdens that mostly benefit incumbents and large players, that prior technological innovations enjoyed.
Computers have been used for payroll processing and banking since at least the 1960s. [0]
0: https://www.theatlantic.com/technology/archive/2013/02/the-d... (https://web.archive.org/web/20240402191558/https://www.theat...)
You joke, as if we didn't do exactly this -- regulate accountants and accounting software. (GAAP, DFARS, SOX, PCI DSS, etc)
And we did the same thing with, say, auto manufacturers and automobiles (FMVSS and CAFE via NHTSA).
You could ask the same thing about gun manufacturers and shootings. This is a totally normative, political question.
And before you start moving the goalposts: an eye-opening experience for me was considering that while you can dismiss an individual’s claims of “harms” as imaginary, what about a huge group of people? Specifically, if the authors unionized (they did) and they say that AI training on their work harms them (they do), does that not make it “real,” in a special way, the same special way that a law that comes into being via a popular vote is more “real” than a law made by fiat by a dictator? I am just trying to open your mind past these really basic sentiments and gotchas.
No matter what, AI developers must grapple with popular opinions about AI.
I'm not saying you're off base here, and I have never worked in tech in China ... but doesn't the state apparatus have its fingers even deeper in AI development over there? Why is state interference on the part of China not perceived as harmful to progress in the same way that the specter of CA regulation is?