first of all, protection of individuals is the only thing that matters.
yes, the current rules are a patchwork, but i don't see any alternative.
how is setting constraints on the language model going to help protect me from abuse by that model? for example, how would such a regulation prevent facial recognition? a more limited model only limits the capacity of a facial recognition system, potentially leading to more false positives, which would make things worse.
on the other hand, a rule banning facial recognition provides full protection, as does a ban on using machine algorithms to make decisions that affect a person's life.
AI use is either safe or low-risk, or it is dangerous. those dangers need to be averted. as i see it, the EU does not regulate AI at all; it regulates the harmful effects of technology on people. you can build whatever AI tool you want, as long as you use it in a manner that does not hurt people. or is my understanding of the current regulations wrong?
The difference -- why all of this stuff is being regulated now and not 20 years ago -- is that these models are just much more powerful and accurate today. The impetus for regulation is not that a given machine learning application exists, but that it works really well.
The power and sophistication of machine learning models correspond extremely strongly to the scale of the data they are trained on. If you are pro-regulation, then what you really want to regulate is not the mere existence of a machine learning application, but the scale of data with which it is created.
--
For another way of making the point: consider the phrase you used, "a ban on using machine algorithms to make decisions that affect a person's life". Examine it like an adversarial lawyer: what's the threshold for "affect"? Everything affects a person's life. Does Google Search work under this standard? It uses a machine algorithm to decide what to show, which can affect the user's life. Does Netflix's film recommendation work? Does Spotify's recommendation work? Okay, you want those things to work, but you don't want [insert other purpose]. You're going to find that the lines are blurry everywhere you look, and that makes for really difficult regulation.
and now you are suggesting that if these are to be regulated, they should effectively stop improving? what would be the point of that?
Everything affects a person's life
well, yes, so there must be a way to force a company to reverse a change that affects me.
kneecapping models doesn't prevent a company from disabling an electronic lock or closing my account. these are problems that already exist regardless of what caused those changes. google or facebook should not be allowed to terminate users unless they can prove fraud; how they arrived at the decision is quite irrelevant. insurance companies should not be allowed to deny coverage without a human verifying the decision, and without that human being able to reverse it. weaker models are not going to enforce that, unless the models are so weak that they become useless. again, what would be the point of that?
until recently those models were not good enough. i read that as: they were useless for serious applications; they were research under development. we have been working on this for decades, and only now are we approaching the point where these tools actually become useful.
but the impetus for regulation is that these models are being used and yet still do not work well enough. they do make mistakes, and those mistakes need to be supervised and fixed where needed. if they worked perfectly, to the point that an affected person could get them to reverse decisions, this would be less of an issue.
i agree with you that the current regulations are difficult, but i do not see the benefit of regulating how those models are built instead. the damage happens at the interface between human and machine, and to prevent humans from getting hurt, that interface is what needs to be regulated.
what you are suggesting sounds to me like proposing that knives made from steel are too dangerous, because a steel blade doesn't become dull fast enough, so we should instead only make knives from wood to make them weaker. but a wooden blade can still kill. so really what needs to be regulated is how the knives are used, not how they are made.