Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict liability crime, so do you support prosecuting whatever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?
I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.
That is just what the law says today (AIUI), and is consistent with how it has been applied.
Collectively, probably more. Grok? Not unless you count each frame of a video, I think.
> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.
If the threshold is one in a billion… well, the risk is of adversarial outcomes, so you can't just toss a billion random attempts at it and see what pops out. But as for generating a billion images: if it's anything like Stable Diffusion you can stop early, and my experiments with SD suggested the energy cost even for a full generation is only $0.0001/image*, so a billion is merely $100k.
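The arithmetic behind that $100k figure, as a quick sketch (the $0.0001/image energy cost is the estimate from my linked post; early stopping would only make it cheaper):

```python
# Back-of-envelope: cost of brute-force generating a billion images
# at ~$0.0001/image in energy (full Stable Diffusion generation).
cost_per_image = 0.0001  # USD per image, assumed from my SD measurements
images = 1_000_000_000   # one billion attempts

total = cost_per_image * images
print(f"${total:,.0f}")  # prints $100,000
```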
Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I mean, I guess you could leave topless bodybuilders in there, in which case all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.
> That may precisely be the point of this tbh.
Perhaps. But I don't think we would need that excuse if this were the goal, and I am not convinced this is the goal in the EU, for other reasons besides.
* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html
It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.