Could it ever be the case, I wonder, that we could trust/enforce/believe that a model had so thoroughly abstracted what it learned from its training inputs that it was no longer a derivative work of them?
I've seen the examples where a model reproduces recognizable characters from popular media. Those look like they might be "just" overfitting? I can see that as desirable from the point of view of being able to create a picture of "Robocop shopping for diapers", but that same memorization is what makes the derivative-work argument hard to dismiss. Maybe we could compromise and converge on a point where AI art isn't quite so demonized and is instead seen as a useful tool.