"Valve is not willing to publish games with AI generated content anymore"
Your title changes the meaning; they didn't ban games, afaict.
It's also a misleading post, as it's specifically GenAI where authors can't prove they have, or simply don't have, rights to the content.
If you use ProcGen etc or have full rights to the data used, I can't imagine there would be any issues.
Yep, you are absolutely right.
I think Valve is overreaching with this policy.
It should be that you need to prove the art used in the game itself does not breach any copyright. The tool used to create said art has no bearing on the final art with regard to copyright.
Otherwise, would Valve also have mentioned that the developer should produce evidence of their Photoshop license/subscription (if they used Photoshop in the course of making their game)? Do they need to check that the version of Windows being used to make the game is a licensed one?
Even more specifically, the author admitted to the images being "obviously AI generated" and Valve alleges that the images themselves in the game's initial submission contained copyrighted third-party content.
Even the original title seems questionable until properly substantiated, so I've reverted to it plus tacked on a question mark.
Under U.S. law you don't need to prove rights for content used in this way. It's transformative and does not replicate the heart of the original work, so it's protected under fair use. AI-generated content cannot be copyrighted, so they cannot claim ownership regardless.
https://www.artnews.com/art-news/news/ai-generator-art-text-...
The AI generated images are not protected by copyright because they are not authored by a human. Whether or not it is legal to train a model on copyrighted images is irrelevant for this. So it seems clear that Valve made the wrong call here.
Valve's worried that AI-generated art is in a murky copyright state, and don't want to open themselves up to being sued.
Three possibilities:
1. It's just fictional. Probably written by a troll or generated by ChatGPT.
2. Steam refused to publish the game due to some obvious copyright issues (like they told Midjourney to generate Superman or One Piece characters)
3. Steam is banning any AI generated assets.
My bet is 1 > 2 > 3.
The internet is flooded with content right now to the effect of "Valve might be doing this thing", but not one of those sources has actually reached out to Valve for comment. Instead they all cite a random commenter on Reddit (or they cite each other).
So it sounds like OP slapped some half-assed generated images into a game and tried to submit it. Valve now can't really trust someone that does that to have done any due diligence.
from post:
> contains art assets generated by artificial intelligence that appears to be relying on copyrighted material owned by third parties.
So I'm guessing 2
Besides my already established biases towards AI: It's threatening to creative endeavors, not because it exists, but because it will impact the earning potential of creatives.
If so, the reaction I've seen is quite positive. Very unlikely though.
I'm sure they mostly just don't want to wind up in court with a lawyer being able to say that they let [blatant example here] get published on their store. So long as they can credibly claim that there was no way for them to tell something was in an objectionable category, I'd imagine they're fine with it.
Their rules, if you're curious: https://partner.steamgames.com/steamdirect
Adhering to legal and copyright standards isn't a "cop-out"
They are obviously able to identify some copyright material.
Assuming this is even real, it may have more to do with preventing another 1983-style video game crash resulting from the market being overwhelmed with crappy games.
Then again, most AAA games today are broken pieces of suck, so IDK.
They also care about their reputation amongst content-producers (game makers). Youtube faced this exact dynamic back in the day and have found it better to side with the large creators who care very much about protecting IP rights and so they exercise a heavy hand against copyrighted material.
And take the reputation hit that would go along with that. Valve's business is 30% technology and 70% the reputation of being much less untrustworthy than the alternatives. If they lose that they can close shop.
But AI generated content is NOT banned. You just have to prove you have the copyright (or permission) for the training data.
Notably: not all AI-generated content, but rather AI-generated content from models that were trained on material that's not owned by the person submitting the game.
The Copyright Act does not expressly impose liability for contributory infringement. According to the U.S. Supreme Court, the "absence of such express language in the copyright statute does not preclude the imposition of liability for copyright infringements on certain parties who have not themselves engaged in the infringing activity."
One who knowingly induces, causes, or materially contributes to copyright infringement by another, but who has not committed or participated in the infringing acts themselves, may be held liable as a contributory infringer if they had knowledge, or reason to know, of the infringement. See, e.g., Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005); Sony Corp. v. Universal City Studios, Inc., 464 U.S. 417 (1984).
IANAL. Considering that Valve not only gives games a retail platform but also approves games before sale and takes a cut of that sale, and assuming the Reddit post isn't a lie, I'm gonna guess Valve's probably well-staffed legal dept decided not to take a seemingly iffy legal gamble on a game that probably wasn't going to rake in a ton of sales anyway.
Recent text-to-image models have improved enough that it's possible to get realistic generations, without the telltale Midjourney dreaminess, with a modicum of effort, so banning obviously-AI-generated images is shortsighted and unsustainable.
The whole space is somewhat amusing to me. What is the bigger moral hazard: openly disclosing everything about your content pipeline and getting your team's efforts shitcanned, or keeping everything private unless a court order shows up?
What would satisfy an audit trail showing that no tainted AI data has made it into a digital image? It would involve a chain of attribution per fraction of a pixel through all its past iterations.
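For illustration, here's a minimal sketch of what such an attribution chain could look like, assuming a hypothetical per-asset manifest where every iteration of an asset records its own hash, the tool that produced it, and the hashes of every input it was derived from (all the names and byte strings here are made up):

```python
import hashlib
import json

def manifest_entry(asset_bytes, tool, parents):
    """Record one iteration of an asset: its content hash, the tool
    that produced it, and the hashes of the inputs it derives from."""
    return {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "tool": tool,
        "parents": parents,  # hashes of prior iterations / source images
    }

# Hypothetical chain: a hand-drawn sketch, then a Photoshop edit of it.
sketch = manifest_entry(b"raw-sketch-bytes", "pencil+scanner", [])
edit = manifest_entry(b"edited-bytes", "photoshop",
                      [sketch["sha256"]])

chain = [sketch, edit]

# An auditor could verify that no entry lists an unknown parent hash,
# i.e. every input is itself accounted for inside the chain.
known = {entry["sha256"] for entry in chain}
assert all(p in known for entry in chain for p in entry["parents"])
print(json.dumps(chain, indent=2))
```

Even this toy version shows the problem: the chain only proves what was recorded, not what the tools actually did internally, which is why per-pixel provenance through a generative model is effectively impossible to attest.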
Which wouldn't be sufficient since, as stated many times before, the diffusion process most text-to-image AIs use is not collaging.
This has been the legal street smarts for a while and doesn't seem like a big development to me. As usual, you don't admit or allude to anything. It's like when I'd write code and say no I've never even visited StackOverflow.
As placeholders, or to create little bits and doodles (like a mouse cursor in the style of an armored fist), there are lots of little graphical icons in a game that would otherwise have to be created by a graphical artist. Generative art is really useful in my experience.
It's reduced the work to the point where I can toy with it in my off time and spend most of my effort in the actual programming and development.
The other idea I have toyed with, coming from professional ML experience, was to build my own generative model and use it to create my own art assets. Here I wonder how the copyright rules would work: would the assets I train on be subject to copyright? This is a much bigger conversation at that point, and I won't be the only one affected.
The issue people have is when you just use a dataset trained on someone else's work and pass it off as your own, and in the case of Steam games, most likely profit from it.
Direct incorporation of generative art into a commercial product is much more murky.
Unless you're training on assets available in the public domain, any generated output from your "custom model" would have the same potential copyright issues as midjourney, stable diffusion, etc. What exactly are you confused about?
Automation will be a force multiplier for laziness and predation more so than for creativity.
No, we've been inundated with low-quality content for decades now. This is nothing new.
Fortunately, ratings and reviews and popularity have always been an extremely effective antidote.
Even if 99% of stuff on a platform is total crap, nobody cares. It's a non-issue. Whether you're talking about music, books, TV shows, or whatever. The 1% rises to the top and you don't honestly need to pay attention to the rest.
If you choose to pay $20 for something that has 3 reviews that are all 1-star, then that's more your problem than the system's problem.
Trends matter too. If it gets to be an order of magnitude cheaper to shill your products on social media and forums, set up SEO crap articles to phish users from search results, or churn out good old fashioned email scams, then expect an order of magnitude worse signal-to-noise ratio on the internet as a result. There could be a point reached where the internet is functionally broken, with the signal-to-noise ratio too low to make it useful for anything, save for navigating directly to known good hosts that themselves will become increasingly lucrative targets for enshittification.
The problem with this argument is that scale matters.
When a small number of people were trading mp3 files on FTP servers it was not seen as a problem. When Napster came out, it was seen as a problem. It was correctly seen as a qualitative shift in the effect it would have on society.
I'm working on simulating a small town using Generative AI agents, schedules, social interactions, realistic reactions to outside events, dialogue between characters, the whole shebang.
A year ago that wasn't an "after work side project".
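For a sense of why this is suddenly side-project-sized, here's a toy sketch of the agent loop such a simulation might run. The `llm()` function is a hypothetical stand-in for whatever model actually powers the dialogue; everything else is just bookkeeping:

```python
def llm(prompt):
    # Hypothetical placeholder for a real model call; a real project
    # would send the prompt to an LLM API and return its reply.
    return f"[reaction to: {prompt[:40]}...]"

class Agent:
    def __init__(self, name, schedule):
        self.name = name
        self.schedule = schedule   # hour -> planned activity
        self.memory = []           # running log of observations

    def step(self, hour, world_events):
        activity = self.schedule.get(hour, "idling")
        obs = f"{hour}:00 {self.name} is {activity}; events: {world_events}"
        self.memory.append(obs)
        # Condition the model on recent memory to get a reaction.
        return llm("\n".join(self.memory[-5:]))

alice = Agent("Alice", {8: "opening the bakery", 12: "having lunch"})
for hour in (8, 9, 12):
    alice.step(hour, world_events=["power outage"] if hour == 9 else [])
print(len(alice.memory))  # prints 3
```

The hard part a year ago wasn't this loop; it was that nothing cheap could fill in the `llm()` call with believable, in-character responses.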
I just did a full launch on https://www.generativestorytelling.ai/ - a side project that was only possible because of AI help. Between art assets and also coding in brand new areas that I hadn't used before, AIs are an obscene boost to what individuals can do.
The price and complexity of software development projects have been increasing for years now; AI is a huge reset on the amount of effort needed to make stuff.
Depends what the individual is trying to do. Making memes? Blog spam? Sure. But for non-trivial content I haven’t seen an example that was compelling.
Everywhere. Games, porn, text, articles, music, etc. For the generation that grew up with the internet already existing, this is their epoch moment, lives pre and post generative AIs.
I was thinking the other day that original artworks are going to be far more valued amid a glut of AI-generated work.
Thinking slightly ahead, you can find your absolute favorite artist, and in seconds use their style you love so much to make the family portrait you would never be able to commission them to do. But going forward even more…
It’s just not the same as a print right? Well, thanks to AI being able to learn and determine where brush strokes would land, we take advancements from 3D printers and your desktop painting rig picks up a brush and paints it just as the artist would have.
Then going forward even more.. the artist himself needs some cash and knocks out 100 of these customs while they sleep, signs them, and now they are originals, sort of.
So… verifiable originals are going to be the hot thing. A painting with a video of the artist painting it… but not an AI generated video of course!
Maybe the artist will have to print it on location while you watch.
For the music industry, this happened already. Artists make a lot more money from live performances than from album sales or streaming fees.
"you think I'd be working in a place like this if I could afford a real snake?!?"
Good luck convincing enough people in the art community to give you data on their process so you can do this. There's definitely enough out there in things like PSD files but the AI community has been so rude and antagonistic to art communities that most of them have a kneejerk hate reaction on any mention of the technology, and rightfully so. AI users and companies have been gleefully abusing artists from day 1.
AI generated content is not meaningfully better or worse than these low effort games, though taking the time to generate passable content with AI is probably a lot more effort than just using $50 worth of assets that are already packaged up for unity.
Steam is like eBay but for low quality games
It happens to them all. There are big games on steam, but 95% of the stuff on there is low value, low cost content
I'm pretty sure if this is Valve policy they'll have no trouble saying so publicly. I miss the old days of journalism where someone made an effort to get the story correct including responses from the named parties.
> In its statement to PC Gamer, Valve said that "The introduction of AI can sometimes make it harder to show a developer has sufficient rights in using AI to create assets, including images, text, and music. In particular, there is some legal uncertainty relating to data used to train AI models. It is the developer's responsibility to make sure they have the appropriate rights to ship their game.
> We know it is a constantly evolving tech, and our goal is not to discourage the use of it on Steam; instead, we're working through how to integrate it into our already-existing review policies. Stated plainly, our review process is a reflection of current copyright law and policies, not an added layer of our opinion. As these laws and policies evolve over time, so will our process."
It doesn't matter if they are able to enforce it, Valve can use this policy as cover if they ever get sued.
Don't overthink the motivation. They don't even have a bulletproof way to detect AI imagery; it evolves every single day as an arms race, and detection is a full-time job. Even a FAANG or a state actor would need to dedicate whole teams to detection technology and would still have false negatives.
The same sorts of things already happen on YouTube and Twitch, for example, where types of content are against TOS or copyright but enforcement is sporadic and selective: smaller operations often fly under the radar, bigger creators who net the org sufficient revenue can get away with more, and the automated detection tools are flawed.
Are people actually trying to detect AI-generated content? That would not only be pointless and futile; the threat of false positives would be enormously detrimental to anyone creating legitimate work.
It is such a ridiculously bad idea I'm dumbfounded that anyone smart would be trying to do it.
People are 'offering their services' where you can DM them a link to an image and they'll eyeball it and tell you if it's made by AI. Laughable hubris, if it weren't for the inevitable ramifications of false positives.
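The false-positive worry has a concrete base-rate flavor. As a purely hypothetical illustration, assume a detector that catches 95% of AI-made images and wrongly flags 5% of human-made ones, applied to a store where only 5% of submissions actually use AI assets:

```python
# All numbers here are assumptions for illustration, not measurements.
prevalence = 0.05           # fraction of submissions that use AI art
sensitivity = 0.95          # detector catches this share of AI images
false_positive_rate = 0.05  # detector wrongly flags this share of human art

true_flags = prevalence * sensitivity
false_flags = (1 - prevalence) * false_positive_rate

# What fraction of everything the detector flags is actually human work?
share_of_flags_that_are_wrong = false_flags / (true_flags + false_flags)
print(round(share_of_flags_that_are_wrong, 3))  # prints 0.5
```

Under these assumptions, half of all flagged submissions would be legitimate human work, which is exactly the "enormously detrimental" scenario described above.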
Think about security or trust and safety or anti-scam or anti-fraud.
AI generated image, video, and audio can be used to circumvent a lot of systems used in these domains. Many of these domains are for protecting users from being scammed, being impersonated, being tracked, etc.
Think about criminal court. Evidence may become inadmissible if it can't be proven whether an image, video, or audio document is a forgery or captured reality.
It's a bit flippant and absurd to insult the intelligence of people working on AI detection. I'd be a bit dumbfounded by someone dismissing an effort without spending time to think about why that effort may exist.
>It is such a ridiculously bad idea I'm dumbfounded that anyone smart would be trying to do it.
Agree with you there.
If the art used in a game violates copyright or contains imagery of exploited children, ban it of course, but what does that have to do with whether it was generated via AI or created in another manner?
If anything, AI-generated art should be _less_ susceptible to copyright issues, because everything it produces is original (even if it's not in an original style).
Imagine you are a trademark holder and someone is using your IP but you don't enforce your trademark by litigating. Your claim is weakened.
It shows the public and the court how significant this problem is for your party.
Edit: copyright -> trademark
Intellectual property is an encompassing term that seems to lead to this sort of confusion.
Also from a store perspective, any game where shortcuts like this are used tend to be shit games. They don't want spam games to be pumped. There's already enough indie trash platformers that nobody wants.
High On Life used Midjourney for its in-game posters - https://store.steampowered.com/agecheck/app/1583230/ - https://www.thegamer.com/high-on-life-ai-generated-art/
And Firmament https://store.steampowered.com/app/754890/Firmament/ - https://www.pcgamer.com/firmament-ai-generated-content/
So is Valve now going to remove these games off the store? This seems like a very terrible way to handle this - they need to make clear rules and make a public statement, not just start banning apps that they sense use AI art.
And yeah, they should kick those games off for using copyrighted materials that they do not own.
I find this hard to agree with. A game engine is a "shortcut" too, I can imagine people saying at some point anything developed with Unity would "tend to be shit games".
Associating quality with visual fidelity anyway is wrong, look at Terraria, I'm pretty sure anyone semi competent with AI generation could produce better assets, but it wouldn't help them produce a better game.
People will use gen AI art in good games, and people will use gen AI art in terrible games.
Yeah, they indicate that they have already submitted multiple games with AI generated assets, and submitted this one "with a few assets that were fairly obviously AI generated." Maybe I'm being unfair and they are making really good games, but these are not good indicators to me.
Stable Diffusion spits out slightly blurrier versions of the pictures in its training set.
Guess it’s time to ask for forgiveness rather than ask for permission and not let Valve know where my art assets are coming from in my web-based API.
If I were making a game I’d just lie and lie at this point.
It's cool to see the development of new ethical standards in response to new technology. If I could get an option for ethically-sourced AI, which only uses public-domain art / text / code for training, that'd be nice.
Nowadays there isn't the same attitude so much. Many people still pirate sounds, but skeptic listeners will sometimes ask musicians to show off their project files to embarrass them over how many pirated Cymatics drums they use and their version of Sylenth licensed to "RuTorrent".
It wouldn't surprise me if the same thing happened today. AI-assisted development will take off for a while, and then people will ask self conscious questions like "nice art, who's your art director?"
And the musicians comply? Weird.
GenAI clearly meets the "transformative" standard.
On one hand, it seems likely that it will have difficulty with the "Amount and substantiality" factor, as it considers the whole work; on the other hand, this is not necessarily a hard barrier given the "transformative" nature.
My guess is that the "Effect upon work's value" standard vs. the "transformative" standard will be the area where there is most action. Clearly, in aggregate, GenAI will have great impact upon works' value. However, this is not the usual standard (it is about individual works), and I would argue that applying it in aggregate would be creating new law by the courts.
Hopefully we will get a case to the supreme court to resolve this, quickly. I think that this is a boon for humanity and I would like to see the cuffs taken off as quickly as possible.
IANAL, but the problem is that fair use is an affirmative defense and is decided for each case separately. One GenAI may be transformative, while others may not, depending on how much of the original training data they throw back at you.
> we are declining to distribute your game since it’s unclear if the underlying AI tech used to create the assets has sufficient rights to the training data.
It's not AI-generated graphics per se. Instead, it's AI-generated graphics where the rights to the training data cannot be established. I think that's an important distinction.
Even Adobe's system has questionable training data mixed in.
Governments and companies everywhere trying to lock out small time people today before they get too much traction with AI generated content. They know indie devs will never be able to prove their model is only trained on their art. Only massive companies with billions of dollars can do that right now.
Every big company is trying to create rules to ban AI but keep a small enough loophole that they can use it when the time is right.
This seems like a completely fair response from Valve. On top of that, they gave them notice and an opportunity to remove the offending content (with that content explicitly called out) and offered to refund if that was not a viable option.
If this was an iOS/Android app, they would have just been told to pound sand and swallow the dev fee. Good on Valve for not lapsing communication here.
> we reviewed [Game Name Here] and took our time to better understand the AI tech used to create it.
And offered a refund on the $100 app submission credit:
> App credits are usually non-refundable, but we’d like to make an exception here and offer you a refund. Please confirm and we’ll proceed.
Seems incredibly reasonable.
Not that it bothers me, but I feel oddly validated that this appears to be the path taken. It makes sense, even from just a 'we can't review it all' perspective.
You have to have rights to do AI things with the content of your datasets. No more "download the whole internet" or "create image generation models from the scraped contents of a stock image provider".
I think it's going to turn into a new class of copyright permissions.
Along the lines of
> thou shalt not make a machine in the likeness of a human mind
More like
> License is hereby given for the consumption of these contents by human minds
You can assert you own or have the rights to those images, based on your license with Adobe.
"Yes, I intentionally designed the static image of this man to have 5 and a half fingers on one hand with a distorted logo on their t-shirt, please allow this game, Valve."
How can you prove that something is AI generated? Would creating graphics with Adobe Photoshop's AI fill tool count as AI-generated content to Valve, or is Adobe's AI dataset using copyright-free graphics?
I wonder if this is Valve trying to also somewhat cater/attract artists on the platform, as I'm sure artists are against using AI under the guise it'd "steal their jobs/hamper creativity".
Just because generative AI has now made a significant leap doesn't mean it's anything new. And copyright is irrelevant because models are clearly derivative works, in the same way artists remix existing works of art; if that were to change, copyright law would destroy the majority of all creative endeavors.
These are not the same things. Procedural generation is not the same as feeding different prompts to a model until it vomits up something that looks sort-of like what you want.
now that's how you know when a comments section is gonna be amazing
Musicians are still being screwed over because engineers wanted to change how music is distributed. The goals are noble enough, just as with the music, but large corporations inserted themselves in the middle to capitalize on the work of the artists. I can't fault artists for teaming up with lawyers again in an attempt to be paid for their work. It didn't really work out for the music industry, but hey, what can they do?
As engineers, we clearly aren't on the side of the artists; we help the companies in the middle, not the artists. When developers created ChatGPT or Stable Diffusion, did any of them insist on building in license tracking, to ensure that only work in the public domain or under appropriate licenses was used, or at least tracked?
We're once again trying to build a new industry, but we don't care how that might affect others. It's dumb, it's not like there wasn't enough publicly available material, it's just that it's cheaper to ignore licensing.
Clearly that would clarify the legal situation, and it’d be a lot harder to ban it. But would that help artists? If so, how?
"Valve is not willing to publish games with AI generated content anymore"
aka "copyright doesn't apply":
https://cacm.acm.org/news/273479-japan-goes-all-in-copyright...
And from 23 days ago.
AND misleading clickbait title.
What's the difference?
A) Human creates artwork in the style of [insert artist here]
B) Computer creates artwork in the style of [insert artist here]
Both "trained" against existing copyrighted works except one is human. Is this to "save jobs"?
This gives social networks an edge, which often have EULAs that allow the business to use uploaded content _at least_ internally, if not commercially.
_And_, in the short term, there's an opportunity for someone to pay armies of artists to create _decent renditions_ of existing styles and known works. It's not a copyright violation if a human being mimics another human being in creating a new, original work.
Steam's objection is to other parties' copyrighted material appearing, even indirectly, in the AI training dataset; they want it removed, not concealed better.
Tricky copyright questions aside, inability to follow basic instructions is definitely a disadvantage when going through approval processes
1. Copyright Office made it clear that AI generated output is generally not copyrightable irrespective of training data: https://www.federalregister.gov/documents/2023/03/16/2023-05...
2. In fact, there is a credible legal theory that goes as far as to conclude that a training-dataset license cannot affect the resulting weights under US law (the EU's take on copyright makes this less clear)
3. The DMCA already provides Valve with legal cover in the unlikely event that a training-dataset license is somehow found to affect the IP rights of generated content.
4. By adopting this policy, they are acting more as a traditional publisher than as a platform, thus exposing themselves to more liabilities, not fewer.
This policy makes no sense to me no matter how I look at it.
The harsh reality that Valve and everyone else needs to accept is that AI-generated content from "unclear" datasets is here to stay. People need to accept this fact^1 and move on. I already have.
1: Copyright is an incredibly limited, obsolete, broken invention that was never meant to handle concepts like this. It's very much like a poor analogy that we stubbornly insist on applying to situations where it simply does not fit. We will continue to find ourselves arriving at bizarre and nonsensical conclusions like this as long as we continue holding onto this broken invention. Reward authors in another way. Placing arbitrary limitations on information was never the right way to do it.
No one is ever going to accuse DLSS of creating new artworks containing some other legal entity's existing IP, for example; it's literally just a (very clever) upscaling of the original art. If it did, it would presumably render the game being upscaled almost unplayable, as it would be changing the output to a state unrecognizable from the input frame.
I totally get why Valve is taking the stance that they are. I imagine it's hard even for them to know where to draw the line (evidenced by how long the turnaround time was on the support response).
Well first we need to know if using images for AI training can be considered fair use.
Steam says "we don't allow AI content".
Someone shoves AI content on the platform anyways.
If it can be proved they violated the TOS, they then have the ability to nuke their game and stop someone from suing them. If they can't prove it, well they can't prove it and the game stays.
To do otherwise opens up the door to Steam having to "vet" all the AI content. So yes, AI content will slip through (in massive droves), but it will be in the indie scene.
The biggest impact here is going to be on AAA devs, who can't just neglect to mention they used AI at some point. This is actually the first thing that could "kill" Steam or give Epic a competitive advantage. There's zero doubt that companies like EA/Activision/whatever want to jump all over AI to make yearly releases like FIFA even cheaper, and if Epic is willing to say "come on over," we might see Epic exclusivity for that reason alone, rather than the current "here's a pile of money to make up for all the sales you'll miss out on when no one remembers your game released".
Valve has an official position that they don't allow AI content (apparently). When the lawsuits show up, they can say that they don't serve any AI content as official policy. When someone points out the AI content that they do serve, they pull out their expert witness who testifies that their AI detection method is as good as possible and they couldn't have been expected to do any better. Meanwhile, they're more than happy to remove anything explicitly flagged that falls through the cracks.
Finally, I suspect that anyone who can prove that they're able and willing to indemnify Valve against lawsuits for AI content that their game contains will be allowed to have AI content.
Yes, people will work around it, and some will slip through the cracks. That doesn't mean it's a useless policy with no impact.
Why do you think Valve is just trying to slow progress? Don't they win when people on their store win?
It seems more likely to me that this is CYA against the major lawsuits that are happening right now from copyright holders.
Until there's a killer/must-have game built with AI content, I don't think this is going to have much of a noticeable impact.
I'll watch, but I disbelieve the reddit poster. Probably a CEO bot drumming up obvious bait comments over current computer events.