I don't understand why Stability gets so little support from the community. They released the first usable open-source models, and their models are the foundation of the most interesting AI-based workflows out there - VC funded or otherwise.
That's a feature of open-source development, not a bug. But it's a reason (along with the general financial issues which are the company's fault alone) why Stability is switching to a "need a membership to use commercially" business model, and IMO it won't work.
That said, I think it's important to acknowledge how much Stability has shared in its research. Just the other day they were on HN for Stable Video 3D, not to mention hourglass diffusion and other Stable* models. They may not be the overwhelming SOTA, but it's real open-source AI work that pushes the frontiers. You have to give them credit for that.
Which means there is nothing with even remotely the same fine-tuning ecosystem.
And for that, Stability is way ahead of the competition.
The community is entrenched in 1.5 because that's what everyone is now familiar with, IMO.
> It’s a dramatic exodus that comes less than 18 months after Stability’s 2022 fundraise that valued the company at $1 billion. Now, the company is facing a cash crunch, with spending on wages and compute power far outstripping revenue, according to documents seen by Forbes. Bloomberg earlier reported that the company was spending $8 million a month. In November 2023, CEO Emad Mostaque tweeted that the company had generated $1.2 million in revenue in August, and would make $3 million in November. The tweet was later deleted.
This sounds a lot like the two-year period leading up to the dot com crash. Insane valuations and no revenue model. Meanwhile those insanely-valued companies bought Sun Microsystems servers like there was no tomorrow. When the games ended, a lot of those insanely-valued companies went to zero and left a massive overhang of Sun hardware in their wake. Sun began its long nosedive not long after that.
The likely ending for Stability is that they get swallowed up in an acquisition payday. We have recent examples of this.
For now, they’re the most open area in a garden of closing gates. And I wish them all the best.
They feel that they are entitled to be dishonest while writing articles about how CEOs are dishonest.
It takes $500 to be a “featured contributor” and they will post whatever nonsense you like. The brand is diluted.
Articles can be bought outright.
For example - https://news.ycombinator.com/from?site=forbes.com/sites/iain...
Linking to archive (which incidentally gets into loops for me with the captcha) means that I can't see the original article either (or use other methods to try to find reproductions).
Meanwhile, even though I hit the "please get a subscription" for Forbes, I can click the reader mode and read the page in its entirety.
I think “flounder” is fine in this context.
E: -4 in 10 minutes. Stay classy hackernews. I hope this company and OpenAI choke on the algorithmic disgorgement when the law catches up.
While some of this is people practicing their artwork, and I don't see any reason we should care what artwork someone practices on, this is also the general trend for artwork being sold. Go to any convention where artists sell work and look at how much artwork is sold of characters the artists do not have a license to. While one can take a philosophical stance against the current IP laws that outlaw this, such a stance would make it quite hard to oppose the use of content in training an AI.
In short, if those making the AI stole IP to train the AI, it was stolen from a community that was fine with IP theft that benefitted them. And if the claim is that it wasn't IP theft because the law was generally tolerating it (as long as no one became so much a target they received a C&D), then unless there are some lawsuits won against the AI it would be equally allowed.
(And of course individuals will have their own philosophical stances which might be much more consistent, I'm speaking of the generalized view I have developed from overall interactions with parts of the community and as such it is not meant to be strongly prescriptive to any specific member of the community).
Download a movie and you can get sued or your Internet connection terminated, but pirate the entire collective output of humanity and sell it back to us from behind a paywall and that's fine.
I have more sympathy for Stability here because at least they opened the models. IMHO models trained on not-properly-licensed (pirated) data should at the very least not be copyrightable and should be public domain. (These piracy enterprises are aware of this as a possible legal outcome in some jurisdictions, so the whole AI safety bullshit performance is an attempt to scare people about open models to head off the potential of questionably-trained models being declared uncopyrightable and forced to be released.)
My understanding is that ML model weights cannot be copyrighted as an original creative work. They are trade-secrets and protected through contracts but once leaked to third parties it’s not a copyright violation to use/distribute.
Whether the model is actually a derivative work of the training data is another interesting question.
Or is my theory off here?
That's the sticking point. If it's an open tool for humanity's benefit, created and given back to us, that's one thing... but to sell it back to us...
With that said, piracy is close to what's happening... but I think we should be careful classifying where and what exactly the problem is. The reason I think that matters is that it may lie at the end of a slippery slope, or it may be straight ahead of us... the future is hard to know. If we classify it poorly, we may unintentionally cause human (post-/trans-human) rights issues {if I upload my consciousness to a digital mind, I don't want archaic laws to dominate what I can see/compute based on the material of which I'm made}.
ARRRRR..
This is still a grey area for me. It's a neural network. It works similarly to how our brains work, but more consistently. It doesn't seem like piracy to me. If an artist was really into Salvador Dalí and happened to imitate his surrealist style, it would not be considered piracy. In fact, this is how art has evolved over the centuries. Each relevant artist in the past has incrementally contributed to what we call art today.
I feel like the people unwilling to accept that AI may impact their career are more worried about putting food on the table than anything else, which is very understandable, but it's just the cost of progress.
The bigger problem we need to deal with is how to retrain and provide job placement for those who are affected by disruptive technologies. We've really failed the public on this in the past, and I don't think it's worth nerfing emerging tech just to keep people employed. This is not the first or last time this has happened, and it's going to become more frequent as technology advances.
Forget about AI. This describes almost the entire art industry, wholesale!
The semi-professional online art commissioning market is almost entirely copyright infringing fan art works, being sold without permission of IP owner.
Yes, fan art is infringing. Especially when it is sold. And if you go to a convention center, to the artists section, you will see that over half of the booths are straight up selling other people's IP without permission.
This is the case for conventions, online art commissions, etsy/handmade items, all of it.
It's all illegal, all infringing, and the only reason anyone cares now is that someone else can do the same thing that others have been doing for decades, but quicker and cheaper.
Could it ever be the case, I wonder, that we could trust/enforce/believe that a model had so abstracted what it learned from its training inputs that it was not a derivative work of them?
I've seen the examples where the model is able to reproduce recognizable characters from popular media. Those look like they might be "just" overfitting? I can see that as desirable from the point of view of being able to create a picture of "Robocop shopping for diapers". But maybe we could compromise and converge to a point where AI art isn't quite so demonized and is instead seen as a useful tool.
Just like all art. When you draw something you don't cite every single thing you've seen and experienced in life that inspired your drawing and style. Nor did you own or pay royalties to all that inspiration either.
I think uncopyrightable is a likely outcome, but where are you coming up with forced to be released?
IMO, if a model is deemed to be such, all copies of that model should be destroyed. Actual copyright law allows for the destruction of equipment used for copyright infringement, and those laws were written in the days where this meant "a printing press".
> the whole AI safety bullshit performance
The people who care about AI safety have been loudly warning about it for so much longer than these companies and models have existed, that they roll their eyes at newspapers using stock photos from Terminator to illustrate the discussion.
> The entire AI industry
Also includes self-driving cars, spam filters, medical diagnosis tools, …
If the "art community" can't understand what an insane gift SD1.5 and SDXL was to them then I don't know what to tell them.
Without those open models we could have easily ended up in a world where this tech existed but was only in the hands of people who could pay OpenAI or Adobe a monthly fee to use it. And given the power of it, what would that cost have been? To grant such an advantage, the monthly cost could easily have been in the hundreds, like high-end CAD/3D/VFX software, viable only for huge studios and leaving normal people in the dirt.
Emad's decisions mean for the rest of eternity a tool that could have ended up entirely locked behind an Adobe paywall can now be run on any machine you owned and tweaked entirely on your own hardware to work in a way specifically beneficial to your workflow.
I'm an artist and designer too. The fear of how fast these tools can replicate styles and take jobs becomes a lot less scary when I can take advantage of them myself, or enhance my workflow with them, without paying a subscription tax to do so. But if the "art community" can't understand or imagine how bad this situation could have been, then I don't know what to tell them; some people just like being screwed over, I guess...
Have you tried to train SD on your artwork? Pretty curious about the results an artist can achieve when embracing this tech.
1) Brand destruction: when SD was new, lots of people put "Greg Rutkowski, trending on artstation" in their prompts in order to get better images. It's possible that Greg Rutkowski being the single most popular example of this means he personally lucked out on this (some reporting suggests so), and the exposure really did boost his career. Do you think everyone else this has happened to was so lucky?
If I image search for "Greg Rutkowski", I see some cool things yes, but I also see this: https://creator.nightcafe.studio/creation/gt4Z0uOIrrmop13OoU...
I suspect that many others have suffered from this association.
2) Substitution: the exact opposite problem.
Now that the image generators are pretty good, why should anyone hire an artist?
This image was generated in 267 milliseconds, for free: https://github.com/BenWheatley/AI-art/commit/d4e0322a30ab508...
That image is not perfect, but it's good enough for people like me, and that by itself is an economic risk to the future employability of that entire segment of the economy.
This really is important and does matter because all the talking heads were all busy confidently saying creative jobs like "artist" and "writer" were safe, and that it was truck drivers and factory workers who needed to re-skill, and thus we as a society have done basically nothing to prepare for or mitigate this economic disruption.
--
I don't know what's coming, not for me, not for anyone.
But I get why they feel scared, and I get why they feel this has taken something from them, even though the specific arguments about copyright and "parroting" that make it into public discussion (Gell-Mann amnesia warning) are often also deeply flawed and unconvincing.
To me, this is a trademark issue in the first case, not a copyright one; and in the second, the same disregard for workers that led to the creation of the actual literal Communist Manifesto.
Boo hoo. This is not the first nor the last democratization of art. First, people stopped starving, so many more could afford to become artists; then printing presses could mass-copy art, worldwide shipping lanes spread styles, computer aids and then Photoshop arrived, and now AI. It has always "been damaging to the art community".
Now… to steelman the argument, it’s never been lower skill or easier to create your own modification or idea and get it in the style of some artist. In my opinion the low barrier to entry is obviously going to seem unfair - but - this is just going to make physical art more valuable.
If I were a sad-about-ai artist, I would jump in and see how new tools could improve my game.