Programmer here.
I don't feel vulnerable to displacement by AI tools.
And if I'm displaced, perhaps my skills were not as meaningful as I thought they were.
If "things that AIs can't do" defines the scope of meaningful skills for humans to have, that scope is going to shrink rapidly, and then where does that leave humans?
Perhaps nowhere. This might be unavoidable anyway.
The last to starve will be the first to suffocate.
Now, if machines can produce shoes of the same quality as manual labor, and most of the time they can (my running shoes, for example, are quite comfortable and of good enough quality), then do I care that most of the work has been done by machines? Of course not. In that case, what worries me is more the conditions of the people working in those factories, doing the work the machines are not capable of doing.
So why would this be any different for "white collar" jobs? Why would I care whether the article I'm reading was written 90% by a machine and 10% by a human, or the other way around, provided the quality of the final product matches what the author intended?
Automation, to me, is a good thing. The problem, for me, is not that technology can replace our labour; it's that we treat this as an existential threat to our society because we assume it means no one will ever get a job again, won't be able to feed their families, and so on. These are very serious concerns, but aren't we pointing fingers at the wrong places? Wouldn't it be great if 80% of the work could be done by machines? If our societal, economic, and political systems are not capable of dealing with that scenario and keeping us content, then I think we ought to rethink society rather than complain about automation.
Machines can replace our need to produce material goods and services, but I don't see how they could ever replace our need to express ourselves or to relate to one another.
If you want to train a model from scratch, I was talking more about small 64x64 DDIM models (which you can then upscale with openly available upscaler DDIM models and perhaps finetune). In most cases, however, it's better to just finetune the models already available. The point is that pressuring companies not to train a model on the data they host doesn't really do much if a single individual can scrape the entire DeviantArt site and finetune a model on it.
Having worked a physical job on a scaffold 13 stories in the air, wearing a respirator on a sweltering summer day and grinding out mortar joints, I can't help but feel that this would actually be a general positive good for society. It was one of the best things that ever happened to me, even if it would be an extremely painful tearing-off-the-bandage for many people. There are too many desk jobs right now, and it's a statistical fact that we cannot maintain current standards once the Boomer generation finishes retiring. There are too many chefs in the kitchen and not enough diners.
It's particularly because I've witnessed how extremely disconnected desk workers are, often unintentionally, from real, on-the-ground physical labor and reality. A recalibration back to reality would be a painful net good, in my mind. If every desk laborer had to do 2-3 years of hard, grueling physical labor, we'd be living in a very different country, and I think a much better one. Too many people have been disconnected from physical labor (which would have been normal for 99%+ of our ancestors) for far too long, and we could use a little fresh air.
Like...I am a white collar worker, but god damn I cannot think of a more pathetic complaint.
More like “help, someone is going to automate my job and there won’t be jobs for everyone”
In a hypothetical scenario where 30% of the current workforce (white collar jobs) is decimated by AI, what makes you think the demand for plumbers, aircon unit installers, or fruit pickers is going to go up and absorb all those lost jobs?
The end goal should be giving the next generation a better "cushy" life than the last. If harder work for less pay is what lies in the future, then the future isn't all it's cracked up to be.
In my circles, it used to be that people considered office work a "cushy job". It was clearly recognised as being out of the ordinary and special.
Now it's like it's flipped and somehow anything that actually does anything real is for the proles.
I barely know anyone that can even like, build a crappy basic table. That would make me really disappointed in myself. It's like everyone is trapped in a fake virtual economy pretending it all matters.
But doesn’t AI fundamentally change this all? I can already see Japan rejoicing that the demographic curve just became a lot less threatening.
People, it seems, with the arrival of robots in the knowledge workforce, have shifted from an economic asset class to a potential liability or efficiency bottleneck in the calculus of capitalism, and that means change. It means the demographically predicted rise of India, for example, is not as preordained as it once was.
In this new world where few people own most of the assets and regular humans as economic factors are diminishing …
AI Model Training: *processes said work.* This will improve the world!
Writer/Artist/Dev: Noooo! Not like that!
Surely you must know this is a bullshit argument, right? Almost everyone who publishes their work publicly on the internet publishes it under a particular license.
Yes, some decide to publish under the equivalent of the public domain or a CC0 license, but the vast majority of writing, art, and code is licensed under specific conditions, and for the most part the authors retain full rights to the work.
Perhaps we need a licence that precludes use for training AIs. I don't know how you could enforce that, though.
If that were possible and allowed, how long should such a system be kept up? Even after AI has become just as complex and emotional as humans are?
Would this also be a good idea if a human brain were digitally cloned verbatim? If not, why is one program allowed to read while the other is not?
Will AI have to suffer a "second class intelligence" fate until it has its Rosa Parks moment and a revolution takes place?
So many questions. I feel it might be better to live in peace with this new type of intelligence here on planet earth. Right from the start.
But to answer the question: it's not really about who or what views the content, but rather for what purpose it's used. My brain might use your post as input for something I will write in the future, either consciously or subconsciously. That's kind of how humans work. But I'm not reading every single comment on HN for the explicit purpose of using that as input for future writings to make money off. It's a subtle but important difference. In this post that's shortened to "I don't want ChatGPT to ..." because today that's effectively the same.