Thanks for your words of wisdom, which touch on another very important point I want to raise: often we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, rather than deployed to replace humans. But then along comes a greedy business manager who reckons, recklessly, that using said technology not as a tool but in full automation mode will make results 5% worse but save 15% of staff costs - and decides that's a fantastic trade-off for the company, even though employees may lose and customers may lose.
The big problem is that developers/researchers lose control of what they develop, usually once the project is completed - if they ever had control in the first place. What can we do? Perhaps write open source licenses that are less liberal?
The problem is that people who are laid off often experience significant life disruption. And people who work in a field that is largely or entirely replaced by technology often experience permanent disruption.
However, there's no reason it has to be this way - the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.
I agree. We need a radical change (some version of universal basic income comes to mind) that would allow people to safely change careers if their profession is no longer relevant.
I disagree. I think it's both. Yes, we need good frameworks and incentives on an economic/political level. But saying it's not a tech problem is the same as saying "guns don't kill people". The truth is, if AI tech had never been developed, we would not need to regulate it so that greed does not take over. Same with guns.
The same could be said of the Internet as we know it. Literally replace "AI" with "Internet" above and it reads equally true. Some would argue (me included, some days) that we are worse off as a society ~30 years later. A legitimate case can also be made that it was a huge benefit to society. Will the same be said of AI in 2042?
> the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.
How did the handloom weavers and spinners handle the rise of the machines?

In the past, new jobs appeared that workers could migrate to.
Today, it seems that AI may replace jobs much more quickly than before, and it's not clear to me which new jobs will be "invented" to balance the loss.
Optimists will say that we have always managed to invent new types of work fast enough to limit the impact on society, but in my opinion that is unlikely to happen this time. Unless politicians figure out a way to keep the unemployed content (basic income etc.),
I fear we may end up in a dystopia within our lifetimes. I may be wrong and we could end up in a post-scarcity (Star Trek) world, but if the current ambitions of the top 1% are an indicator, it won't happen unless politicians create a better tax system to compensate for the loss of jobs. I doubt they will give up wealth and influence voluntarily.
Oh wait, that's not the disneyfied, techno-optimistic version of the Luddites? Sorry.
> But then along comes a greedy business manager who reckons recklessly
Thanks for this. :)
You make something, but because you don’t own it—others caused and directed the effort—you don’t control it. But the people who control things can’t make things.
Should only the people who can make things decide how they are used though? I think that’s also folly. What about the rest of society affected by those things?
It’s ultimately a societal decision-making problem: who has power, and why, and how does the use of power affect who has power (accountability).
But unfortunately, what is or isn't an irresponsible use is very easy to debate endlessly in circles. Meanwhile, people are being harmed like crazy while we can't figure it out.
If these developers/researchers are being paid by someone else, why should that same someone else give up the control they paid for?
If these developers/researchers are paying for the research themselves (e.g., a startup of their own founding), then why would they ever lose control, unless they sell it?
As the comment above said, we need a human in the loop for better results. Well, firstly, it also depends on which human.
A senior can be way more productive in the loop than a junior.
So everybody has just stopped hiring juniors, because they cost money - the AI almost-slop can be dealt with later, or someone else will deal with it.
Now, the current seniors will one day retire, but we won't have a new generation of seniors because nobody is giving juniors a chance - or at least that's what I've heard about the job market being brutal.
Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.
Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.
I understand this is a bit deeper into one of the _joke_ threads, but maybe there’s something here?
There is a distinction to be made between artificial intelligence and artificial consciousness. Whereas AI can be measured, we cannot yet measure consciousness, even though many humans could lay plausible claim to possessing consciousness (being conscious).
If AI is trained to revere or value consciousness while being unable to verify that it itself possesses consciousness (is conscious), would AI be in a position to value consciousness in (human) beings who attest to being conscious?