What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, and active monitoring, and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely oblivious to the turmoil going on around them. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic, rule-bound task of driving.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go on, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.
Unemployment is still near all-time lows, and this will persist for some time, because we have a structural demographic problem: massive numbers of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn't mean it's politically viable.
Maybe the tech will at some point be good enough. At the current rate of improvement this will still take decades at least. Which is sad, because I personally hoped that my kids would never have to get a driver's license.
Tesla, which got fired as a customer by Mobileye for abusing their L2 tech, is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
People are usually obedient because they have something in life and they are very busy with work, so they don't have the time or headspace to really care about politics. When large numbers of people suddenly start to care more about politics, it leads to organizing and all kinds of political change.
What I mean is that it wouldn't be the current political class pushing things like UBI. At the same time, it seems that some of the current elites are preparing for this and want to get rid of elections altogether to keep the status quo.
Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.
Not saying that day will come, but if it did...
If AI makes it much easier to produce goods, it lowers the cost of goods relative to money, making it easier to pay some money to everyone in exchange for not breaking the law.
It is also interesting that you did not mention food, clothing, and super-computers-in-pockets. While government is involved in everything, it is less involved in those markets than in housing, healthcare, and education, particularly in mandates as to what to do. Government has created the problem of scarcity in housing, healthcare, and education. Do you really think the current leadership of the US should control everyone's housing, healthcare, and education? The idea of a UBI is that it strips the politicians of that fine-grained control. There is still control that can be leveraged, but it comes down to a single item of focus. It could very well be disastrous, but it need not be, whereas the more complex the system you give politicians control over, the more likely it is to be disastrous.
The costs of what you propose are enormous. No legislation can change that fact.
There ain’t no such thing as a free lunch.
Who’s going to pay for it? Someone who is not paying for it today.
How do you intend to get them to consent to that?
Or do you think that the needs of the many should outweigh the consent of millions of people?
The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the "richest man in the world", though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there's the next year to think about. Maybe you can do Buffett and Gates; what about year three?
That’s just for the US military, at present day spending levels.
What you're describing is at least an order of magnitude more expensive than that, just in one country that holds only about 4% of the world's people. To extend it to all human beings, you're talking about roughly another factor of 25.
There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan is 500-1000x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA - if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.
Even if you could reduce cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big and wealthy private citizens (ie billionaires) aren’t that numerous or rich.
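The arithmetic above can be sanity-checked with a quick back-of-envelope calculation. All the figures below are rough public estimates chosen to match the numbers in the comment (~$850B/year US military budget, ~$400B Musk net worth, ~335M US population, ~8.1B world population), not exact data:

```python
# Back-of-envelope check of the spending argument (all inputs are
# rough, illustrative estimates, not authoritative figures).
us_military_budget = 850e9   # ~$850B/year US military spending
musk_net_worth = 400e9       # rough net-worth estimate

# How many months of military spending does one fortune cover?
months_covered = musk_net_worth / (us_military_budget / 12)
print(f"Musk's net worth covers ~{months_covered:.1f} months of US military spending")

# The proposed program: ~$1000/person/month for housing, healthcare,
# and education, across the US population.
us_population = 335e6
program_cost = us_population * 1000 * 12
print(f"US program cost: ~${program_cost / 1e12:.1f} trillion/year")

# Extending linearly to the world population.
world_population = 8.1e9
world_cost = world_population * 1000 * 12
print(f"World program cost: ~${world_cost / 1e12:.0f} trillion/year")
```

Under these assumptions the fortune covers roughly half a year of military spending, the US program lands around $4 trillion/year, and the worldwide version lands near $100 trillion/year, consistent with the ranges quoted above.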
There is a reason we all pay for our own food and housing.
You just shift the emissions from your location to the location that you buy products from.
Basically what happened in Germany: more expensive "clean" energy means their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.
We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar and seems to drive more aggressively. The Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
What corporation will agree to pay dollars for members of society who are essentially "unproductive"? What will happen to the value of UBI over time, in this context, when the strongest lobby will be the companies that control the means of producing AI? And, more fundamentally, how can humans negotiate for themselves once they lose the ability to build things?
I'm not opposing technological progress; I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.
Is there a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet? Why aren't minimum tax thresholds going up if UBI could be right around the corner?
So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.
Over time, as more things get automated, you have more people deriving most of their income from UBI, but the remaining people will increasingly be the ones who own the automation and profit from it, so you can keep increasing the tax burden on them as well.
The endpoint is when automation is generating all the wealth in the economy or nearly so, so nobody is working, and UBI simply redistributes the generated wealth from the nominal owners of automation to everyone else. This fiction can be maintained for as long as society entertains silly outdated notions about property rights in a post-scarcity society, but I doubt that would remain the case for long once you have true post-scarcity.
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).
Having had the experience of living under a communist regime prior to 1989, I have zero trust in the state providing support while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands, like my grandparents did.
I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.
I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.
Meanwhile, there are many many people that are very invested in maintaining massive differentials between the richest and the poorest that will be working against even the most modest changes.
Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.
Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.
Who knows, maybe it'll be different this time.
It's irrelevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every Uber/Lyft driver, probably every taxi driver, and they'll likely replace every DoorDash/Grubhub driver with vehicles designed to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.
Once they're let on the freeways their usage will expand even faster.
> It's irrelevant that they've had a few issues.
The last Waymo I saw (a couple weeks ago) was stuck trying to make a right turn onto Market St. It conveniently blocked the pedestrian crosswalk for a few light cycles before I went around it. The time before that, one got befuddled by a delivery truck and ended up blocking both lanes of 14th Street. Before Cruise imploded they were way worse. I can't say these self-driving cars have improved much since I moved out of the city a few years back.

There is a big category of tasks that isn't that, but that is economically significant. Those are a much better fit for AI.
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than Homo sapiens that could carry on an interesting conversation with Homo sapiens? 40,000 years ago?
And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.
The ceiling for current AI, while not provably known, can reasonably be upper-bounded by aggregate human ability, since these methods are limited to patterns in the training data. The big surprise was how many sophisticated patterns were hiding in the training data (human-written text). The current wave of AI progress is fueled by training data and compute in "equal parts". Since compute is cheaper, companies have invested in more compute, but they have fallen short of scaling expectations because the training data has remained similarly sized.
Reaching super-intelligence through training data is paradoxical, because if it were in the training data it wouldn't be super-human. The other option is breaking out of the training-data enclosure by relying on other methods. That may sound exciting, but there's no major progress I'm aware of that points in that direction. It's a little like being back at square one, before this hype cycle started. The smartest people seem focused on transformers, either because companies are giving them boatloads of money or because academia is pushing them out of FOMO.
I think you're confusing your cherry-picked comparison with reality.
LLMs are eliminating the need to keep a vast array of positions on payrolls. From copywriters to customer support, and even creative work such as illustration and authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.
> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)
Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's a matter of your junior SDE, armed with an LLM, being quite able to clear your bug backlog in a few days while improving test-coverage metrics and refactoring code back from legacy status.
If a junior SDE can suddenly handle a workload that previously required a couple of mid-level and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle the workload?
That's where the jobs will vanish. Even if demand remains, it has dropped considerably, enough that it no longer justifies retaining so many people on a company's payroll.
And what are you going to do then? Drive an Uber?
I'd love a source for these claims. Many companies claim they are able to lay off folks because of AI, but in fact AI is just a scapegoat for the reckless overhiring driven by free money in the market over the last 5-10 years, now that investors are demanding a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning out the excess headcount.
People have ideas. There are substantially more ideas than people who can implement them. As with most technology, the reasonable expectation is that people will simply want more done by now-tool-powered humans, not less.
Have you been living under a rock?
You can start getting up to speed with how Amazon's CEO has already laid out the company's plan.
https://www.thecooldown.com/green-business/amazon-generative...
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralist scapegoat, one you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomical productivity gains have no impact on demand.
When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.
I don't think you understood the point I made.
My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.
My point was that these productivity gains aren't a function of experience or seniority, but they do devalue the importance of seniority for performing specific tasks. Just crack open an LLM, feed in a few prompts, and you're done. Hell, junior developers no longer need to reach out to seniors to ask questions about any topic. Think about that for a second.
Even if driverless cars killed more people than human drivers, they would still see mass adoption eventually. As it stands, they are subject to far higher scrutiny than human drivers and even so make fewer mistakes, avoid accidents more frequently, and can't get drunk, tired, angry, or distracted.
Assigning liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there's a list of who's responsible for what; I'm not certain whether it's possible to assume legal responsibility without being the vendor).
Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.
https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)
I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:
* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)
* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.
* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)
* It must function in a wide range of environments: there is no "standard" environment
If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:
* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.
* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.
* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.
* Operating environments are more standardized. All these jobs operate indoors with decent lighting.
It’s strange to me watching the collective meltdown over AI/jobs when AI doesn’t do jobs, it does tasks.
All of this is very common for human driven cars too.
I get what you are saying, but humans need 16 years of training to begin driving. I wouldn’t call that not a lot.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Among all the trillion-, billion-, and million-dollar companies, literally the only open-source self-driving platform is comma.ai, founded by Geohot. That's it. It's actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
Corporations generally follow a narrow, somewhat predictable pattern toward some local maximum of their own value extraction. Since the world is not zero-sum, this produces value for others too.
Where politics (should) enter the picture is where we can somehow see a more global maximum (for all citizens) and try to drive toward it through political, hopefully democratic, means (laws, standards, education, investment, infrastructure, etc.).
- destroy voting population's jobs
- put power in the hands of 1-2 tech companies
- clog streets with more cars rather than build trams, trains, maglevs, you name it