> They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.
They control how quickly they deploy, but I don't see how they have any control over the rest: "which industries they automate" is a function of how well the model has generalised, and its grasp of medical information, laws and case histories, and source code is still only "ok". And how are they, as a model provider in the US, supposed to cooperate (or not) with a trade union in e.g. Brandenburg whose bosses are using their services?
> Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill.
Certainly what I fear.
Any given UBI is only meaningful if it is connected to the source of economic productivity; if a government is offering it, it must control that source; if the source is AI (and robotics), that government must control the AI/robots.
If governments wait until the AI is ready, the companies will have the power to simply say "make me"; if governments step in before the AI is ready, they may find themselves out-competed by businesses in jurisdictions whose governments are less interested in intervention.
And even if a government pulls it off, how does that government remain, long-term, friendly to its own people? Even democracies do not last forever.