That belief is wrong. Today's robots can't be made useful in everyday life no matter how advanced the software. The hardware is too inflexible, too unreliable, too fragile, too rigid, too heavy, too dangerous, too expensive, too slow.
In the past the software and hardware were equally bad, but today machine learning is advancing like crazy, while the hardware is improving at a snail's pace in comparison. Solving robotics is now a hardware problem, not a software problem. When the hardware is ready, the software will be comparatively easy to develop. Without the right hardware, you can't develop the appropriate software.
OpenAI is right to ignore robotics for now. It's a job for companies with a hardware focus, for at least the next decade.
"When the hardware is ready, the software will be comparatively easy to develop." I take it you've never written any software for a robot? The long tail of the real world takes years and years to handle. Probably the most advanced robotics company, at the cutting edge of ML+robotics, is Covariant, and their entire business model rests on an understanding that the long tail can and should be handled by humans.
I agree that OpenAI is right to cut out the hardware, but all your reasoning about why is wrong.
The reason, which they state, is that data collection on physical devices is slow, modification of those devices is slow, and maintenance on those devices is expensive. You want to simulate everything, not because simulation reproduces the real world in high fidelity (that doesn't matter), but because it gives you approximations with enough variety and complexity to continually challenge your AI, and you can do all of that at a million frames per second.
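As a toy illustration of why cheap simulated frames matter: RL setups get to enormous frame rates by stepping many environment copies in lockstep, so one `step()` call advances thousands of rollouts. Everything below (the class, the 1-D "physics") is made up for illustration, not any real simulator's API.

```python
import numpy as np

class VectorizedPointEnv:
    """N copies of a toy 1-D point-mass environment, stepped in lockstep."""

    def __init__(self, n_envs: int, dt: float = 0.05, seed: int = 0):
        self.n_envs = n_envs
        self.dt = dt
        rng = np.random.default_rng(seed)
        self.pos = rng.uniform(-1.0, 1.0, size=n_envs)  # random starts
        self.vel = np.zeros(n_envs)

    def step(self, actions: np.ndarray) -> np.ndarray:
        # actions: one scalar force per environment copy
        self.vel += actions * self.dt
        self.pos += self.vel * self.dt
        # reward: negative distance from the origin
        return -np.abs(self.pos)

envs = VectorizedPointEnv(n_envs=4096)
rewards = envs.step(np.zeros(4096))  # one call = 4096 simulated frames
```

Swap the toy physics for a real simulator and the same structure is how you buy frames by the million rather than by the (expensive, breakable) physical trial.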
Not for everyday tasks with anywhere near the efficiency, reliability, speed, and cost that humans have without robots. You can't waldo any robot to do the laundry or the cooking in a normal home anywhere near as well as a human can do it. (I'd love to see you try!)
Sure, you can make a robot that can do work humans can't. You can make a robot stronger, or more precise, or better suited to repetitive motion than a human. Those attributes are useful in specialized tasks. But generally not for the everyday tasks humans do today that we want robots to help us with. For everyday tasks you need a robot that is comparable in speed, efficiency, weight, reliability, durability, flexibility, sensor capability, and cost to a human. Not one of those areas, but all of them simultaneously. That's the hard part.
The world will be ready for a robotics revolution, creating immense demand for robotics software, when you can get a decent arm for the price of a fridge, not the price of a fancy car; just as the computer revolution arrived not when we developed capable computers, but only after we developed affordable capable computers.
Humans are excellent at handling the long tail when they're already handling the rest. Take driving. We're already seeing cars with substantial cognitive assistance taking a more and more active role in 'easy' tasks; think Tesla's Autopilot. You're supposed to be there and take over when the machine fails to handle the long tail, or when it decides to hand you responsibility for whatever happens next (because you trained it to do so).
Driving is a very complex task: you need training, experience, anticipation and (very important) context. There's no easy way to assemble all the details needed for a decision in a human brain within the time available to make that decision correctly. It's a similar problem in industrial automation, where you call in the 'long tail' person once in a while, and that person probably lacks the expertise to reconstruct the context after three rounds of turnover at your provider.
I think we're approaching this problem the wrong way: aiming for the lowest-hanging fruit, then higher and higher, while handwaving away the long tail and throwing it over the fence to the human. We should be putting the human at the center of this and extending their capabilities, reducing the repetitiveness; helping, not taking over.
The paper I like a lot on this is 'automation should be like Iron Man, not like Ultron'.
The average human spends most of their time barely engaged; our brains and bodies operate far below what we're capable of. The romanticised sci-fi vision of a world filled with intelligent robots performing every menial task for humans builds on the idea that humans have better things to do, but do we? We already have enough knowledge and resources to end world hunger, to bring a high standard of living to every human, but we choose not to: our problem is social, not software or hardware.
As an aside, I'd dispute the claim that hardware is lagging behind software: Tesla has lots of money and lots of smart people and they haven't been able to deliver self driving cars after more than a decade of promises (because of software).
You're absolutely wrong. Anyone with basic electronics knowledge and a few hundred bucks can build a passable robot body out of hobby grade servos and 3D printed parts. If you're willing to spend $10k+ you can make something quite capable.
Programming it to then actually do anything, let alone anything useful in the real world, is still out of reach for all but a tiny fraction of companies.
Hardware still has a long way to go before it's as capable as biological systems but it's usable. Real world AI is far from that in most areas.
And that will not even be taking into account the time-to-maintenance of such a system.
On the other hand, Boston Dynamics' manifold, where they control dozens of hydraulic parts, is an absolute marvel of technology that shows what you can achieve with 45 (?) years of dedicated focus.
You might be able to teleoperate their robot for something useful in a human environment, and I guess that would be a gamechanger. But even there I want to wait-and-see if they can escape the fate of many that came before them.
If it worked that way, my job would be much easier.
Later you will have a Spot robot chasing a person and getting them to stop, surrendering to the machine without being threatened by it, just by recognizing that there's no longer any point in running away.
Also, regular ML researchers sit at tables with laptops. Robotics people need electronics labs and electronics technicians, machine shops and machinists, test tracks and test track staff...
If you have to build stuff, and you're not in a place that builds stuff on a regular basis, it takes way too long to get stuff built.
I wonder why they don't invest in building up a robotics competency. The potential return seems enormous, though their choices might signify that they don't agree.
Or maybe they just aren't willing to leave their comfort zone. 'Software will eat the world' is a convenient idea for people who want to stay in that comfort zone.
Reinforcement learning can work quite well if you produce the hardware yourself, so that your simulation model perfectly matches the real-world deployment system. On the other hand, training purely on virtual data has never really worked for us, because the real world is always messier and dirtier than even your most realistic CGI simulations. And nobody wants an AI that cannot deal with everyday stuff like fog, water, shiny floors, rain, and dust.
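One common response to that sim-vs-real gap is domain randomization: rather than chasing one perfect simulation, you resample the messy nuisance parameters every episode so the policy can't overfit to a single clean world. A minimal sketch; the parameter names and ranges below are purely illustrative assumptions, not anyone's real training config.

```python
import random

def sample_sim_params(rng: random.Random) -> dict:
    """Draw fresh 'messiness' parameters for one simulated episode."""
    return {
        "floor_friction": rng.uniform(0.2, 1.2),     # shiny vs rough floors
        "payload_mass_kg": rng.uniform(0.5, 3.0),    # unknown object weight
        "camera_noise_std": rng.uniform(0.0, 0.05),  # crude fog/dust proxy
        "latency_ms": rng.uniform(0.0, 40.0),        # control-loop delay
    }

rng = random.Random(42)
# Each episode gets its own perturbed world; a policy that survives
# all of them has a better chance of surviving the real one.
episodes = [sample_sim_params(rng) for _ in range(3)]
```

This doesn't make the simulator realistic; it makes the policy stop relying on the simulator being realistic, which is the point of the "variety and complexity" argument above.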
In my opinion, most recent AI breakthroughs have come from restating the problem in a way that lets you brute-force it with ever-increasing compute power and ever-larger data sets. "End-to-end trainable" is the magic phrase here. That means the keys to the future are in better data set creation. And the cheapest way to collect lots of data about how the world works is to send out a robot and let it play, just as kids learn.
Given that, unless they want to commercialise fruit picking or warehouse robots, it seems sensible.
One of the reasons ML-based AI is pretty dumb still is possibly that this autonomous exploration side of AI is largely ignored.
It all seems to tie back into what Judea Pearl talks about in "The Book of Why" (how you can't model intelligence without modelling the learning of causal inference) or what Jeff Hawkins explores with his "reference frames of reference frames of the world" Thousand Brains theory.
How successful do you think attempts to monetize this will be? Apart from Kiva at Amazon, I'm not even sure most shelf-moving robots are profitable enterprises (GreyOrange, Berkshire Grey, etcetera). I'm very skeptical of more general-purpose warehouse robots such as you see from Covariant, Fetch, etcetera. I don't really know too much about fruit picking other than grokking how hard it would be and how little it would pay.
To be clear, I'm not saying these companies make no money or have no customers. But it's not clear to me that any of them are profitable or likely will be soon, and robots are very expensive. I'm happy to learn why I'm wrong and these companies/technologies are further ahead than I realize.
I think OpenAI has progressively narrowed down its core competency - for a company like 3M it would be something like "applying coatings to substrates", and for OpenAI it's more like "applying transformers to different domains".
It seems like most of their high-impact stuff is basically a big transformer: GPT-x, Copilot, Image GPT, DALL-E, CLIP, Jukebox, MuseNet.
Their RL and GAN/diffusion work bucks the trend, but I'm sure we'll see transformers show up in those domains as well.
https://arxiv.org/abs/2102.02202
Not to mention a bunch of relatively inexpensive reinforcement learning research relying on consumer knockoffs of Spot from Boston Dynamics...
Really does seem like they are following the money and while there's nothing wrong with that it's also nothing like their original mission.
The VC community is in denial about how much Go resembled a problem purpose built to be solved by deep neural networks.
I genuinely believe that how we as a society act once human labour is replaced is the first aspect of the great filter.
But so many of the little problems have been solved. Batteries are much better. Radio data links are totally solved. Cameras are small and cheap. 3-phase brushless motors are small and somewhat cheap. Power electronics for 3-phase brushless motors is cheap. 3D printing for making parts is cheap.
I used to work on this stuff in the 1990s. All those things were problems back then. Way too much time spent on low-level mechanics.
You can now get a good legged dog-type robot for US$12K, and a good robot arm for US$4K. This is progress.
I'd just note that "decades away" means "an unforeseeable number of true advances away" - which could mean ten years or could mean centuries.
And private companies can't throw money indefinitely at problems others have been trying to solve and failing at. They can do it once in a while, but that's it.
We have been at this since at least the dawn of the industrial revolution and do not have it right yet. Backing off and taking it slow now to let some cultural adjustments happen is a responsible step.
My cultural norms are repulsed by the thought of me not working as much as possible, it is how I expect my value to society to be gauged (and rewarded).
This line of reasoning will be (is) obsolete and we need another in its place globally.
I hope some may have better ideas of what these new cultural norms should look like than I with my too much traditional indoctrination.
I only know what I will not have it look like: humanity as vassals of non-corporeal entities or elites.
That hasn't stopped the march of progress so far. Conveniently (or not), humanoid robots do not appear likely for the foreseeable future. But keep worrying: the problems you list are appearing in other fashions anyway.
The ability to train huge models does not belong to a single entity, and many of these models get shared with everyone. So you can right now type "import transformers" and have thousands of trained models at your fingertips. All these toys are ours (thanks to important work done for free by some of us); we just need imagination to use them.
Humans ARE general bipedal robots. The price of these robots is determined by the minimum wage.
Robotics research is going to be extremely binary. It's expensive and frustrating, and there's little use for it until it works as well as human labor, which is a high bar.
But once that Rubicon is crossed, I believe there will be a sort of singularity in that space. It's related to, but somewhat orthogonal to, the singularity that's prognosticated for AGI.
No need for bipeds, car factories employ dumb robot arms, no humans needed. Not very general purpose robots though.
The first country/company to create robots that can be instructed the way a human is, to do any job, will indeed reap great benefits, but how long until that happens? Not within any time frame an investor wants to see. I'm unsure whether I will ever see it in my lifetime (counting on ~60 years to go still, maybe?)
EDIT: Imagine the "credit unions" I mention in the following linked comment, but holding homes and manufacturing space to be used by members. https://news.ycombinator.com/item?id=27860696
I don't work for OpenAI, but I would guess they are going to keep working on RL (e.g. hide-and-seek, Gym, DotA-style research) to push the algorithmic SoTA. But translating that into a physical robot interacting with the physical world is extremely difficult and a ways away.
Given the mention that they can shift their focus to domains with extensive data that they can build models of action with, etc., why not try the following (if it's feasible)?
---
Take all the objects in the various 3D warehouses (Thingiverse and all the other 3D-model repos out there) and build a system whereby an OpenAI 'Robotics' platform can virtually learn to manipulate and control a 3D model (SolidWorks/Blender/whatever) and learn how to operate it.
It would be amazing to have an AI robotics platform where you feed it various 3D files of real/planned/designed machines and have it understand the actual composition of the components involved, then learn their degrees-of-motion limits, servo inputs, etc., and then learn to drive the device.
Then give it various other machines that share component types, built into any multitude of devices, and have it evaluate the model for familiar gears, worm screws, servos, motors, etc., and figure out how to output the controller code to run an actual physically built device.
Let it go through thousands of 3D models of things and build a library of common code that can run those components when they're found in any design.
Then you couple that code with Copilot and allow people to have a codebase for controlling such devices based on what OpenAI has already learned.
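As a very rough sketch of the first step in that pipeline: before anything can "learn to drive" a model, it needs the model's actuated joints and motion limits. Robot-description formats like URDF already encode that in XML, so an extraction pass could look like this. The toy URDF and the function are my own illustration of the idea, not anything OpenAI has built.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written URDF standing in for a downloaded 3D model.
URDF = """
<robot name="toy_arm">
  <joint name="shoulder" type="revolute">
    <limit lower="-1.57" upper="1.57" effort="10" velocity="2"/>
  </joint>
  <joint name="elbow" type="revolute">
    <limit lower="0.0" upper="2.5" effort="5" velocity="2"/>
  </joint>
  <joint name="wrist_fixed" type="fixed"/>
</robot>
"""

def actuated_joints(urdf_xml: str) -> dict:
    """Map each non-fixed joint to its (lower, upper) motion limits."""
    root = ET.fromstring(urdf_xml)
    joints = {}
    for joint in root.iter("joint"):
        if joint.get("type") == "fixed":
            continue  # no degree of freedom to learn
        limit = joint.find("limit")
        joints[joint.get("name")] = (
            float(limit.get("lower")),
            float(limit.get("upper")),
        )
    return joints

print(actuated_joints(URDF))  # {'shoulder': (-1.57, 1.57), 'elbow': (0.0, 2.5)}
```

The hard part of the proposal, of course, is everything after this step: most Thingiverse-style meshes carry no joint semantics at all, so recognizing "this is a worm screw" from raw geometry is where the real learning problem lives.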
As Copilot is already built using a partnership with OpenAI...
Are there any Open* organizations for robotics that could perhaps fill the void here? I think robotics is really important, and the software is a big deal too, but it's important that actual physical trials of these AIs are pursued. I would think that seeing something act in real space like that offers unparalleled insight for expert observers.
I remember the first time I ever orchestrated a DB failover routine; my boss took me into the server room when it was scheduled on the testing cluster. Hearing all the machines spin up and the hard drives start humming was a powerful and visceral moment for me, and it really crystallized the importance of my job.