Simulated training is so cool. Related: is anyone interested in a plugin for Blender that lets you easily build physically accurate simulation environments for robots and then apply reinforcement learning to the virtual robots? I have a hodge-podge of code for doing exactly this, and I'm curious whether anyone else would be interested in it.
AFAIK there are a lot of publicly available RL algorithms out there, but not many (any) Blender-like interfaces for making physically accurate simulations.
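For what it's worth, the RL side mostly just expects a Gym-style `reset()`/`step()` interface, so the plugin's job would be to put that face on a Blender-exported scene. A minimal sketch of what that wrapper could look like (every name here, including the scene file, is hypothetical, not an existing plugin API):

```python
class BlenderSimEnv:
    """Hypothetical Gym-style wrapper around a physics scene exported
    from Blender. reset()/step() is the interface most public RL
    algorithms (PPO, SAC, ...) already expect."""

    def __init__(self, scene_file="robot_arm.blend"):  # hypothetical scene
        self.scene_file = scene_file
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return self._observe()

    def step(self, action):
        """Advance the sim one tick. A real implementation would apply
        `action` to the physics engine here."""
        self.t += 1
        obs = self._observe()
        reward = 0.0          # task-specific reward shaping goes here
        done = self.t >= 200  # fixed episode horizon
        return obs, reward, done, {}

    def _observe(self):
        return [0.0] * 6  # placeholder joint-angle readings


env = BlenderSimEnv()
obs = env.reset()
```

Any off-the-shelf RL library that speaks this interface could then train against the Blender scene without knowing anything about Blender itself.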
"Launching too early means failure, but being conservative & launching later is just as bad because regardless of forecasting, a good idea will draw overly-optimistic researchers or entrepreneurs to it like moths to a flame: all get immolated but the one with the dumb luck to kiss the flame at the perfect instant, who then wins everything, at which point everyone can see that the optimal time is past."
Robotics has been a money pit for startups and corporations for a long time. Think of the billions Toyota has spent on home robotics research, to little avail.
But at some point it won't be. Some entity will "kiss the flame" at the right moment. The wealth they create will be beyond any company ever, by an almost incomparable margin.
At the next startup after that, the boss was very excited to have competitors, because without them you are alone in trying to validate that sector. Competition means people are voting with their feet and wallets that you are, if not right, at least not wrong.
It kind of felt like I understood him on a level many of my coworkers did not.
But you're describing kind of the opposite end of the spectrum. When there are too many people, you have no control over the narrative. If you are surrounded by idiots, you get painted with the same brush.
And now that I'm thinking about it, it would even be tenuous for you to buy up your more clue-ful competitors because combining forces may improve your narrative but now it's one voice instead of two. That's a new wrinkle in the post-hype consolidation pattern that I hadn't considered before.
But those don't look like "robots", they look like arms with tools on the end of them.
The kind of humanoid servant robot from books and movies, however, is still pretty much fictional. The required capabilities are mostly really hard, even after you factor in the recent advances in ML et al.
I remember when Sony made that little humanoid robot that danced. I was like, "Big deal! I like to dance. Make a robot that does the dishes."
- - - -
To make it big with robots (robots per se, as opposed to just building an automated factory, or toys) you have to find the economic niches.
> I was like, "Big deal! I like to dance. Make a robot that does the dishes."
From these comments, I think you're missing a really huge category of robots - appliances. Why does a dishwasher or laundry machine not qualify as a robot, after all?
Since solving it better than it is currently solved requires much more than tech (distribution, habits, pricing), it takes a number of experiments before an entrepreneur unlocks the value from the tech.
- Only 20% of attempts succeed on the hardest configs (26+ moves)
- Solving steps are not generated by RL (but could be[1])
- Cube is modified internally to transmit additional state via Bluetooth
- Highly calibrated and fine-tuned MuJoCo-based sim environment, tuned to match reality as closely as possible
- The OpenAI Five algorithm is pretty much reused as-is
- Cumulative training time = 13 thousand years, same order of magnitude as the 40 thousand years
- 32+64 V100 GPUs per training cycle
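On the calibration point: closing the sim-to-real gap is also commonly attacked with domain randomization, i.e. resampling physics parameters every episode so the policy never overfits one exact simulator. A minimal sketch of the idea in plain Python (the parameter names and ranges are made up for illustration, not taken from the actual MuJoCo setup):

```python
import random

# Hypothetical physics parameters to randomize each training episode so a
# policy trained in simulation transfers to a real world that never exactly
# matches the sim.
PARAM_RANGES = {
    "cube_mass": (0.07, 0.11),         # kg
    "finger_friction": (0.5, 1.5),     # unitless scale
    "actuator_gain": (0.8, 1.2),       # scale on commanded torque
    "observation_noise": (0.0, 0.01),  # std-dev added to sensor readings
}


def sample_randomized_params(rng=random):
    """Draw one set of physics parameters for a new training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}


params = sample_randomized_params()
```

Each episode the simulator would be reconfigured with a fresh `params` dict before `reset()`, forcing the policy to be robust to the whole parameter range rather than one calibrated point.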
But one of my favorite inventions of his was a creature that had somehow evolved wheels. With veins and nerves and such, there is hardly a creature on earth that can rotate a limb much farther than 200°, and the ones that can, like owls, we treat with a certain reverence.
Developing an artificial wrist that can spin arbitrarily would be, I'd think, a quite compelling compensation for someone having to use a prosthetic arm. It would also make for some wicked Rubik's-solving skills. I wonder how proprioception would deal with that, though...
...I'm not feeling so confident now.
I'll be impressed by RL if a) they manage to do sim2real in open environments, think Doom -> office building, or b) they manage to get data-efficient enough that sim2real is still necessary but you don't have to do real data collection with 10 parallel robots for days on end.
As someone in mobile robotics as opposed to pure manipulation, I read these papers and I'm like: "How the hell am I supposed to get this to work on a robot moving in the real world???". I don't see anyone being close to this right now.
Of course, I cannot emphasize enough that this is a good research paper! We're just really, really far away from mobile robots trained end-to-end with RL to do any kind of task in open environments. And that's fine. It means more cool research to do for me.
I thought we were still a decade away from having machines beat humans at real chess and real go, but this makes me think maybe it's just 5 years out. Very impressive.
In particular, far from being "just 5 years out", robot hands that execute chess moves have already been demoed many times, including by hobbyists with very limited resources. Reliable computer vision was a bit trickier a decade ago, but that's not a problem now. Having a robot beat grandmasters at "real chess" (i.e. the same thing as "virtual chess", but also manipulating the physical pieces) would not be considered a hard problem or a valuable achievement; it's a nifty parlor trick that could make a cute demo 10 years ago, and could be used as a homework project for engineering students nowadays. That said, it's likely to be two separate projects, as mechanical manipulation and visual recognition are likely to be different skill sets and thus different students.
Here's a random article from 2010 https://newatlas.com/chess-terminator-robot-takes-on-kramnik...
Here's a hobbyist project from 2013 https://www.robotshop.com/community/blog/show/a-chess-playin...
Here's a tutorial from 2017 on how to make the chess piece manipulation yourself - https://www.youtube.com/watch?v=NefiXZ7BCsE
Here's a student project, replacing the vision with sensors - https://www.instructables.com/id/Chess-Robot/
> Manipulating chess pieces is trivial for e.g. a pick and place robot,
Perhaps in a sterile, well-known, controlled environment; but not in a real world, novel, potentially adversarial environment.
I guess my point about AGI is that I would bet a 7-year-old could currently beat the best AI in the world at real, physical chess, played in a randomly chosen park. Kids can quickly figure out strategies in the real world, which has more degrees of freedom than the digital world of computer chess. In other words, a kid may figure out that if they place a piece in a certain position, the computer is unable to "see" or "execute" the desired move, perhaps because of the angle of the sun or some line-of-sight obstruction. While an adult might be generous and offer help, a lot of children will take advantage of the robot's weaknesses.
The pieces seemed to move by themselves.
non-world-class: a sub-30-second one-handed solve is very doable, and anyone can get under 1 minute (yes, assuming you know how to solve and have trained one-handed; of course not if you've never solved a cube in your life)
That said... I really don't understand how the hand keeps the cube "floating" around. In one-handed solving, the technique is pretty much to keep the cube fixed by holding the front/back centers with the thumb and index finger. Something like https://www.youtube.com/watch?v=mUF3aPDTO-4
I understand the achievement, but wow, this solve is HORRIBLE. What did they train the network with to get this?!