I recently watched a YouTube video where someone reacted to Sam Altman's
statements. One comment he made left me wondering if I've missed something in
the current "AI summer." Altman suggested that "robots will do all current jobs
on our behalf, and we humans must therefore find new things to do."
If I understood correctly, OpenAI's CEO is implying that large language
models (LLMs) have something significant to contribute to robotics. This
strikes me as curious because, as far as I know, these are two very different
problem domains, and only one of them (LLMs) has seen rapid advances
recently; the other (robotics) hasn't.
Robotics in the physical world demands fast and precise responses. A robot
walking to a grocery store and picking up items, for example, needs quick
and accurate decision-making to avoid falling, breaking things, or picking up
the wrong items. LLMs, on the other hand, provide slow, approximate answers
that often resemble truth but can include hallucinations. This works well
for paraphrasing non-exact information but seems ill-suited to the precision
robotics requires.
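To make the timing mismatch concrete, here is a rough back-of-the-envelope comparison. The loop rate and latency figures are illustrative assumptions on my part (low-level robot control loops commonly run at hundreds of Hz; a single LLM response typically takes hundreds of milliseconds or more), not measurements of any particular system:

```python
# Rough timing-budget comparison (illustrative figures, not measurements).

control_rate_hz = 500                # assumed low-level control loop rate
budget_ms = 1000 / control_rate_hz   # time available per control step

llm_latency_ms = 500                 # assumed end-to-end latency of one LLM call

steps_missed = llm_latency_ms / budget_ms
print(f"Per-step budget: {budget_ms:.0f} ms")
print(f"One LLM call spans ~{steps_missed:.0f} control steps")
```

Under these assumptions, one LLM query occupies the time of hundreds of control steps, which is the intuition behind my doubt: whatever role LLMs play, it seemingly cannot be inside the fast inner loop.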
So, my questions are:
1. Have LLMs contributed to advancements in robotics?
2. Is there any reason to believe they will?
From my understanding, the current LLM paradigm relies on scaling up compute
for diminishing returns, which seems like the opposite of what useful
robotics needs: efficient and accurate computation.