Joining Google gives him ready access to data sets of almost unimaginable size, as well as unparalleled infrastructure and skills for handling such large data sets, putting him in an ideal position to connect researchers in academic and corporate settings with the data, infrastructure, and data management skills they need to make their visions a reality.
According to the MIT Technology Review[1], he will be working with Peter Norvig, who is not just Google's Director of Research, but a well-known figure in AI.
--
[1] http://www.technologyreview.com/view/508896/what-google-sees...
==================================
I don't think that's a very fair assessment of Kurzweil's role in technology.
He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by President Clinton.
As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.
"Oh wow, computers can read printed documents and recognize the characters!"
"Oh wow, computers can read written text aloud!"
"Oh wow, computers can recognize speech!"
"Oh wow, computer synthesizers can sound just like pianos now!"
I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.
“Ray’s contributions to science and technology, through research in character and speech recognition and machine learning, have led to technological achievements that have had an enormous impact on society – such as the Kurzweil Reading Machine, used by Stevie Wonder and others to have print read aloud. We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we’re working on at Google.”
He was weird then too. That's why he did such interesting work. His work, combined with a lack of fame at that time, just kept the weird from showing through.
I suspect that genius is made up almost, but not quite, entirely of crazy.
The problem with Peter Norvig is that he comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While they have their uses in specific areas, they will never lead us to a general-purpose strong AI.
Lately Kurzweil has come around to the view that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week, I needed to translate "I need to meet up" into Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translation fails, because statistics and probabilities will never teach machines to "understand" language.
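To make the failure mode concrete, here's a toy sketch of phrase-based statistical translation. This is NOT Google's actual system; the phrase table and probabilities are invented for the example. The point is that a purely statistical decoder picks the most probable target phrase and has no model of which sense of "meet" the sentence intends:

```python
# Toy phrase table (invented probabilities). "meet up" is ambiguous between
# "满足" (satisfy, as in "meet a requirement" - more common in text) and
# "见面" (meet a person - the sense actually intended here).
phrase_table = {
    "i need to": [("我需要", 0.9)],
    "meet up": [
        ("满足", 0.6),  # "satisfy" - wins on raw frequency
        ("见面", 0.4),  # "meet (a person)" - the intended sense
    ],
}

def translate(phrases):
    # Greedy decoding: highest-probability option per phrase, context ignored.
    return "".join(max(phrase_table[p], key=lambda opt: opt[1])[0] for p in phrases)

print(translate(["i need to", "meet up"]))  # -> 我需要满足 ("I need to satisfy")
```

A real system scores whole phrase sequences with a language model, but the underlying problem is the same: it optimizes probability, not meaning.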
[1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-f...
You know, it would be wonderful if Ray Kurzweil actually works on software/hardware projects, and he's just hush-hush because he doesn't want to release experiments. Maybe he does more than writing books and speaking at conferences, and he secretly provisions ec2 clusters to experiment with Hadoop or whatever. Maybe he's not just some old geezer that pops lots of pills, maybe he's an old geezer that pops pills and writes Go.
At least, that's what I tell myself to not be as angry about his "prediction from a distance" branding.
On a somewhat related note, http://heybryan.org/fernhout/ has some old emails someone sent to Ray, exploring his lack of involvement in the open source transhumanist hardware/software community.
Um...
More important question: how many AI researchers respect the last 20 years of his work?
It's a shame; he's brought many great contributions to our field, but I fear he jumped the shark a while ago. Maybe going to Google will force him to work on solutions to problems whose correctness can be more easily assessed.
Really? Because if so, then they stole that quote almost verbatim from Mitch Kapor when he was discussing the singularity in 2007. And it seems to have a lot less relevance to a book about how the brain works than it does to an imagined singularity.
>Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people...This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."
I am very grateful for the inventions he brought forth and his work on AI but I think his current goals in life are unreasonable and of course, related to the death of his father.
As much as he doesn't want to be human anymore, his entire goal in life relies on the human condition...to reconnect with his father and transcend life in its current form.
I think he could do so much more at the moment if, like you said, he would focus on problems that can be solved as soon as possible and demonstrate a use of his solution.
While there are a number of obvious problems with the theory, it's still an invaluable idea. Even if 2042 doesn't pan out, Kurzweil has still provided an enormously powerful tool to help understand the world around us. (Well, technically he didn't invent the idea, but he was the one who did most of the work aggregating the data.)
What field? The fluffsters? Saying stuff like "Ideas are presented in a way to fit nicely together, but ultimately lack any depth or critical insights" is saying nothing.
A hire like this one certainly reinforces that perception.
I don't know if it's truly possible to accomplish, but it's fascinating to see a major company taking steps in that direction.
The comments about book scanning led to some controversy at the time [2], which gave a glimpse into Google's AI motivations that have now become much more explicit, thanks to projects like Google Now, Google Glass, and self-driving cars.
1. http://www.edge.org/3rd_culture/dyson05/dyson05_index.html
2. http://www.zdnet.com/google-side-steps-ai-rumours-3039237225...
If you take that to the limit, the logical consequence is some sort of planet-wide consciousness that can instantly pull up any of humanity's collective knowledge at a moment's notice.
Singularity U, as far as I understand, is not really there so people can more quickly get to the point of uploading their brain to the cloud or anything - it's essentially for business strategists who want a better grasp of where things will be 5-10+ years out. If the Goog believes strongly in the Kurz's ability to do this, then it seems like a pretty nice score for the Goog.
Maybe because of his role at Google, "Director of Engineering". That's not a good description of what Singularity University offers their customers. They do maybe one or two field trips to BioCurious and call it quits.
Also, why is Singularity University managing TedxAustin? That was a bizarre email to see.
Why would this not be in alignment with Google's aim for such a position? Why would they not want a strategist who they believe could direct their engineering staff in this manner?
The people who attend the university are CEOs, CTOs... Directors of Engineering, etc. It's not for fringe kooks to congregate in celebration of the upcoming nerd rapture. Not at $25k/10 weeks it ain't.
I get that he's a polarizing figure. But there are some very powerful people in this world who believe the man can walk on water.
I see what DRF means, and The Singularity is Near did seem mostly a perfunctory literature review, with important issues not discussed, just skimmed over. (For example, he doesn't discuss the causes of accelerating returns; he supports only the effects with data, not the causes. Another example: is it necessarily true that we are intelligent enough to understand ourselves? We're effective when we can decompose something hierarchically into simpler concepts... but what if there isn't such a decomposition of intelligence? i.e. the simplest decomposition is too complex for us to grasp. Hofstadter asks if a giraffe is intelligent enough to understand itself.)
But I thought he supported his basic thesis, that progress is accelerating, compellingly. He really did a great job (seemingly the result of ongoing criticism, and of him finding ways to refute it).
I agree with this. It seems to be a huge hole in the entire discussion. It's not enough to cite historical data, and assert that exponential growth will continue indefinitely. I could speculate a bit about some explanations. But I'm curious if there are any good discussions out there, does anyone have some recommendations?
Read between the lines: "next decade’s ‘unrealistic’ visions" likely means nothing less than brain-computer interfaces, with the end goal of extending life by storing the entire human mind on a machine. Certainly not far off from Kurzweil's timelines in the Law of Accelerating Returns. I can understand why the PR does not say this, but it seems clear this is where Kurzweil would want to invest his time.
He's a visionary who can deliver a finished product. I think he must have some pretty specific ideas, and he wants to partner with Google.
A few guesses:
- New interfaces to replace keyboard/mouse/touch. Voice, gesture, face, brainwaves. Sign language with humming, blinking, and pupil pointing. Works with tablets, TVs, wearables, cars, buildings, ATMs, etc.
- SuperPets (r) that can pass the Turing test. And do the shopping.
- Surgically implanted Bluetooth. (It could literally be a tooth!)
- Hover skateboards.
- The Matrix. (Or the 13th Floor, which was a better movie in my not-so humble opinion.)
I don't think it'll have to do with life-extension though. That's just too crazy far out-there.
Unfortunately, it turns out you can only get a limited number of bits out by looking at brainwaves (EEG). Gesture is much higher bandwidth, and keyboards seem to be the highest.
And I can't type as fast as I can talk. So I'm thinking gesticulating > speaking > typing.
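The bandwidth gap can be put in rough numbers. A back-of-envelope sketch (the entropy and rate figures below are my own assumptions, not measurements):

```python
# Rough input-bandwidth comparison. Assumed figures: English text carries
# ~1.3 bits of entropy per character (Shannon's classic estimates were in
# the 1-1.5 range), an average word is ~6 characters including the space,
# typing runs ~80 wpm, speech ~150 wpm, and consumer EEG-based interfaces
# manage on the order of a few bits per second.
BITS_PER_CHAR = 1.3
CHARS_PER_WORD = 6

def wpm_to_bits_per_sec(wpm):
    return wpm * CHARS_PER_WORD * BITS_PER_CHAR / 60

eeg_bits_per_sec = 2.0  # order-of-magnitude figure for EEG BCIs

for name, rate in [("EEG", eeg_bits_per_sec),
                   ("typing (80 wpm)", wpm_to_bits_per_sec(80)),
                   ("speech (150 wpm)", wpm_to_bits_per_sec(150))]:
    print(f"{name}: ~{rate:.0f} bits/s")
```

Under these assumptions speech edges out typing by roughly 2x, and both beat EEG by an order of magnitude - consistent with the ordering above, though the exact numbers are obviously debatable.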
I don't know exactly what Google's motives are here, I suspect it's something less than actually bringing about some of his, let's say, loftier ideas.
I found the hire curious from the standpoint that Kurzweil's tendency to handwave rather than retreat to data has historically been a red flag in the hiring process at Google. This tended to unfairly penalize theorists over experimentalists at Google. One wonders if they've changed.
I remember him giving a tech talk and talking about how many computers you'd need to simulate a brain and how nobody would put that together for years yet, and chuckling knowingly :-).
You think he's a theorist rather than an experimentalist? How can you possibly get that idea with all of his game-changing inventions?
I don't know the specifics of this situation though, so take it with a grain of salt.
I mean even if you don't believe in the Singularity, you must believe in Google, right?
Believe that they'd never be mistaken?
This makes "the singularity" sound very much like a religion.
I think what most people mean is, "even if you don't believe that anything other than squishy brains can ever recursively do what the brain does".
-- The Age of Spiritual Machines (1999)
But maybe he's been there and done that, and wants mucho resources from day one. Maybe the AI space has grown up and it's hard to start up companies now, you need the resources and big data sets to do anything significant? Or he's just after the free lunches.
In 2008, Ray Kurzweil said in an expert panel in the National Academy of Engineering that solar power will scale up to produce all the energy needs of Earth's people in 20 years.
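For context, here is the arithmetic behind that prediction, as Kurzweil usually presents it (the round numbers are assumptions of mine, not his exact figures):

```python
# Kurzweil's solar argument in round numbers: assume solar capacity doubles
# roughly every 2 years, and that solar supplied about 1/1000 of world
# energy at the time. Then 20 years = 10 doublings = a 1024x increase.
doubling_period_years = 2
current_share = 1 / 1000  # assumed fraction of world energy from solar

years = 20
doublings = years // doubling_period_years
projected_share = current_share * 2 ** doublings
print(f"{doublings} doublings -> {projected_share:.3f} of world demand")
# 10 doublings -> 1.024 of world demand, i.e. just over 100%
```

Whether the doubling actually continues for 20 years is, of course, the entire dispute.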
lololololol
> 1%... you're pretty much finished... try that with product submission schedules [1]
...so now we know who to blame for future Google product delays.
[1]: http://www.youtube.com/watch?v=zihTWh5i2C4
EDIT: added the source link
Google is badly managed but they're not going to subject a heavyweight like that to their typical nonsense (blind allocation, manager-as-SPOF) and if they do, I'm sure he'll be just fine.