==================================
I don't think that's a very fair assessment of Kurzweil's role in technology.
He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by President Clinton.
As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.
"Oh wow, computers can read printed documents and recognize the characters!"
"Oh wow, computers can read written text aloud!"
"Oh wow, computers can recognize speech!"
"Oh wow, computer synthesizers can sound just like pianos now!"
I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.
I'm sure he is a smart guy, but I think we have put him on a pedestal when he probably is not as remarkable as we want him to be.
I find it instructive to occasionally go to YouTube and load up commercials for Windows 95, 3.1, the first Mac, etc., or even to dust off and boot up an old computer I haven't touched for decades. Not to get too pretentious, but it's a bit like Proust, whose childhood memories came flooding back to him from the taste of a madeleine he had eaten as a child.
When you really make a concerted effort to remember just how primitive previous generations of computing were, I think it puts Kurzweil's predictions and accomplishments in a much more impressive context.
This was the state-of-the-art PC back when Ray was forming his first companies: https://www.youtube.com/watch?v=vAhp_LzvSWk
I posted some other thoughts about Ray's track record a while back:
==========================================
I read his predictions for 2009 (written in the late 90s) only a couple of years before they were supposed to come about, and many seemed kind of far-fetched - and then all of a sudden the iPhone, iPad, Google self-driving car, Siri, Google Glass, and Watson came out, and he's pretty much batting a thousand.
Some of those predictions were a year or two late, in 2010 or 2011, but do a couple of years really matter in the grand scheme of things?
Predicting in the late 90s that self-driving cars would arrive within ten years is pretty extraordinary, especially if you go to YouTube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then.
Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved - see his predictions on the adoption of e-learning in the classroom, or on using speech recognition as an interface in an office environment.
Even if a few of his more outlandish predictions, like immortality, are a few decades - or even generations - off, I think the road map of technological progress he outlines seems pretty inevitable, yet still awe-inspiring.
There have been predictions of self-driving cars for more than half a century. It's in Disney's "Magic Highway" from 1958, for example. There was an episode of Nova from the 1980s showing CMU's work in making a self-driving van.
Researching now, Wikipedia claims: "In 1995, Dickmanns' re-engineered autonomous S-Class Mercedes-Benz took a 1600 km trip from Munich in Bavaria to Copenhagen in Denmark and back, using saccadic computer vision and transputers to react in real time. The robot achieved speeds exceeding 175 km/h on the German Autobahn, with a mean time between human interventions of 9 km, or 95% autonomous driving. Again it drove in traffic, executing manoeuvres to pass other cars. Despite being a research system without emphasis on long distance reliability, it drove up to 158 km without human intervention."
You'll note that 1995 is before "the late 90s." It's not much of a jump to think that a working research system of 1995 could be turned into something production-ready within 20 years. And you say "a year or two late", but how have you decided that something passes the test?
For example, Google Glass is the continuation of decades of research in augmented reality displays going back to the 1960s. I read about some of the research in the 1993 Communications of the ACM "Special issue on computer augmented environments."
Gibson said "The future is already here — it's just not very evenly distributed." I look at your statement of batting a thousand and can't help but wonder if that's because Kurzweil was batting a thousand when the book was written. It's no special trick to say that the neat research projects of now will be commercial products in a decade or two.
Here's the list of 15 predictions for 2009 from "The Age of Spiritual Machines" (1999), copied from Wikipedia and with my commentary:
* Most books will be read on screens rather than paper -- still hasn't happened. In terms of published books, a Sept. 2012 article says "The overall growth of 89.1 per cent in digital sales went from £77m to £145m, while physical book sales fell from £985m to £982m - and 3.8 per cent by volume from £260m to £251m." I'm using sales as a proxy for reads, and while e-books are generally cheaper than physical ones, there's a huge number of physical used books, and library books, which aren't counted in those figures.
* Most text will be created using speech recognition technology. -- entirely wrong (there goes your 'batting a thousand')
* Intelligent roads and driverless cars will be in use, mostly on highways. -- See above. This is little more common now than it was when the prediction was made.
* People use personal computers the size of rings, pins, credit cards and books. -- The "ring" must surely be an allusion to the JavaRing, which Jakob Nielsen had, and talked about, in 1998, so in that respect, these already existed when Kurzweil made the prediction. Tandy sold pocket computers during the 1980s. These were calculator-sized portable computers smaller than a book, and they even ran BASIC. So this prediction was true when it was made.
* Personal worn computers provide monitoring of body functions, automated identity and directions for navigation. -- Again, this was true when it was made. The JavaRing would do automated identity. The Benefon Esc! was the first "mobile phone and GPS navigator integrated in one product", and it came out in late 1999.
* Cables are disappearing. Computer peripherals use wireless communication. -- I'm mixed about this. I look around and see several USB cables and power chargers. Few people wire their houses for Ethernet these days, though some do for gigabit. Wi-Fi is a great thing, but the term Wi-Fi was "first used commercially in August 1999", so it's not like it was an amazing prediction. There are Bluetooth mice and other peripherals, but there were also infrared versions of the same a decade previous.
* People can talk to their computer to give commands. -- You mention Siri, but Macs have had built-in speech control since the 1990s, with PlainTalk. Looking now, it was first added in 1993, and is on every OS X installation. So this capability already existed when the prediction was made. That's to say nothing of assistive technologies like Dragon, which supported voice commands in the 1990s.
* Computer displays built into eyeglasses for augmented reality are used. -- "are used" is such a wishy-washy term. Steve Mann has been using wearable computers (the EyeTap) since at least 1981. Originally it was quite large. By the late 1990s it was eyeglasses and a small device on the belt. It's no surprise that in 10 years there would be at least one person - Steve Mann - using a system where the computer was built into the eyeglasses. Which he does. A better prediction would have been "are used by over 100,000 people."
* Computers can recognize their owner's face from a picture or video. -- What's this supposed to mean? There was computer facial recognition already when the prediction was made.
* Three-dimensional chips are commonly used. -- No. Well, perhaps, depending on your definition of "3D." Says Wikipedia, "The semiconductor industry is pursuing this promising technology in many different forms, but it is not yet widely used; consequently, the definition is still somewhat fluid."
* Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space. -- No.
* A 1000 dollar PC can perform about a trillion calculations per second. -- This happened. This is also an extrapolation from Moore's law, and so in some sense was predicted a decade previous (see the back-of-envelope sketch after this list). PS3s came out in 2006 with a peak performance estimated at 2 teraflops, giving the hardware industry several years of buffer to achieve Kurzweil's goal.
* There is increasing interest in massively parallel neural nets, genetic algorithms and other forms of "chaotic" or complexity theory computing. -- Meh? The late 1990s and early 2000s were a heyday for that field. Now it's quieted down. I know 'complexity'-based companies in town that went bust after the dot-com collapse cut off their funding.
* Research has been initiated on reverse engineering the brain through both destructive and non-invasive scans. -- Was already being done long before then, so I don't know what "initiated" means.
* Autonomous nanoengineered machines have been demonstrated and include their own computational controls. -- Ah-ha-ha-ha! Yes, Drexler's dream of a nanotech world. Hasn't happened. Still a long way from happening.
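On the trillion-calculations bullet above: the Moore's-law extrapolation is simple enough to sketch in a few lines of Python. The 1999 baseline of roughly 1 GFLOPS per $1000 and the 18-month doubling period are my own assumptions, not Kurzweil's figures:

    # Back-of-envelope Moore's-law extrapolation from an assumed baseline.
    base_gflops = 1.0        # assumed: ~1 GFLOPS per $1000 in 1999
    doubling_years = 1.5     # assumed: classic 18-month doubling period
    years = 2009 - 1999

    projected = base_gflops * 2 ** (years / doubling_years)
    print(f"~{projected:.0f} GFLOPS per $1000 by 2009")  # ~102 GFLOPS

A straight extrapolation like that lands around 100 GFLOPS, an order of magnitude short of a teraflop; it took GPUs and the PS3's Cell to put "a trillion calculations per second" in a consumer-priced box around that time.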
So several of these outright did not happen. Many of the rest were already true when they were made, so weren't really predictions. How do you draw the conclusion that these are impressive for their insight into what the future would bring?
This is a problem common to other AI pioneers, including Norvig.
People had lists of phonemes and improved those.
Then people experimented with different waveforms.
Here's a collection of different voices. (Poor sound quality, unfortunately.) (http://www.youtube.com/watch?v=aFQOYBNAMHg)
Why did all those people take so long to make the jump to diphones - to smoothing out the joins between individual phonemes?
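Roughly, the diphone insight is that the middle of a phoneme is acoustically stable while the transitions carry all the trouble, so you cut your units mid-phoneme and smooth the joins instead of butting raw phoneme recordings together. Here's a minimal Python/NumPy sketch of just the join-smoothing part (the sample rate and crossfade window are illustrative assumptions, not values from any real TTS system):

    import numpy as np

    SR = 16000  # sample rate in Hz -- an assumption for illustration

    def crossfade_join(a, b, overlap_ms=10.0):
        # Overlap the tail of one unit with the head of the next and
        # linearly crossfade, instead of leaving a hard discontinuity.
        n = int(SR * overlap_ms / 1000)
        fade = np.linspace(1.0, 0.0, n)
        middle = a[-n:] * fade + b[:n] * (1.0 - fade)
        return np.concatenate([a[:-n], middle, b[n:]])

    def naive_concat(units):
        # Raw phoneme concatenation: audible clicks at every boundary.
        return np.concatenate(list(units))

    def smoothed_concat(units):
        # Diphone-style concatenation: every join gets crossfaded.
        out = units[0]
        for u in units[1:]:
            out = crossfade_join(out, u)
        return out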
You had the Japanese with their '5th generation' research physically modelling the human mouth, tongue, and larynx, and blowing air through the model. (You don't hear much about the Japanese 5th generation stuff nowadays. I'd be interested if there's a list anywhere of things that came from that research.)
Saying "talking computers" is easy; doing it is tricky.
EDIT: (http://www.japan-101.com/business/fifth_generation_computer....)
> By any measure the project was an abject failure. At the end of the ten year period they had burned through over 50 billion yen and the program was terminated without having met its goals. The workstations had no appeal in a market where single-CPU systems could outrun them, the software systems never worked, and the entire concept was then made obsolete by the internet.
This does not mean he was not smart; I am simply stating a general truth: there are few "original" inventions, and many "obvious" inventions. If you do not think these things were obvious, how long do you think it would have taken for the next implementation to appear? I would bet 1-3 years at most. No single human is that extraordinary -- some just work harder than others at becoming visible.
What does "predicting" a thing has with actually IMPLEMENTING it? Here, I predict "1000 days runtime per charge laptop batteries". Should I get a patent for this "prediction"?
No, text-to-speech, speech recognition and synthesis are not "fairly obvious applications of computer science that anyone could have pioneered". And even if it was so, to be involved in the pioneering of ALL three takes some kind of genius.
Not only that, but all three fields are quite open today, and far from complete. Speech recognition in particular is extremely limited even today.
Plus, you'd be surprised how many such (or even more) "obvious applications" all those "anyones" failed to pioneer. Heck, the Incas didn't even have wheels.
(That said, I don't consider Kurzweil's current ideas re: Singularity and "immortality" impressive. He sounds more like the archetypal rich guy (from the Pharaohs to Howard Hughes) trying to cheat death (a valid pursuit, I guess) than a scientist.)