But the fact that we can't yet do that doesn't mean that bots are going to be useless. That'd be like looking at the first ocean-going freight vessels and saying "well, it can't move 300,000 tonnes of freight, so it's useless", or the Wright Flyer and saying that because it can't cross the Atlantic, it's a bit rubbish. (I'm aware this is close to straw manning, but bear with me...)
Sure, early proofs of concept are often both limited in scope and fairly dire. But that doesn't mean there's no potential utility in them and what they do. I suspect bots are similar. Initial, narrow use-case versions will provide real value in specific circumstances, and eventually they'll become more general in nature. But decrying them at this stage seems a bit like throwing the baby out with the bathwater.
Does anyone else find it a little bit gross when companies take stock photographs, give them a name, and write stories about how they use the product? It's clearly meant to work as social proof and to look almost like an endorsement from a peer.
It's just another warmer, fuzzier dark pattern in my opinion.
It kind of makes you wonder if this whole article is just PR/advertising to say "look at us! no bots! personal touch!"
http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps....
The author of the article could have collected all the conversations and trained a bot to converse correctly with people asking similar questions.
there's no reason that having the same question asked in different ways should be a problem.
language is structured. structured learning and prediction have existed for more than two decades, and just recently there have been very nice improvements to known methods (learning to search, neural networks for structured learning, etc.).
one can try to summarize an answer to a question from relevant fetched documents. summarization is a structured prediction task.
for example, in the conversations with a bot, you store all of the questions and your answers.
your answers were formed by using documents that contain the needed information. now you're trying to find a mapping that will successfully fetch the relevant documents for the question, and then summarize those documents into text as close as possible to your stored answer (the summarized text should be similar to it).
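The fetch-and-summarize loop above can be sketched with plain TF-IDF retrieval plus extractive sentence selection. This is a toy stand-in, not the learned mapping the comment describes: the document list, the `answer` function, and the cosine-similarity retrieval step are all my own illustrative choices, written in pure stdlib Python.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def tf_idf_vectors(texts):
    """Build simple TF-IDF vectors (dicts of word -> weight)."""
    tokenized = [tokenize(t) for t in texts]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(texts)
    return [{w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf}
            for tf in (Counter(toks) for toks in tokenized)]

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, docs, max_sentences=1):
    """Fetch the most relevant document, then keep the sentence(s)
    overlapping most with the question -- a crude, hand-rolled
    stand-in for a learned fetch-and-summarize mapping."""
    vecs = tf_idf_vectors(docs + [question])
    q_vec = vecs[-1]
    best = max(range(len(docs)), key=lambda i: cosine(vecs[i], q_vec))
    sentences = re.split(r"(?<=[.!?])\s+", docs[best])
    q_words = set(tokenize(question))
    scored = sorted(sentences,
                    key=lambda s: len(q_words & set(tokenize(s))),
                    reverse=True)
    return " ".join(scored[:max_sentences])

docs = [
    "Our store opens at 9am on weekdays. We close at 5pm.",
    "Shipping takes three to five business days. Returns are free.",
]
print(answer("When does the store open?", docs))
```

In a real system, both the retrieval scores and the keep/drop decisions would be learned from the stored question–answer pairs rather than hard-coded.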
structured prediction techniques use simple methods such as pos tagging, and then prune the dependency parse tree of sentences in a document to shorten it: excluding whole sentences, text-between-commas, unneeded adjectives, etc. (these decisions are based on statistical machine learning, not some silly rule-based technique; one can incorporate word2vec features or other neural network magic)
it's not impossible, given enough data, to build a bot that would interact successfully.
sarcasm and emotion are still a bit away, mostly because they require knowledge about the world; if your world is a small set of documents, you won't successfully pick up on sarcasm or emotion. this is also the case with people when they arrive in a different culture.