From what they promise I'd expect their team to be stacked with AI researchers, but it looks like just a CEO, a COO and a single PhD "advisor." Who's going to actually build all this? Maybe that's why their "careers" page shows that they are looking for everything from embedded systems to ML engineers.
Seems like either a money grab or an overly idealistic founding team happy to promise the world and figure out how to deliver it later.
Edit: nathantross, you posted this and you're the COO, right? Wanna respond?
1. Market Hysteria as a revolutionary product in whatever space is currently attracting the most hype. Get everyone excited by promising to bring a popular science fiction book/film to life.
2. Spend half an afternoon cloning Bitcoin to create a new cryptocurrency linked to your product, and start selling the coins. Offer a limited-time "early mover" price to capitalize on FOMO.
3. Work your press contacts to create more buzz around Hysteria.
4. Once you've collected a few million dollars selling coins, go out of business. You haven't really defrauded anyone (you delivered the promised coins, and it isn't your fault they didn't turn out to be worth anything), and in an industry whose mantra is "fail fast," your exit isn't likely to attract much attention.
Edit: Found it: https://getasteria.com/currency
Anyone know someone else who's trying something similar?
So, it's your friend...but it's also there to sell you stuff. But just think of it as your friend.
It sounds like they don't quite know what they're selling or how it's going to be useful to people (which they kinda admit). I could see getting utility from a VA that also suggests services for specific needs, but a friendship with "ambient intelligence" behind it figuring out how it's going to chum up its next product placement? If it's really a "true AI" why not sell it on that merit alone?
As if we don't have enough narcissistic people to deal with every day. ;)
Doing that, it would sound like an American tourist.
(There are lots of articles by US expats in Europe, or European expats in the US, describing how US-Americans tend to speak a lot louder than Europeans in quiet settings, from museums to restaurants.)
This leads to an interesting question: Which culture should a voice assistant follow? Should there be multiple variants of each assistant?
(Maybe.)
That's what museum tour guides, human or automated, do today.
Right now, you can find the "done wrong" version of this in "dating sites" populated by chatbots.
Synopsis from the publisher:
> By day, Angie, a twenty-year veteran of the tech industry, is a data analyst at Tomo, the world's largest social networking company; by night, she exploits her database access to profile domestic abusers and kill the worst of them. She can't change her own traumatic past, but she can save other women.
> When Tomo introduces a deceptive new product that preys on users’ fears to drive up its own revenue, Angie sees Tomo for what it really is—another evil abuser. Using her coding and hacking expertise, she decides to destroy Tomo by building a new social network that is completely distributed, compartmentalized, and unstoppable. If she succeeds, it will be the end of all centralized power in the Internet.
> But how can an anti-social, one-armed programmer with too many dark secrets succeed when the world’s largest tech company is out to crush her and a no-name government black ops agency sets a psychopath to look into her growing digital footprint?
I prefer a techno-optimistic point of view shown here http://foundersfund.com/anatomy-of-next/
Some words have become so vague and ambiguous in the computer world that sometimes I wish we would stop using them altogether, like: who is a hacker, what is AI, what is Cloud, etc.
Siri, Google Now, Cortana, Amazon Echo and the others claim to be "intelligent" in some sense, but they're only as smart as their programmers.
Please just stop labeling your next super cool algorithm an "AI".
By our current definitions:
"Cloud" = hosted infrastructure.
"AI" = machine learning.
"Hacker" = programmer.
We've watered these things down to the point that they no longer have any meaning.
"Serverless" = server
I don't think so. ML/DL is just the beginning; better AI approaches will be discovered in the future. Note that computer neural networks are just simulations of some aspects of reality, and they're not complete yet. Many intricacies remain to be researched.
The internet and social media can both be incredibly connecting things (that is their purpose after all, right?).
An assistant, AI or otherwise, that can handle the minutiae of life isn't a bad thing if you're focusing on greater problems or concerns within your life. The danger is that many people are already obsessed with nothing but minutiae and will instead be ruled by their bot rather than vice versa. See also everyone obsessed with the Tamagotchi-like behavior of their cellphones.
Hardware is really hard.
The hardware didn't sound like anything too special to me, especially since it only needs to handle audio. Fitting enough processing power to handle real-time "AI" in a package that size is the only thing jumping out at me, and I'm sure they're planning to offload that work to some 'cloud' anyway. (I personally dislike functionless, network-dependent hardware, but everybody seems to be doing it...)
Promising to deliver an AI that people could see as a friend is absolutely insane, though. I don't see people befriending something that couldn't pass the Turing Test, and that bar will likely stand for at least another decade. Speech recognition and synthesis are in fairly good shape, but human interaction that isn't transparently shallow is not.
This is an interesting case. It turns out that, with a creative approach, it is possible to persuade a human that there is another human behind the screen; see ELIZA and the various "Turing tests". The methods are quite similar: constrain the domain and/or creatively manipulate the human's expectations (e.g. the program that "passed" the Turing Test pretended to be a 13-year-old boy, so the human jury tolerated its errors). The question is not how to fool humans but how to make such a product non-trivially useful.
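The ELIZA-style technique mentioned above (constrain the domain, reflect the user's own words back as questions so errors read as conversational quirks) can be sketched in a few lines. All patterns and canned responses here are illustrative, not ELIZA's actual script:

```python
import random
import re

# Pronoun reflection so "I am sad" comes back as "you are sad".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (pattern, response templates); {0} is filled with the reflected capture.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would {0} really help you?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?"]),
]

# Non-committal fallbacks keep the illusion alive when nothing matches.
FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, templates in RULES:
        m = pattern.search(text)
        if m:
            return random.choice(templates).format(reflect(m.group(1)))
    return random.choice(FALLBACKS)

print(respond("I am worried about AI assistants"))
```

The trick is exactly the expectation management described above: the "therapist" framing makes evasive, reflected questions feel natural rather than broken.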
I think the best approach currently available is the one used by Facebook M: have human workers interact with customers while storing all the interaction data, and experiment with training state-of-the-art ML models on it to eventually replace the human workers.
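That human-in-the-loop pattern can be sketched roughly as below. Every name here (`HybridAssistant`, `model_predict`, the confidence threshold) is a hypothetical placeholder, not Facebook's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class HybridAssistant:
    """Route requests to humans, log the answers as training data,
    and let the model take over requests it answers confidently."""
    confidence_threshold: float = 0.9
    training_log: list = field(default_factory=list)

    def model_predict(self, request: str):
        # Placeholder for a real ML model; returns (answer, confidence).
        # It starts out useless, so everything falls through to humans.
        return None, 0.0

    def ask_human_operator(self, request: str) -> str:
        # Placeholder: in production this would queue the request
        # for a human operator and wait for their reply.
        return f"[human reply to: {request}]"

    def handle(self, request: str) -> str:
        answer, confidence = self.model_predict(request)
        if confidence >= self.confidence_threshold:
            return answer  # model is confident enough to answer alone
        answer = self.ask_human_operator(request)
        # Every human-answered request becomes training data.
        self.training_log.append((request, answer))
        return answer

assistant = HybridAssistant()
print(assistant.handle("Book me a table for two tonight"))
```

The point of the design is that the expensive human workforce doubles as a labeling pipeline: over time, more traffic clears the confidence threshold and fewer requests reach a person.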
Any wireless radio chip (BLE, Bluetooth, Ant, Wifi), if it needs to be on all the time will also have a huge impact on battery life.
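To put a rough number on that impact, here is a back-of-envelope calculation. All capacity and current figures are made-up assumptions for illustration, not measurements of any real product or radio chip:

```python
# Back-of-envelope battery math for an always-on radio.
battery_mah = 500.0          # assumed small wearable battery capacity
rest_of_system_ma = 0.5      # assumed average sleep current of everything else
radio_idle_ma = 1.5          # assumed average draw of an always-listening radio

hours_without_radio = battery_mah / rest_of_system_ma
hours_with_radio = battery_mah / (rest_of_system_ma + radio_idle_ma)

print(f"without radio: {hours_without_radio:.0f} h")   # 1000 h
print(f"with always-on radio: {hours_with_radio:.0f} h")  # 250 h
```

Under these (invented) numbers, keeping the radio awake cuts runtime by 4x, which is why real designs lean on duty cycling and low-power wake-up modes.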
I wish good luck to the Asteria team, and am genuinely curious about how they'll pull it off.
Also, we don't know whether the company will really develop their product to fruition; they may simply build a good demo (though not one viable as a product) and be acqui-hired by one of the big players.
We really need to embrace hardware more.
Fascinating stuff. Other topics in the book: nanotechnology and bioengineering as everyday commodities, gender fluidity and the CC-human relationship. Reminds me that the book is due for a reread!