The two biggest features I want are for voice assistants to read something to me, and to do something in Google/Apple Maps hands-free. Neither of these ever works. "Siri/OK Google, add the next gas station on the route" or "take me to the Chinese restaurant in Hoboken" seem like very obvious features for a voice assistant paired with a map program.
The other thing: why can I tell Siri to bring up the Wikipedia page for George Washington, but I can't have Siri read it to me? I'm in the car, it knows that, and it just says "I can't show you that while you're driving." The response should be "do you want me to read it to you?"
Me: “OK Google, take me to the Chinese restaurant in Hoboken”
Google Assistant: “Calling Jessica Hobkin”.
The pattern for today's voice assistants is: ${brand 1}, ${action} ${brand 2} ${joiner} ${brand 3}.
So: "OK Google, take me to the Chinese restaurant in Hoboken using Google Maps".
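As a rough sketch of that "brand sandwich" template (every function and parameter name here is hypothetical, purely for illustration):

```python
# Hypothetical sketch of the ${brand 1}, ${action} ${brand 2} ${joiner} ${brand 3} template.
def build_command(wake_word, action, target, joiner="using", app=None):
    """Compose a voice command from the template parts; `app` is optional."""
    command = f"{wake_word}, {action} {target}"
    if app:
        command += f" {joiner} {app}"
    return command

print(build_command("OK Google", "take me to", "the Chinese restaurant in Hoboken",
                    app="Google Maps"))
# OK Google, take me to the Chinese restaurant in Hoboken using Google Maps
```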
Which is why I refuse to use this technology until the world gets its shit together.
"I'd like an iced tea" "An icee?" "No an iced tea" "Hi-C?"
I say "ok google, add a stop for gas" a lot, and it works well for me.
I used Bing yesterday and it was able to parse out exactly what I wanted, and then give me idiot-proof steps to making the recipe in-game. (I didn't need the steps, but it gave me what I wanted up front, easily.) I tried it twice and it was awesome both times. I'll definitely be using it in the future.
You mean these? Took me a few seconds to find; not sure how an LLM would make that easier. I guess the biggest benefit of LLMs, then, is for people who don't know how to find stuff.
Imagine if iOS had something like AppleScript and all apps exposed documented endpoints. LLMs would be able to trivially solve problems that the best voice assistants today cannot handle.
Then again, none of the current assistants can handle all that much. "Send Alex P a meeting invite tomorrow for a playdate at the Zoo; he's from out of town, so include the Zoo's full address in the invite."
"Find the next mutual free slot on the team's calendar and send out an invite for a zoom meeting at that time".
These are all things that voice assistants should have been doing a decade ago, but I presume they'd have required too much one off investment.
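The "mutual free slot" request above is just interval arithmetic once an assistant can read the calendars. A minimal sketch, assuming each calendar is a list of (start, end) busy intervals in minutes since midnight (all names and the working-day defaults are my own, hypothetical):

```python
def first_mutual_free_slot(busy_calendars, duration, day_start=9 * 60, day_end=17 * 60):
    """Return the start of the first slot of `duration` minutes free on every calendar.

    Each calendar is a list of (start, end) busy intervals in minutes since midnight.
    Returns None if no common slot fits within the working day.
    """
    # Merge everyone's busy intervals into one chronologically sorted list.
    busy = sorted(iv for cal in busy_calendars for iv in cal)
    cursor = day_start
    for start, end in busy:
        if start - cursor >= duration:   # gap before this busy block is big enough
            return cursor
        cursor = max(cursor, end)        # skip past the busy block
    return cursor if day_end - cursor >= duration else None

# Alice is busy 9:00-10:00, Bob is busy 10:30-11:00 -> first mutual 30-min slot is 10:00.
print(first_mutual_free_slot([[(540, 600)], [(630, 660)]], 30))  # 600
```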
Give an LLM proper API access and train it on some example code, and these problems become easy for it to solve. Heck, I bet with enough specialized training you could get one of the tiny, simple LLMs to do it.
Example from a couple days ago:
Me, in the shower so not able to type: "Hey Siri, add 1.5 inch brad nails to my latest shopping list note."
Siri: "Sorry, I can't help with that."
... Really, Siri? You can't do something as simple as add a line to a note in the first-party Apple Notes app?
The other day I asked it about the place I live and it made up nonsense. I was trying to get it to help me with an essay, and it was just wrong: it kept telling me things about this region that weren't real.
Do we just drive through a town, ask for a made up history about it and just be satisfied with whatever is provided?
I have tried to use it many times to learn a topic, and my experience has been that it is either frustratingly vague or incorrect.
It's not a tool that I can completely add to my workflow until it is reliable, but I seem to be the odd one out.
I find this highly concerning, but I feel the same way.
Even "smart people" I work with seem to have gulped down the LLM Kool-Aid because it's convenient and it's "cool".
Sometimes I honestly think: "just surrender to it all, believe everything the machine tells you unquestioningly, forget the fact-checking, it feels good to be ignorant... it will be fine...".
I just can't do it though.
This. I hate being told the wrong information because I will have to unlearn the wrong information. I would rather have been told nothing.
They're only good on universal truths. An amalgam of laws from around the globe doesn't tell me what the law is in my country, for example.
I feel like using an LLM today is like using search 15 years ago: you get a feel for getting the results you want.
I'd never use ChatGPT for anything that's even remotely obscure, controversial, or niche.
But through all my double-checking, I've had a phenomenal success rate getting useful, readable, valid responses on well-covered, well-documented topics such as introductory French, introductory music theory, and non-controversial history and science.
I'd love to see the example you experienced; if I ask ChatGPT "tell me about Toronto, Canada", my expectation would be high accuracy. If I asked it "Was Hum, Croatia, part of the Istrian liberation movement in the seventies?", I'd have far less confidence: it's a leading question on a less-covered topic, introducing inaccuracies in the prompt.
My point is: for a three-hour drive to the cottage, I'm OK with something that's only 95% accurate on easy topics! I'd get no better from my spouse or best friend if they made the same drive :). My life will not depend on it, I'll have an educationally good time, and the miles will pass faster :).
(Also, these conversations always seem to end in a suffocatingly self-righteous "I don't know how others can live in this post-fact world of ignorance", but that carries a LOT of assumptions and, ironically, non-factual bias as well.)
I don't think it's quite the same.
With search results, i.e. websites, you can compare between them and get a "majority opinion" if you have doubts. It doesn't guarantee correctness, but it does improve the odds.
Some sites are also more reputable and reliable than others: if the information comes from Reuters, a university's courseware, official government agencies, etc., it's probably correct.
With LLMs you get one answer and that's it. Some, like Bard, provide alternate drafts, but they all come from the same source and can all be hallucinations.
A person who uses ChatGPT must understand that it's not like Google search. The layman, however, has no idea that ChatGPT can give coherent but incorrect information, and treats the information as true.
Most people won't use it just for infotainment, and OpenAI will do its best to downplay hallucination in the fine print if it goes fully mainstream like Google search.
Rather than asking it about facts, I find it useful for deriving new insights.
For example: "Tell me 5 topics about databases that might make the front page of Hacker News." It can generate an interesting list. That's much more like the example they provided in the article; synthesizing a bedtime story is not factual.
Also, "write me some Python code to do x", where x is based on libraries that were well documented before 2022, has similarly creative results in my experience.
Like talking to most people you mean?
If people are treating LLMs like a random stranger and only making small talk, fair enough, but more often they're treating them like an infallible font of knowledge, and that's concerning.
All human interactions from all of history called and they …
I verify just about everything that I ask it, so it isn’t just a general sense of improvement.
Ah yes, I don't understand how to talk to people either!
Comments like yours make me think that no one cares about this...and judging by a lot of the other comments, I guess they don't.
Probably it's going to be people wading through a sea of AI-generated shit, and the individual is supposed to just forever "apply critical thinking" to it all. Even a call from one's spouse could be fake, and you'll just have to apply critical thinking or whatever to work out whether you were scammed.
Then it makes stuff up far less frequently.
If the next version has the same step up in performance, I will no longer consider inaccuracy an issue - even the best books have mistakes in them, they just need to be infrequent enough.
> Then it makes stuff up far less frequently.
Now there's a business model for a ChatGPT-like service.
$1/month: Almost always wrong
$10/month: 50/50 chance of being right or wrong
$100/month: right 95% of the time
"Hey Google, why do ____ happen?" "I'm sorry, I don't know anything about that"
But you're GOOGLE! Google it! What the heck lol
So yeah, ChatGPT being able to hear what I say and give me info about it would be great! My holdup has been wakewords.
Our REST endpoint can talk to whatever you want and we’ll have native ChatGPT soon.
Still can’t quite make it work. I feel like I could learn a lot if I could have random conversations with GPT.
+ bonus if someone else in the car got excited when I see cows. Don’t care if it’s an AI.