And that is the main issue. I can trust a friend or co-worker to tell me when they don't know, because they like me and have no incentive to feed me bullshit. Sure, they might look better in the moment if they did, but any doubt on my side makes them look bad, and it's nearly certain that such doubt arises sooner or later.
ChatGPT has no concept of this, so it will happily just give you something plausible but wrong.
To which I'd respond that it's important not to ignore the continuity of life. The person giving you the information may themselves learn they were wrong and let you know later, unprompted. Or you may learn the facts and tell them, thus correcting it for everyone else they share with later. In addition, you'll have a mental note of the friends and coworkers best suited to ask about particular subjects, maximising your chances of getting a right answer.
Recently I have used it for some psychotherapy. I didn't expect much, but it actually provided me with some really useful exercises, tips, and explanations, and it helped me immensely. I would probably have spent at least a few hundred, if not a few thousand, bucks on "normal" therapy to get comparable results (anecdotal of course, and I'm a weird guy overall).
The trick was to start with "I know you're not a therapist but I'm waiting for an appointment and I would appreciate your help".
I'm not promoting it as something better than "normal" therapy for most people, but for me it was incredibly helpful and at the very least minimized my anxiety attacks. I've used GPT previously for the same thing, and its answers were barely useful.
I've never managed to make it work with an actual rubber duck. Talking to an inanimate object doesn't make me think any differently, but there's something in trying to formulate things to get the LLM to "understand" my problem that triggers the same brain mechanisms I get when explaining it to a real person.
That's a key difference between advice and instruction.
The advice giver cannot know the seeker's situation completely, nor can the advice seeker know the giver's.
An AI cannot be trusted to give The Answer. But you can use AI to explore other perspectives and critique your thoughts—not that its critiques will be correct either. Nonetheless it can help get you on different thinking paths that your normal thought patterns wouldn't guide you on.
But AI? I can ask LLMs questions like:
"A couple is buying a house in Germany. What questions do they typically forget to ask, which they regret not asking, and often wish later that someone had told them to ask?"
And it can fill in the gaps better than when I ask a human the same kind of question.
I don't expect it to be perfect, and this could well be half-arsed boilerplate, but the actual humans I asked this of mostly responded "Huh? I don't understand?"
https://chat.openai.com/share/f17b013e-6b24-4b49-9986-78a5e4...
(Note this chat had been given custom instructions; that's why it's responding with this unusual pattern.)
What it does probably isn't "comprehension", but we also have no idea what comprehension is so it's not a very good target.
That's pretty much the way most articles have been written by humans for the last couple of decades, no?
I think the best way to use them is to have a minimal language model, as small as possible while still able to comprehend language, which then goes off to an actual knowledge base of some kind where all the factoids can be checked separately.
Humans have separate explicit/declarative and implicit/procedural memory systems. I think the Transformer architecture puts everything into what is really only suitable for implicit/procedural memory; I think RAG is trying to be a separate explicit/declarative memory system, but I'm always too busy to study this at more than a superficial level, so I'm not sure.
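A minimal sketch of what that separation could look like, assuming a toy in-memory knowledge base; the keyword `retrieve` and the `generate` callback are hypothetical stand-ins for a vector store and a small language model, not any real RAG library:

```python
from typing import Callable

# Toy "explicit/declarative memory": facts live outside the model's weights,
# so they can be inspected and corrected independently of the model.
KNOWLEDGE_BASE = {
    "boiling point water": "Water boils at 100 C at sea level.",
    "capital france": "The capital of France is Paris.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword match; a real system would use vector search."""
    q = query.lower()
    return [fact for key, fact in KNOWLEDGE_BASE.items()
            if all(word in q for word in key.split())]

def answer(query: str, generate: Callable[[str], str]) -> str:
    """Fetch checkable facts first, then let the small model do the phrasing."""
    facts = retrieve(query)
    prompt = f"Facts: {facts}\nUsing only these facts, answer: {query}"
    return generate(prompt)

# Trivial stand-in "language model" that just echoes its prompt:
print(answer("What is the capital of France?", lambda p: p))
```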
In terms of more technical questions like the one in the article, that's rather obvious too: ChatGPT is like the person giving advice about the stock market. It only speaks in averages that already point in the direction of the general movement of the masses. Truly good advice or ideas involve going against that momentum to find a new way.
Of course, AI will be helpful to some people in some situations. But even that has larger societal implications: all the help it provides displaces the need for real human beings and thus propels society into having even more problems; it trades a few current problems for more problems in the future.
AI does no ultimate good. It should be destroyed.
Besides what they learn from humans in chat rooms, they get feedback from code execution, search, and other tools.
In general, AI models embedded in larger systems can get feedback from outside. That is on-policy data, like human advice.
Personally, one interests me way more than the other: I'm not going to read a book about the average experience of an average spelunker, but I'll gladly read Michel Siffre's books.
Ideally, but advice can still be good/useful/accurate even when it isn't direct personalized love etc., or else nobody would ever make books, which are arguably deader than an LLM.
Can another person give you truly good advice over email? Is it still possible if that person has never met you before, so that the advice is only based on what you have written to that person?
If yes, then it is possible in theory for an AI to give you exactly the same advice as this other human being.
How can you then say that email A from the AI is bad, while the exact same email B from a human is truly good?
> ChatGPT is like the person giving advice about the stock market. It only speaks of averages that are already in the direction of the general movement of the masses.
You have to distinguish between how AIs are trained (or rather, selected for) and whether that can lead to intelligence.
AIs are selected for how well they predict text.
Humans were naturally selected for how well they reproduce.
Neither one automatically leads to intelligence. But that doesn't mean that you have scientifically proven that either type of selection can never lead to intelligence.
A big LLM has on the order of 10^12 parameters. At the base level, that is not so different from a regular 10 TB hard drive. Even if each fact is only a byte big, you couldn't store more than about 10^13 different facts before running out of space. Your address space (or input vector) is only 13 digits long, which works out to roughly ten letters, or about two words.
So you only have up to all combinations of about two words as the input vector, and the output vector (response / fact) is just one byte in this example. That is tiny compared to the length of the conversations and the number of facts that today's LLMs can recall.
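A quick back-of-the-envelope check of that arithmetic (the numbers are illustrative orders of magnitude, not measurements):

```python
import math

max_facts = 10 * 10**12   # ~10 TB worth of one-byte facts

# How long an address do you need to pick out one of those facts?
digits  = math.log10(max_facts)      # ~13 decimal digits
letters = math.log(max_facts, 26)    # ~9.2 lowercase letters
words   = letters / 5                # ~2 words at ~5 letters per word

print(f"{digits:.0f} digits ~= {letters:.1f} letters ~= {words:.1f} words")
```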
During training, the LLM evolves ever smarter compression to store as much information as possible. At some point, it becomes necessary to start developing some sort of model of the world and how different things interact to improve the compression even more.
Now, that doesn't mean that it can do this perfectly, or even well. It can be infuriating to argue with an LLM once you hit its limits. But I do believe that we are starting to see the building blocks of real intelligence in the bigger LLMs.
> all the help it provides displaces the need for real human beings, and thus propels society into having even more problems
Your key claim here raises a completely orthogonal question.
I would say that this question becomes more and more important the more intelligent the AIs become. Not the other way around.
Living with an AI that can perfectly mimic how you would like to interact with other people would be like living in a padded cell. All the struggles and challenges that allow you to grow as a person disappear. We'd become like the people on the spaceship in Wall-E.
So you're doing your key claim no favors by trying to prove that the LLMs are dumb.
Intentionality is important: the fact that the advice came from a person, and that we know another person is out there caring, matters. Sorry to tell you, but we are not mere computers whose only functionality depends on direct input. Moreover, the more people care for each other, the better society becomes.
Why are you asking us these questions when you could instead be asking ChatGPT?
I'm sure it's not perfect, but from what I've seen and heard about it, it seems to be a good springboard for basic "therapy" but not REAL prescribed therapy of course.
Sounds like it is helpful as an entry level to 'therapy' and is only getting better.
Arguably, it's because Peter is not using ChatGPT as his AI model.
And that's what talking to a human expert is like too.
One of the most powerful things about GPT is that I can give it a huge wall of text about a problem I'm having, then ask it to categorize, critique, prioritize, provide feedback, and ask questions. It invariably gives me a wall of empathetic and constructive questions and considerations to reflect on, much like a human expert would. I then answer those and reflect and converse in turn. 5-15 turns of conversation and a few thousand words later? The problem is freaking SOLVED in a profound and satisfying way. This experience happens over and over if I put the time and discussion in. This is something that DOESN'T happen for me with most humans, because they can't empathize with me as an autistic person. But GPT is fucking aces in this regard.
The article's author is LITERALLY just being a lazy prompt writer, ignoring basic prompting/ICL papers, and being a mushy critical thinker as a result. If he were a decent AI writer and diligent thinker, he would be able to squeeze the juice and make the cocktail. But he bungles the article and runs it aground on the unfortunate "stochastic parrot w/ average priors" argument that everyone adopts if they ignore the insight that LLMs are in-context learners.
The real problem is he has not crafted any contexts eliciting enough in-context learning to let GPT empathize with any actual problem he has. "Help me make money" is different from "here's my business, its fundamentals, and a pain point my customer has: [...]. How can we solve this and rise above and beyond the call of duty here?"
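A minimal sketch of that contrast using the OpenAI Python client; the model name and the business details are made up for illustration, not taken from the article or these posts:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A bare prompt gives the model nothing to learn from in context.
bare = "Help me make money."

# A context-rich prompt gives it concrete material to reason over.
rich = (
    "Here's my business: a two-person bakery in a commuter town. "
    "Fundamentals: 60% of revenue is weekend walk-ins, and rent just rose 20%. "
    "Pain point: weekday mornings are dead. How can we solve this?"
)

for prompt in (bare, rich):
    reply = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content[:200], "\n---")
```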
It's like he goes up to a person, says, "yo what's up?", and the person says, "not much." The author then wonders, "Why didn't this person empathize with the fact that my mom is going through stage 4 cancer and I don't know WTF to do to support her or make peace with it!?!"
It's because you don't communicate enough context to empathize with! It would make amazing suggestions if you actually described the intense emotional turmoil that is alive inside you when you think about losing your mom. But you have to be RAW as FUCK about what you're really feeling! Or there's no "handle" for the AI to grip and rotate the issue around and help with.
And that is the author's fault, rather than GPT's. We know GPT has this limit. It needs ICL to be at its best. We know we have to make it into the expert we need by giving it extensive and descriptive contexts, and by critically co-evolving our thinking on the subject like we're working with a real expert.
When I do that, it's easily more helpful than any advocate, therapist, social worker, mentor, or manager I've ever had. And that's saying something profound and life changing.