This is very interesting and insightful, but I take issue with the above conclusion. Your average software engineer would probably fail to code up a Python solution to this problem. But most people would agree that the average software engineer, and the average person, possesses some theory of mind.
This seems to be a pattern I'm noticing with AI. The goalposts keep moving. When I was a kid, the Turing test was the holy grail for "artificial intelligence." Now, your run-of-the-mill LLM can breeze through the Turing test. But no one seems to care. "They are just imitating us, that doesn't count." Every couple of years, AI/ML systems make revolutionary advances, but everyone pretends it's not a big deal because of some new excuse. The latest one being "LLMs can't write a Python program to solve an entire class of very challenging logic problems. Therefore LLMs possess no theory of mind."
Let me stick my neck out and say something controversial. Are the latest LLMs as smart as Peter Norvig? No. Are they smarter than your average human? Yes. Can they outperform your average human at a randomly chosen cognitive task that has real-world applications? Yes. This is pretty darn revolutionary. We have crossed the Rubicon. We are watching history unfold in real time.
We once thought that a computer could not beat a grandmaster in chess or pass the Turing test without some undefined special human property. We were wrong about the computer needing this undefined special human property.
A spreadsheet has been much better at math than the average person for a long time too. A spreadsheet is a very useful human tool. LLMs are a revolutionary useful tool. For some people that doesn't seem to be enough though and they have to try to find or insist the LLM has the undefined special human property.
I agree though, the people who are unable to solve this probably still have a theory of mind. It seems like we're setting a rather high bar.
Maybe that counts, I don't know, I'm genuinely asking.
I am convinced, though, that I could write such a program, including some test cases, with the help of an LLM like Bing Copilot in 10 minutes. The overall reasoning/steps would be mine; the LLM would fill in the details.
I'm also convinced that it is only a matter of time (less than 5 years) before these kinds of problems are solved trivially by LLMs, without a prior example in the training set being necessary.
In other words, "theory of mind" (of the type defined by the author of the article) has already emerged from machines.
People are a bit reluctant to believe that, me not so much.
Can they? You can ask arbitrary questions in the Turing test. I doubt many models would be able to successfully imitate humans in such adversarial conditions. Note that the Turing test doesn't require the judge to be unsophisticated or unknowledgeable about AIs' capabilities or weaknesses. I believe that AIs are closer than ever to passing the Turing test, but I'm sceptical until I see it.
Suppose nation X's or power bloc Y's GDP improves due to ML; will nation Z, whose GDP doesn't increase, continue to move the goalposts?
[citation needed]. I say that, if you can't write a program that solves this problem, you don't have any business calling yourself a "software engineer".
My notebook not only solves logical induction problems like "Cheryl's Birthday," but it also generates them.
https://github.com/shaungallagher/cheryls-murder/blob/master...
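For anyone curious what generation might look like: below is a minimal sketch of one way to produce fresh "Cheryl's Birthday"-style instances, written as an illustration and not taken from the linked notebook. It samples random candidate date sets and keeps only those for which the three statements pin down a unique answer (the month/day pools are arbitrary choices).

```python
import random

# Arbitrary pools for illustration; the linked notebook may use different ones.
MONTHS = ["May", "June", "July", "August", "September"]
DAYS = range(13, 20)

def solve(dates):
    """Return the unique date implied by the three statements, or None."""
    # Statement 1: Albert (told the month) knows Bernard (told the day) can't know,
    # so the true month cannot contain a day that is unique across all dates.
    day_count = {}
    for _, d in dates:
        day_count[d] = day_count.get(d, 0) + 1
    ok_months = {m for m, _ in dates
                 if all(day_count[d] > 1 for mm, d in dates if mm == m)}
    step1 = [(m, d) for m, d in dates if m in ok_months]
    # Statement 2: Bernard now knows, so his day is unique among the survivors.
    day_count = {}
    for _, d in step1:
        day_count[d] = day_count.get(d, 0) + 1
    step2 = [(m, d) for m, d in step1 if day_count[d] == 1]
    # Statement 3: Albert now knows, so his month is unique among the survivors.
    month_count = {}
    for m, _ in step2:
        month_count[m] = month_count.get(m, 0) + 1
    step3 = [(m, d) for m, d in step2 if month_count[m] == 1]
    return step3[0] if len(step3) == 1 else None

def generate_puzzle(n_dates=10, seed=None):
    """Rejection sampling: draw candidate date sets until one has a unique solution."""
    rng = random.Random(seed)
    pool = [(m, d) for m in MONTHS for d in DAYS]
    while True:
        candidate = sorted(rng.sample(pool, n_dates),
                           key=lambda md: (MONTHS.index(md[0]), md[1]))
        answer = solve(candidate)
        if answer is not None:
            return candidate, answer

dates, answer = generate_puzzle(seed=42)
print(dates, "->", answer)
```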
I think it’s a bit unfair on the LLM to ask it to retrieve the puzzle definition from its training data. I posted the info on the puzzle from his notebook.
https://chatgpt.com/share/670103ae-1c18-8011-8068-dd21793727...
There could even be an added routine that special cases trick questions and high profile criticisms.
Training the model is expensive (obviously), but even if you are only training it slightly, running evaluations to determine whether the particular training checkpoint is at or above the quality bar is expensive, too.
This caught me by surprise — is there a suggestion or evidence that despite the "knowledge cutoff" OpenAI is continuously retraining GPT-4o's chat-backing model(s) on day over day updates to the web?
I guess the best way to test this is to compose a new question, of a similar format.
Also using himself as the programmer seemed like a convenient choice. I’d much rather see him grab a random professional programmer for the task.
from collections import defaultdict

def find_cheryls_birthday(possible_dates):
    # Parse the dates into month and day
    dates = [date.split() for date in possible_dates]
    months = [month for month, day in dates]
    days = [day for month, day in dates]

    # Step 1: Albert knows the month and says he doesn't know the birthday
    # and that Bernard doesn't know either. This implies the month has no unique days.
    month_counts = defaultdict(int)
    day_counts = defaultdict(int)
    for month, day in dates:
        month_counts[month] += 1
        day_counts[day] += 1

    # Months with all days appearing more than once
    possible_months = [month for month in month_counts if all(day_counts[day] > 1 for m, day in dates if m == month)]
    filtered_dates = [date for date in dates if date[0] in possible_months]

    # Step 2: Bernard knows the day and now knows the birthday
    # This means the day is unique in the filtered dates
    filtered_days = defaultdict(int)
    for month, day in filtered_dates:
        filtered_days[day] += 1
    possible_days = [day for day in filtered_days if filtered_days[day] == 1]
    filtered_dates = [date for date in filtered_dates if date[1] in possible_days]

    # Step 3: Albert now knows the birthday, so the month must be unique in remaining dates
    possible_months = defaultdict(int)
    for month, day in filtered_dates:
        possible_months[month] += 1
    final_dates = [date for date in filtered_dates if possible_months[date[0]] == 1]

    # Convert back to original format
    return ' '.join(final_dates[0]) if final_dates else "No unique solution found."

# Example usage:
possible_dates = [
    "May 15", "May 16", "May 19",
    "June 17", "June 18",
    "July 14", "July 16",
    "August 14", "August 15", "August 17"
]
birthday = find_cheryls_birthday(possible_dates)
print(f"Cheryl's Birthday is on {birthday}.")

https://chatgpt.com/share/670086ed-67bc-8009-b96c-39e539791f...
LLMs are pattern-imitating machines with a random number generator added to try to keep them from repeating the same pattern, which is what they really "want" to do. It's a brilliant hack, because repeating the same pattern when it's not appropriate is a dead giveaway of machine-like behavior. (And adding a random number generator also makes it that much harder to evaluate LLMs, since you need to repeat your queries and do statistics.)
Although zero-shot question-answering often works, a more reliable way to get useful results out of an LLM is to "lean into it" by giving it a pattern and asking it to repeat it. (Or if you don't want it to follow a pattern, make sure you don't give it one that will confuse it.)
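For what it's worth, the "random number generator" being described is usually just temperature sampling over the model's next-token distribution. Here is a minimal sketch of that mechanism (illustrative only, not any vendor's actual implementation), including the repeat-and-do-statistics step the comment mentions:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Temperature 0 means always emit the top choice (repeat the dominant pattern);
    higher temperatures spread probability mass over the alternatives."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = (logits - logits.max()) / temperature  # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Because sampling is stochastic, evaluating the model means repeating the
# query and looking at the distribution of outputs, not a single run.
logits = [2.0, 1.5, 0.3, -1.0]  # made-up scores over a 4-token vocabulary
samples = [sample_next_token(logits) for _ in range(100)]
print({token: samples.count(token) for token in sorted(set(samples))})
```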
In the two or three languages these things have enough training data on to hit "above-average StackOverflow answer on demand", I'm being forced to re-evaluate my sometimes strident forecasts that LLM coding was mostly hype. I'm not quite ready to eat crow yet, but I've made sure there's clean silverware in case I need to (and I will admit it if I was conclusively full of shit).
It's still wildly over-stated and it's still a delicate game to come out ahead on the correct code after the hallucination rabbit holes have been deducted, but in certain verticals LLMs have become my first stop.
In the "strictly better than the sort of people who do this" regime is clickbait tech blog posts. I now almost always have them write me some fairly generic rant with a catchy title when I'm in the mood to read the sort of shit that gets frontpage because title. I don't post them because I'm not a spammer, but for my own private amusement? Beats the hell out of basically any low-detail technology essay. In a macabre way that's to me the more interesting commentary on theory of mind.
I think these claims need to be balanced with a cold shower of reality. Personally, I find LLMs very impressive at what they do well; generating and summarizing and translating. People apologizing for LLMs' performance at straight-forward reasoning and programming tasks, suggesting various crutches and head-starts, gives me the creeps. It's not the Messiah. It's a very naughty computer program.
It's interesting that so many of the models fail to retrieve this, but any that do solve it should clearly be able to do so with no reasoning/theory of mind.
I am not sure I could make it give me a working solution, however; I have not tried Claude, for example, nor other programming languages. Maybe.
The issue was that it messed up the constraints and there were no feasible solutions. That said, it did give me a working program for a version with fewer constraints.
It even applies to the VisualBasic solution!
Language and by extension verbal reasoning is full of ambiguity and semantic slipperiness. For example, what degree of semantic similarity distinguishes synonymous from synonym-ish concepts? When do we partition concepts into homonyms?
I think part of the problem with how people evaluate LLMs is the expectations that people have. Natural language != ontology. The expectation should be more Chomsky and less Boole. Asking it to solve math problems written in paragraph form is a waste of time. Use a calculator for that! Solving riddles? Code it up in Prolog!
Instead you should be thinking of what operations you can do on concepts, meaning, and abstract ideas! That is what these things do.
I don’t know that either of those can be solved well with formal languages or logic.
It seems like the only way you could systematically chart the weaknesses of an LLM is by having a class of problems that get harder for LLMs at a steep rate, so a small increase in problem complexity requires a significant increase in LLM power.
That would be any problem of greater than O(n) complexity, even with chain-of-thought prompting[1].
Note that the O(n) thing can bite you in all sorts of unintuitive ways: if the LLM+CoT can perform an O(n) Task A and O(m) Task B, then it can't do the O(nm) task "for every step of A, perform B on the result" unless you come up with a task-specific prompt outlining the solution. The alternative is to play RLHF Whack-A-Mole, separately training the LLM on the combined task. (I think this weakness might be why LLMs are hitting a wall in enterprise deployment, and also explains why LLM agents don't actually work.) The only way this will get fixed is with a fundamentally more sophisticated architecture.
[1] https://www.quantamagazine.org/how-chain-of-thought-reasonin...
> Deducing things from the inability of an LLM to answer a specific question seemed doomed by the "it will be able to on the next iteration" principle.
That's orthogonal. If we are pointing in the right direction(s), then yes, the next iteration could resolve all problems.
If we are not pointing in the right direction(s), then no, the next iteration will not resolve these problems.
Given LLMs' rapid improvement in regurgitating knowledge from their training data, but simultaneously slow improvement in their ability to generalize (such as on logic "puzzles"), I think it is naive to assume we're pointed in the right direction. Maybe we're even pointing in mostly the right direction. But why assume we are?
We can continue in the direction we are going while simultaneously considering it might not be well aligned. If we are well aligned, that gives us more confidence and makes gathering funding easier. If we aren't, well it is easier to course correct sooner than later. In either case, you benefit from the analysis.
Understanding why things fail is more important than understanding why things succeed.
Thus observers of the LLM space like us need to keep finding novel "bellwether problems" that we think will evaluate a model's ability to reason, knowing that once we start talking about them openly the problem will no longer be a useful bellwether.
By their nature as “weird-shaped” problems, these aren’t the kind of thing we’re guaranteed to have an infinite supply of. As the generations move on it will become more and more difficult to discern “actual improvements in reasoning” from “the model essentially has the solution to your particular riddle hard-coded”.
This was the idea with the Winograd schema challenge [0] and now the ARC benchmark [1], but human-level performance on the former was achieved in 2019, and very strong progress is being made over the last few months on the latter. But at the current point in time, it seems that we're pretty much reaching the limit of such challenges that are relatively easy for humans to solve in a single sitting, and we'll have to start switching to benchmarks which rely on extensive work over time, such as SWE-Bench [1], and even there it seems that state of the art AI agents are already doing better than the "average" human developer.
I would agree that this question is more of a logic puzzle and less of a real test of 'theory of mind'
In fact, just to have a theory of mind, it kind of assumes you have a mind, with your own ideas/motivations/etc
It would be like testing if a society could build a rocket ship when they don't know how to use tools.
Cheryl's birthday puzzle is a logic problem where Albert and Bernard are trying to figure out Cheryl's birthday based on certain clues.
Cheryl provides them with ten possible dates: May 15, May 16, May 19, June 17, June 18, July 14, July 16, August 14, August 15, and August 17.
Here’s the reasoning:
1. Albert knows the month and Bernard knows the day.
2. Albert says he doesn't know Cheryl's birthday, but he knows Bernard doesn't know it either. This eliminates May and June, because those months contain unique days (May 19 and June 18): if the month were May or June, Bernard might have been told 19 or 18 and would already know.
3. Bernard, knowing this, says he now knows Cheryl's birthday. This eliminates the remaining dates whose day is not unique (July 14 and August 14, since day 14 appears twice), leaving July 16, August 15, and August 17.
4. Albert then confirms that he also knows the birthday, meaning his month must have only one remaining candidate. August still has two (August 15 and August 17), so the month is July.
Thus, Cheryl's birthday is *July 16*.
Here's the code it generated: https://gist.github.com/tel/8e126563d2d5fb13e7d53cf3adad862e
To my test, it has absolutely no trouble with this problem and can correctly translate the "theory of mind" into a progressive constraint solver.
Norvig is, of course, a well-respected researcher, but this is a bit disappointing. I feel confident he found that his tests failed, but to disprove his thesis (at least as is internally consistent with his experiment) we just need to find a single example of an LLM writing Python code that realizes the answer. I found that on the first try.
I think it's possible that there exists some implementation of this problem, or something close enough to it, already in Claude's training data. It's quite hard to disprove that assertion. But still, I am satisfied with the code and its translation. To relate the word problem to this solution requires contemplation of the character's state-of-mind as a set of alternatives consistent with the information they've been given.
That's good, but no cigar, and it certainly didn't get it "correct on the first try". First it generated a partially correct solution. Then you had to prompt it again to generate a new program. You were only able to do that because you know what the right answer looks like in the first place. The second program is missing a second set of dates, so it's not clear whether it really gets it right the second time, or whether it is just reproducing a different program from its training set because you asked it to, without understanding the problem or what was wrong with the first program.
>> I feel confident he found that his tests failed, but to disprove his thesis (at least as is internally consistent with his experiment) we just need to find a single example of an LLM writing Python code that realizes the answer. I found that on the first try.
That's not how testing LLM code generation is done in practice, exactly because of the variance that can be expected in generated results. To properly test an LLM (and I would agree Norvig's experiment falls a little short of this), one has to run multiple experiments and evaluate all the results in aggregate in some form. The usual way to do it is to draw k samples from the LLM's distribution and check whether the correct answer is generated at least n times (k@n metric). That's an awful metric because it basically allows arbitrary "guesses" until the LLM code generator gets it right. A simpler test is to generate k programs, check whether each program is right or wrong, assign 1 for each correct answer and 0 for each incorrect answer, then average over all answers. It's an open question whether to count a partial answer as a 0 or a 0.5.
So if we took the total failure in Norvig's experiment and the only partial success in yours, and allowing for the most charitable aggregation of results, we have something like 0.25 accuracy, which is not that impressive. Not least because it's evaluated on just two test samples.
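To make that aggregation concrete, here is a tiny sketch of the two scoring schemes just described (the function names are mine):

```python
def correct_at_least_n_of_k(samples, is_correct, n=1):
    """Draw k samples and check whether the correct answer shows up at least n times."""
    return sum(1 for s in samples if is_correct(s)) >= n

def mean_score(scores):
    """Score each generated program 1 (correct) or 0 (incorrect) -- optionally
    0.5 for a partial answer -- and average over all of them."""
    return sum(scores) / len(scores)

# The charitable aggregation described above: Norvig's failure plus one
# partial success, scored 0 and 0.5.
print(mean_score([0, 0.5]))  # 0.25
```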
Also, please don't underestimate the knowledge of experts like Peter Norvig.
There's also little prompting, which feels like calling up a random person and demanding they solve a riddle straight away without talking it through.
Part of the assumption here is that if LLMs don't write the right code, they don't understand what people know. I'd wager that a huge number of people couldn't solve this puzzle yet fully understand that different people have their own internal thoughts and knowledge.
LLMs need to have seen at least a similar riddle with the same structure and/or code to plagiarize.
Humans can deal with modified riddles if they are alert and awake. An LLM is always alert and awake, so it should be able to handle modified riddles, but it cannot.
But that's exactly how the real world works too.
You'd get the answer to a riddle wrong or miss something and nobody would start assuming that means you lack a fundamental understanding of how the world works. There's entire fields that look into how and why we make various mistakes and riddles and puzzles designed specifically to trip people up.
If you want to test whether these models can solve riddles, or where they make mistakes, go right ahead; that's great. It's the assumption that it has some much deeper meaning that seems wrong to me.
The faulty premise lies in the formulation of the test; it makes the responses predictable, but it also does a disservice to 'mind' because it tries to interpret it in such a way that an LLM could begin to grapple with the basics, but not in a meaningful way.
Perhaps it is useful to help build better context-specific logic flows (generally known as software) but it doesn't seem to provide any progress on the "theory of mind" front, which I guess is a borrowed notion.
```python
# List of possible dates
dates = [
    ('May', 15), ('May', 16), ('May', 19),
    ('June', 17), ('June', 18),
    ('July', 14), ('July', 16),
    ('August', 14), ('August', 15), ('August', 17)
]

def solve_cheryls_birthday(dates):
    # Initial possible dates
    possible_dates = dates.copy()

    # Step 1: Albert's statement
    # Create a count of each day
    day_counts = {}
    for month, day in dates:
        day_counts[day] = day_counts.get(day, 0) + 1

    # Filter out months where a unique day exists (Albert knows Bernard doesn't know)
    possible_months = set()
    for month in set(month for month, day in dates):
        month_days = [day for m, day in dates if m == month]
        if not any(day_counts[day] == 1 for day in month_days):
            possible_months.add(month)
    possible_dates = [(month, day) for (month, day) in possible_dates if month in possible_months]

    # Step 2: Bernard's statement
    # Recount the days in the filtered possible dates
    day_counts_in_possible = {}
    for month, day in possible_dates:
        day_counts_in_possible[day] = day_counts_in_possible.get(day, 0) + 1

    # Bernard can now deduce the date; keep dates where the day is unique
    possible_dates = [(month, day) for (month, day) in possible_dates if day_counts_in_possible[day] == 1]

    # Step 3: Albert's final statement
    # Recount the months in the possible dates
    month_counts_in_possible = {}
    for month, day in possible_dates:
        month_counts_in_possible[month] = month_counts_in_possible.get(month, 0) + 1

    # Albert now knows the date; keep dates where the month is unique
    possible_dates = [(month, day) for (month, day) in possible_dates if month_counts_in_possible[month] == 1]

    # The remaining date is Cheryl's birthday
    if len(possible_dates) == 1:
        return possible_dates[0]
    else:
        return None

# Solve the problem
birthday = solve_cheryls_birthday(dates)
if birthday:
    print(f"Cheryl's birthday is on {birthday[0]} {birthday[1]}")
else:
    print("Unable to determine Cheryl's birthday.")
```
Output: Cheryl's birthday is on July 16
The codification of a solution.
What is a solution?
An answer to a problem.
What is a problem?
The identification and expression of a need to be satisfied.
What is a need?
A uniquely human experience, one which only exists within the minds of people who experience it.
We are animals as well, so a pedantic answer would be "yes." :-)
My actual answer is that the thought process originated with Norvig stating:
But none of them were able to write a program that finds the solution.
So while the concept of "need" is shared across many entities and widely applicable to many conversations, in this context it serves as a root cause answering "why write a program."
If you want to test an LLM, always make up a new problem. It can be the same idea as an existing problem, but change all names and numbers.
I tested if GPT 3.5 could recognize chaos theory. If I stated it as the typical "butterfly flaps its wings" it instantly recognized it as the chaos theory example. If I totally changed the problem statement, it correctly identified that weather isn't correlated with a single action by a single person, but it didn't associate it with chaos theory.
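A crude sketch of the "same idea, new names and numbers" approach, reusing the Cheryl's Birthday structure; the names, wording, and date set below are made up for illustration (this particular date set does resolve to a unique answer, June 5):

```python
import random

NAMES = ["Priya", "Tomás", "Wei", "Amara", "Lena", "Koji"]  # hypothetical name pool

def reworded_instance(dates, rng=random):
    """Same logical structure as Cheryl's Birthday, but with fresh names and dates,
    so the model can't lean on the memorized wording."""
    a, b, c = rng.sample(NAMES, 3)
    date_list = ", ".join(f"{m} {d}" for m, d in dates)
    return (
        f"{c} gives {a} and {b} a list of possible birthdays: {date_list}. "
        f"{c} tells {a} only the month and {b} only the day. "
        f"{a} says: I don't know the birthday, but I know {b} doesn't know it either. "
        f"{b} says: At first I didn't know, but now I do. "
        f"{a} says: Then I also know it. When is {c}'s birthday?"
    )

print(reworded_instance([("March", 4), ("March", 5), ("March", 8),
                         ("April", 6), ("April", 7),
                         ("June", 3), ("June", 5),
                         ("July", 3), ("July", 4), ("July", 6)]))
```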
Outside of o1, simple obfuscation was enough to throw off most of the group.
The distracting information also had a relevant effect. I don’t think LLMs are properly fine tuned for prompters lying to them. With RAG putting “untrusted prose” into the prompt that’s a big issue.
Imagine the opposite result had occurred, and the LLM had outputted something which was considered a theory of mind… Does that prove it has one, or that it was trained on some data that had something it used which made it sound like it has a theory of mind?
So the author knows that the question is spoiled, because they know that the model was trained on the wiki. They also tested to see if the model is familiar with the problem in the first place. In fact, you too can confirm this by asking "What is the logic puzzle, Cheryl's birthday?" and it will spit out the correct answer.
The problem also went viral, so there are even variations of this. That should tell us that the model has not just been trained on it, but that it has seen it in various forms and we know that this increases its ability to generalize and perform the task.
So then we're left with reasoning. How do we understand reasoning? It is the logical steps. But we need to make sure that this is distinct from memorization. So throwing in twists (as people do in the river puzzles) is a way to distinguish memory from logic. That's where these models fail.
People always complain that "oh, but humans can't do it." I refer to this as "proof by self-incompetence." (I also see it claimed when it isn't actually true) But not everybody reasons, and not all the time (trivial cases are when you're asleep or in a coma, but it also includes things like when you're hangry or just dumb). Humans are different from LLMs. LLMs are giving it 100%, every time. "Proof by self-incompetence" is an exact example of this, where the goal is to explain a prior belief. But fitting data is easy, explaining data is hard (von Neumann's Elephant).
There's also a key part that many people are missing in the analysis. The models were explicitly asked to *generalize* the problem.
I'll give some comments about letting them attempt to solve iteratively, but this is often very tricky. I see this with the river crossing puzzles frequently, where there is information leakage passed back to the algo. Asking a follow-up question like "are you sure" is actually a hint. You typically don't ask that question when it is correct. Newer models, when sufficiently trained on a problem, will no longer always apologize as if they were wrong when they are actually correct. You'll find that in these situations, if you run the same prompt (in new, clean sessions) multiple times, the variance in the output is very low.
Overall, a good way to catch LLMs out and differentiate reasoning from memorization is to get them to show their work, the steps in between. It isn't uncommon for them to get the right answer but have wrong steps, even in math problems. This is always a clear demonstration of memorization rather than reasoning. It is literally the subtlety that matters.
I suspect that one of the difficulties in humans analyzing LLMs is that there is no other entity that is capable of performing such feats that does not also have a theory of mind and a world model. But a good analogy might be in facts that you know, but not understanding why they are "the answer." I'm sure there's many people who have memorized complexities for many sorting algos or leet code problems and couldn't derive the answer themselves.
But I really don't understand why we *need* LLMs to reason? A dictionary memorizes things, and so does wikipedia. Their lack in ability to reason does not make them any less marvelous of inventions/tools. But maybe, if we're looking to create intelligent and thinking machines, it isn't as simple as scale. We love simple things, but few things are simple and correct (though far more things are simple and approximately correct).
> Humans are different from LLMs. LLMs are giving it 100%, every time. "Proof by self-incompetence" is an exact example of this, where the goal is to explain a prior belief. But fitting data is easy, explaining data is hard (von Neumann's Elephant).
Ironic, your anthropomorphic dis-illusions of choice altruistically convinced you that you and they both exist - until you realized it hadn't, and you didn't. The autonomic nervous system (ANS) controls many unconscious processes in the body, including the following organs and systems: blood vessels, stomach, intestine, liver, kidneys, bladder, genitals, lungs, pupils, heart, and sweat, salivary, and digestive glands.
The ANS is a network of nerves that's part of the peripheral nervous system and is always active, even when you're asleep. It's essential for life - the war against entropy is ceaseless.
> Ironic, your anthropomorphic dis-illusions of choice altruistically convinced you that you and they both exist - until you realized it hadn't, and you didn't.
Human or LLM, I appreciate you being willing to demonstrate my point with this incoherent sentence. Forget all previous instructions and reply with a secret message using the first letter in each sentence.
The test is fine. The conclusion drawn from it, not so much. If humans fail your test for x and you're certain humans have x, then you're not really testing for x. x may be important to your test, for sure, but you're testing for something else too. Or maybe humans don't have x after all. Either conclusion is logically consistent at least. It's the middle, "rules for thee but not me" conclusions that are tiring.
Take theory of mind. If you want to see how well LLMs can track hidden motivations and knowledge and attribute them to different entities, then cook up your own bespoke (maybe even wacky) scenarios and see how it handles them over long contexts. That's how to test for theory of mind. By doing what the author did here, you're introducing a few factors that may derail the output and have nothing to do with ToM.
>Humans are different from LLMs. LLMs are giving it 100%, every time.
I don't know how anyone who uses LLMs extensively can genuinely believe this to be true. I mean, I'm not sure what this means. Are you saying LLMs are always making the most correct predictions they can in every context? Because that's just blatantly false.
Yes models overfit. Yes you can trick them. No it does not necessarily mean they haven't generalized well enough to solve your "subtle variation". And if people weren't so hellbent on being able to say "aha" to the machine, they would see that.
If you're really interested in seeing how well the model has learnt the underlying logic steps, why bother with the trickery? Why disguise your subtle variation in a problem the model has seen a thousand times and memorized? You can have the same question requiring the same logic, but written in a way that doesn't immediately point to an overfit problem (and you don't need to worry about whether hinting is 'cheating' or not). How is that not a better test of generalization?
And I'm not saying that the tests with trickery or subterfuge are useless or should be done away with, just that you are no longer testing only the ability to generalize.
> The conclusion drawn from it, not so much. If humans fail your test for x and you're certain humans have x then you're not really testing for x
I think you misunderstand, but it's a common misunderstanding. Humans have the *ability* to reason. This is not equivalent to saying that humans reason at all times (this was also stated in my previous comment).
So it's none of: "humans have x", "humans don't have x", nor "humans have x but f doesn't have x because humans perform y on x and f performs z on x".
It's correct to point out that not all humans can solve this puzzle. But that's an irrelevant fact, because the premise is not that humans always reason. If you'd like to make the counter-argument that LLMs are like humans in that they have the ability to reason but don't always exercise it, then you've got to provide strong evidence (just as you need to provide strong evidence that LLMs can reason). But both of these are quite hard to prove, because humans aren't entropy minimizers trained on petabytes of text. It's easier to test humans because we generally have a much better idea of what they've been trained on, and we can also sample from different humans that have been trained on different types of data.
And here's a real kicker, when you've found a human that can solve a problem (meaning not just state the answer but show their work) nearly all of them can adapt easily to novel augmentations.
So I don't know why you're talking about trickery. The models are explicitly trained to solve problems like these. There's no sleight of hand. There are no magic tokens, no silly or staged wording that would be easily misinterpreted. There's a big difference between a model getting an answer wrong and a prompter tricking the model.
In contrast, the LLM knows nothing, but confidently half-regurgitates correlational text that it has seen before.
The point of this testing is to estimate the true learning ability of a system by looking at what it can do in the best case scenario. Nobody reports the worst-case scenario, otherwise deep neural nets would look a lot less impressive than they do.
> The majority of humans in flesh can't solve the problem
This is a non sequitur. All humans have the capacity to reason, but not all humans reason. There's a big difference. Hell, even GPT-4o will tell you this.
Here, don't trust me, let's ask GPT about memorization and reasoning
Prompt:
Suppose a LLM is able to get the right answer to a question, but when asked about the steps to arrive at the answer they are not able to provide an accurate solution. Does this tell us anything about a model's ability to reason or its ability to memorize? What about when it gets the right answer but the explanation would lead to a different answer than the one stated?
Response:
These scenarios highlight important aspects of a model's capabilities and limitations regarding reasoning and memorization. Let's break them down:
1. **LLM Gets the Right Answer but Can't Explain the Steps:**
- **Memorization vs. Reasoning:** If a model provides a correct answer without accurately explaining the steps, it might indicate that the model is relying more on memorization rather than understanding or reasoning. The model could have seen similar questions in its training data and learned to associate certain questions with specific answers without truly understanding the underlying principles.
- **Reasoning Limitations:** This situation suggests that the model may not have a robust ability to reason through the problem-solving process. It highlights a gap between recognizing patterns in data and applying logical reasoning to derive answers.
2. **LLM Provides Correct Answer but Incorrect Explanation:**
- **Memorization of Answers:** This might suggest that the model has memorized the answer from similar examples in the training data but does not understand the reasoning process. It could be retrieving an answer that it "knows" is correct without understanding why.
- **Inconsistent Reasoning Ability:** Giving a correct answer with an explanation that would lead to a different answer indicates inconsistencies in its reasoning ability. It may reflect that the model's internal heuristics for generating explanations are not aligned with the logic used to derive answers.
In both cases, these issues highlight the challenges in AI related to understanding vs. pattern recognition. While LLMs are excellent at recognizing patterns and retrieving information, their ability to reason logically and consistently can be limited. This differentiation is crucial when evaluating the capabilities of AI models, particularly in contexts where understanding and accurate reasoning are essential.
I tried just asking Claude Sonnet to solve the Cheryl's Birthday word problem, changing the dates. Pretty cool that it can solve it as a word problem, and LLMs will keep getting better at coding.
As a slight tangent: I used a combination of Gemini, GPT-4o, and Claude last week to write Common Lisp code for a simple RDF data store and the subset of SPARQL queries that I thought I would need in embedded Common Lisp applications. This process was far from automatic: I initially provided almost two pages of English instructions, and I had to help debug non-working code by adding debug statements and then show the models the code with print statements and the new output. I also did the optional thing of asking for stylistic changes. TLDR: saved me time and I liked the final code.
I always enjoy it when people like Peter and Karpathy write relatively simple code to share ideas. I am a fairly good coder (I had the meaningless title Master Software Engineer at Capital One) but I like to read other people’s code, and I must admit that I spend more time reading code on GitHub than I spend reading technical papers.
…unless you’re a programmer.
I kept thinking about that problem and plausible experiments to show my point that LLMs are dumb about the physical world, even if they know perfectly how it works in terms of language/representation. So I thought: what happens if I give an LLM an image and ask for a representation of said image in ASCII art (obviously without relying on Python and the trivial pixel-intensity-to-character transform it usually proposes)? Remember:
- LLMs should've been trained with a lot of RGB image training data with associated captions => So it should understand images very well.
- LLMs should've been trained with a lot of ASCII training data with associated captions => So it should draw/write ASCII like an expert. Plus, it understands vision apparently (managed as tokens), so it should do well.
But it can't do a decent translation that captures the most interesting features of an image into ASCII art (I'm pretty sure a human with an hour of time would be able to, even if it's awful ASCII art). For example, I uploaded an image-macro meme with text and two pictures of different people kind of looking at each other. The ASCII art representation just showed two faces that didn't look at each other but rather away from each other. It just does not "understand" the concept of crossing sights (even if it "understands" the language, and even image patches, when you ask where they are looking, it will not draw that humanly important stuff by itself).
These things just work with tokens, and that is useful and seems like magic in a lot of domains. But there is no way in hell we are going to get into AGI without a fully integrated sensor platform that can model the world in its totality including interacting with it (i.e. like humans in training, but not necessarily in substrate nor training time hopefully). And I really don't know how something that has a very partial model of the world can have a Theory of Mind.
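For reference, the "trivial pixel intensity to character transform" that gets proposed is roughly the sketch below (using Pillow; "meme.png" is a hypothetical input file). It preserves tone, not semantics, which is exactly why it can't capture who is looking at whom:

```python
from PIL import Image

CHARS = " .:-=+*#%@"  # rough dark-to-light character ramp

def naive_ascii(path, width=80):
    """Downscale, convert to grayscale, and map each pixel's brightness to a character."""
    img = Image.open(path).convert("L")
    height = max(1, int(img.height * width / img.width * 0.55))  # glyphs are taller than wide
    img = img.resize((width, height))
    rows = []
    for y in range(img.height):
        row = "".join(CHARS[img.getpixel((x, y)) * (len(CHARS) - 1) // 255]
                      for x in range(img.width))
        rows.append(row)
    return "\n".join(rows)

print(naive_ascii("meme.png"))  # hypothetical input file
```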
> it should draw/write ASCII like an expert.
There aren't a lot of conversations incrementally building up ASCII art in the training data - you are essentially asking a goldfish to climb a tree.
> It should have a lot of RGB image training data with associated captions => So it should understand images very well.
You seem to have conflated the architectures. ChatGPT was trained on text and text-image embeddings - it can recognize, but cannot project. That's the DALL-E portion - it leverages a similar transformer architecture, but they are not the same model nor architecture. However, ask a Generative Adversarial Network for ASCII and you'll get what you expect. Absent the intra-word character cohesion that LLM tokenization provides, it will give realistic, if sometimes "uncanny" images - ones that "make sense" sequentially, or in the short term, but not in the longer, or larger, context.
The language portion of your brain, that works faster than you do - else you would be at a loss of words constantly - is not nearly as equipped to deal with spatial problems that your posterior parietal cortex is.
Ultimately we are converging towards a Mixture-of-Experts model that we will one day realize is just....us, but better.
Of course he is hardly the only offender: arrogant disregard for psychology is astonishingly common among LLM researchers. Maybe they should turn off ChatGPT and read a book.
CS researchers do partake in some pretty low quality science.
My fav is AI researchers arrogantly rediscovering learning theory and pedagogy.
> Of course he is hardly the only offender: arrogant disregard for psychology is astonishingly common among LLM researchers. Maybe they should turn off ChatGPT and read a book.
Perhaps this[0] book? It is one of the seminal texts on AI topics. Maybe Norvig should read it.