We stand at the edge of a new epoch, with reading being replaced by AI retrieval. There is concern that AI is a crutch and that the youth will be weakened.
My opinion: valid concern. No way to know how it turns out. No indication yet that use of AI is harming business outcomes. The meta argument "AGI will cause massive social change" is probably true.
That people in the past moved away from prideful principle to leverage new tech, and it worked out, doesn't guarantee that the same move in the current context will pan out.
But as you say... we'll see.
Oral tradition compared to writing is clearly less accurate. Speakers can easily misremember details.
Going from writing/documentation/primary sources to AI seems to me like going back to oral tradition, where we must trust the "speaker" - in this case the AI - to be truthful in its interpretation of its sources.
But one can speculate.
> No indication yet that use of AI is harming business outcomes.
The time scales needed to measure harm from policy/technology typically require more time than we've had since LLMs really became prominent.
> The meta argument "AGI will cause massive social change" is probably true.
Agreed.
Basically, in the absence of knowing how something will play out, it is prudent to talk through the expected outcomes and their likelihoods of happening. From there, we can start to build out a risk-adjusted return model to the societal impacts of LLM/AI integration if it continues down the current trajectory.
IMO, I don't see the ROI for society of widespread LLM adoption unless we see serious policy shifts on how they are used and how young people are taught to learn. To the downside, we really run the risk of the next generation having fundamental learning deficiencies/gaps relative to the prior gen. A close anecdote might be how 80s/90s kids are better at troubleshooting technology than the generations that came both before and after them.
https://blogs.worldbank.org/en/education/From-chalkboards-to...
What a sad sentence to read in a discussion about cognitive laziness. I think people should think, not because it improves business outcomes, but because it's a beautiful activity.
Card catalogs in the library. It was really important to focus on what was being searched. Then there was the familiarity with a particular library and what they might or might not have. Looking around at adjacent books that might spawn further ideas. The indexing now is much more thorough and way better, but I see younger peers get less out of the new search than they could.
GPS vs reading a map. I keep my GPS oriented north which gives me a good sense of which way the streets are headed at any one time, and a general sense of where I am in the city. A lot of people just drive where they are told to go. Firefighters (and pizza delivery) still learn all the streets in their districts the old school way.
Some crutches are real. I've yet to meet someone who opted for a calculator instead of putting in the work with math who ended up better at math. It might be great for getting through math, or getting math done, but it isn't better for learning math (except to plow through math already learned to get to the new stuff).
So all three of these share the common element of "there is a better way now", but at the same time learning it the old way better prepares someone for when things don't go perfectly. Good math skills can tell you if you typoed on the calculator. Map knowledge will help with changes to traffic or street availability.
We see students right now using AI to avoid writing at all. That's great that they're learning a tool which can help their deficient writing. At the same time, their writing will remain deficient. Can they tell the tone of the AI-generated email they're sending their boss? Can they fix it?
Utilizing a lively oral tradition at the same time as a written one is superior to relying on either alone. And it's the same with our current AI tools. Using them as a substitute for developing oral/written skills is a major step back. Especially right now, when those AI tools aren't very refined.
Nearly every college student I've talked to in the past year is using chatgpt as a substitute for oral/written work where possible. And worse, as a substitute for oral/written skills that they have still not developed.
Latency: maybe a year or two for the first batch of college grads who chatgpt'd their way through most of their classes, another four for med school/law school. It's going to be a slow-motion version of that video-game period in the 80s after Pitfall, when the market was flooded with cheap crap. Except that instead of unlicensed Atari cartridges, it's professionals.
That's probably why the act of shifting from an oral to a written culture was deeply controversial and disruptive, but also somewhat natural. Though the texts we have are written, and so they probably make the transition seem smoother than it really was. I don't know enough to speak to that.
Could you share a source for this? The research paper I found has a different hypothesis; it links the slow transition to writing to trust, not an "old-school's attitude towards writing". Specifically the idea that the institutional trust relationships one formed with students, for example, would ensure the integrity of one's work. It then concludes that "the final transition to written communications was completed only after the creation of institutional forms of ensuring trust in written communications, in the form of archives and libraries".
So essentially, anyone could write something and call it Plato's work. Or take a written copy of Plato's work and claim they wrote it. Oral tradition ensured only your students knew your work, and you trusted them to not misattribute it. Once libraries and archives came to exist, though, they could act as a trustworthy source of truth where one could confirm whether some work was actually Plato's or not, and so scholars got more comfortable writing.
[1] https://www.researchgate.net/publication/331255474_The_Attit...
So it’s not like “kids these days”, no. To be honest, I don’t know how generative AI tools, which arguably take away most of the “create” and “learn” parts, are relevant to the question of differences between different mediums and how those mediums influence how we create and learn. (There are ML-based tools that can empower creativity, but they don’t tend to be advertised as “AI” because they are a mostly invisible part of some creative tool.)
What is potentially relevant is how interacting with a particular kind of generative ML tool (the chatbot) for the purposes of understanding the world can be bringing some parts of human oral tradition (though lacking communication with actual humans, of course) and associated mental states.
* See https://en.wikipedia.org/wiki/Marshall_McLuhan#Movable_type and his most famous work
Not exactly.
We have accounts from figures who became famous by going against popular opinion, who aired those thoughts. It probably was not the mainstream belief, in that place, at that time. Don't try and judge Ancient Greece by Socrates or Plato - they were celebrities of the controversial.
And AI will make us lazier and reduce the amount of cognition we do; not that I'm arguing against using AI.
But the downsides must be made clear.
I think it's obvious why it would be bad for people to stop thinking.
1. We need people to be able to interact with AI. What good is it if an AI develops some new cure but no one understands or knows how to implement it?
2. We need people to scrutinize an AI's actions.
3. We need thinking people to help us achieve further advances in AI too.
4. There are a lot of subjective ideas for which there are no canned answers. People need to think through these for themselves.
5. Also, a world of hollowed-out humans who can't muster the effort to write a letter to their own kids terrifies me [0]
I could think of more, but you could also easily ask ChatGPT.
[0]: https://www.forbes.com/sites/maryroeloffs/2024/08/02/google-...
If you are not expected to remember everything like the ancient Greeks were, you are not training your memory as much, and it will be worse than if you did.
Now do I think it's fair to say AI is to reading/writing what reading/writing was to memorizing? No, not at all. AI is nowhere near as revolutionary, and we are not even close to AGI.
I don’t think AGI will be made in our lifetime, what we’ve seen now is nowhere near AGI, it’s parlor tricks to get investors drooling and spending money.
Why not force everyone to start from first principles then?
I think learning is tied to curiosity and curiosity is not tied to difficulty of research
i.e. give a curious person a direct answer and they will go on to ask more questions, give an incurious person a direct answer and they won't go on to ask more questions
We all stand on the shoulders of giants, and that is a _good_ thing, not bad
Forcing us to forgo the giants and claw ourselves up to their height may have benefits, but in my eyes it is a way less effective means of acquiring knowledge
The compounding force of knowledge is awesome to behold, even if it can be scary
It's like the struggle that we've all had when learning our first programming language. If we weren't forced to wrestle with compilation errors, our brains wouldn't have adapted to the mindset that the computer will do whatever you tell it to do and only that.
There's a place for LLMs in learning, and I feel like it satisfies the same niche as pre-synthesized Medium tutorials. It's no replacement for reading documentation or finding answers for yourself though.
LLMs will definitely be a technology that widens the knowledge gap at the same time that it improves access to knowledge. Just like the internet.
30 years ago people dreamed about how smart everyone would be with humanity's knowledge instantly accessible. We've had Wikipedia for a while, but what's the take-up rate of this infinite amount of information? Most people prefer to scroll rage-bait videos on their phones (content that doesn't give them knowledge or even make them feel better, just makes them angry).
Of course it's amazing to hear every once in a while about the guy who maintains a vim plugin by coding on his phone in Pakistan... or whatever other thing that is enabled by the internet for people who suddenly have access to this stuff. That's not an effect on all humans on average; it's an effect on a few people who finally have a chance to take advantage of these tools.
In a YouTube interview I heard a physicist say that LLMs are helping physics research just because any physicist out there can now ask graduate-level questions about currently published papers - that is, gain access to knowledge that would have been hard to come by before - sharing knowledge across sub-domains of physics by asking ChatGPT.
This echoes sentiments from the 2010s centered around hiring. Companies generally don’t want to hire junior engineers and train them—this is an investment with risks of no return for the company doing the training. Basically, you take your senior engineers away from projects so they can train the juniors, and then the juniors now have the skills and credentials to get a job elsewhere. Your company ends up in the hole, with a negative ROI for hiring the junior.
Tragedy of the commons. Same thing today, different mechanism. Are we going to end up with a shortage of skilled software engineers? Maybe. IMO, the industry is so incredibly wasteful in how engineers are allocated and what problems they are told to work on that it can probably deal with shortages for a long time, but that's a separate discussion.
- given context c, I tried ideas a, b, and c. Were there other options that I missed?
- based on this plan, do you see any missed efficiencies?
etc etc
I'm not seeking answers; I'm trying to avoid costly dead ends.
A LOT of the time the things I ask LLMs for are to avoid metaphorically wading through a garbage dump looking for a specific treasure. Filtering through irrelevant data and nonsense to find what I'm looking for is not personal development. What the LLM gives back is often a very much better jumping off point for looking through traditional sources for information.
Specifically, asking a question and getting an answer is not a general path to learning. Being asked a question and answering it yourself is - somewhat regardless of whether you are correct or not.
> you're going to learn much more with the latter approach than the former
that the downside is a lack of deep knowledge that would enable better solutions in the long term
When I have a question about any topic and I ask ChatGPT, I usually chat about more things, coming up with questions based on the answer - mostly stupid questions. I feel like I am taking in the information, analyzing it, and then diving deeper because I am curious. This is based on how I learn about stuff. I know I need to check a few things, and that it's not fully accurate, but the conversation flows in a direction I like.
Compare this to researching on the internet: there are some good aspects, but more often than not I end up reading an opinionated post by someone (no matter the topic, if you go deep enough you will land on an opinionated telling of the facts). That feels like someone decided what questions are important, what angles we need to look at, and what the conclusion should be. Yes, it is educational, but I am always left with lingering questions.
The difference is curiosity. If people are curious about a topic, they will learn. If not, they are happy with the answer. And that is not laziness. You cannot be curious about everything.
ChatGPT is in fact opinionated, it has numerous political positions ("biases") and holds some subjects taboo. The difference is that a single actor chooses the political opinions of the model that goes on to interact with many more people than a single opinion piece might.
You can also ask it to explain the subject like you’re 5, which might not feel appropriate when interacting with a human because that can feel burdensome.
All of this is heavily caveated by how dramatically wrong LLMs can be, though, and can be rendered moot if the individual in question is too trusting and/or isn’t aware of the tendency of LLMs to hallucinate, pull from bad training data, or match the wrong patterns.
The 'but' in that lies with how much freedom is given to the LLM. If constrained, its refusal to answer may become a somewhat triggering possibility.
So when you have a "curious" debate with ChatGPT what you're really doing is searching the internet through a filter, guided by your own and ChatGPT's biases about the subject, but still and always based on whatever you would have found by researching stuff on the internet.
You're still on the internet. It may feel like you've finally escaped but you haven't. The internet can now speak to you when you ask it, but it's still the internet.
The danger in ubiquitously available LLMs, which seemingly have an answer to any question, isn’t necessarily their existence.
The real danger lies in their seductive nature - in how tempting it becomes to immediately reach for the nearest LLM to provide an answer rather than taking a few moments to quietly ponder the problem on your own. That act of manipulating the problem in your head - critical thinking - is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.
I'll have a problem that I want to work on but getting started is difficult. Asking ChatGPT is almost frictionless; the next thing I know I'm working on the project, 8 hours go by, and I'm done. When I get stuck on some annoying library installation, ChatGPT solves it for me so I don't get frustrated. It allows me to enter and maintain flow states better than anything else.
ChatGPT is a really good way of avoiding procrastination.
I get the point you're trying to make. However, quietly pondering the problem is only fruitful if you have the right information. If you don't, best case scenario you risk wasting time reinventing the wheel for no good reason. In this application, an LLM is just the same type of tool as Google: a way to query and retrieve information for you to ingest. Like Google, the info you get from queries is not the end but the means.
As the saying goes, a month in the lab saves you a week in the library. I would say it can also save you 10 minutes with Claude/ChatGPT/Copilot.
Is hiring a private tutor also laziness?
If I were to reframe GP's point, it would be: having to figure out how to answer a question changes you a little. Over time, it changes you a lot.
Yes, of course, there is a perspective from which a month spent in the lab to answer a question that's well-settled in the literature is ~wasted. But the GP is arguing for a utility function that optimizes for improving the questioner.
Quietly pondering the problem with the wrong information can be fruitful in this context.
(To be pragmatic, we need both of these. We'd get nowhere if we had to solve every problem and learn every lesson from first principles. But we'd also get nowhere if no one were well-prepared and motivated to solve novel problems without prior art.)
Nearly all of learning relies on reinventing the wheel. Most personal projects involve reinventing wheels, but improving yourself by doing so.
"In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks."
And they seem to define, implicitly, “metacognitive load” as the cognitive and metacognitive effort required for learners to regulate their learning processes effectively, particularly when engaging in tasks that demand active self-monitoring, planning, and evaluation.
They analogize metacognitive laziness to cognitive offloading, where we have our tools do the difficult cognitive tasks for us, which robs us of opportunities to develop and ultimately makes us dependent on those tools.
This sounds like parents complaining when we use Google Maps instead of a folding map. Am I worse at reading a regular map? Possibly. Am I better off overall? Yes.
Describing it as "laziness" is reductive. "Dependence on [_____] assistance" is the point of all technology.
I will note two things though.
1. Not all technology creates "dependence". Google Maps removes the need to carry bulky maps or buy new ones to stay updated, but someone who knows how to read Google Maps will know how to read a normal map, even if they're not as quick at it.
2. The best technology isn't defined by the "dependence" it creates, or even the level of "assistance" it provides, but for what it enables. Fire enabled us to cook. Metalworking enabled us to create a wealth of items, tools and structures that wouldn't exist if we only had wood and stone. Concrete enabled us to build taller and safer. Etc.
It's still unclear what AI chatbots are enabling. Is LLMs' big claim to fame allowing people to answer problem sets and emails with minimal effort? What does this unlock? There's a lot of talk about allowing better data analysis, saving time, and vague claims of an AI revolution, but until we see X, Y and Z, and can confidently say "yeah, X, Y and Z are great for mankind, and they couldn't have happened without chatbots", it's fair for people to keep complaining about the change and downsides AI chatbots are bringing about.
AI doesn’t provide directions, it navigates for you. You’re actively getting stupider every time you take an LLMs answer for granted, and this paper demonstrates that people are likely to take answers for granted.
On one hand, this reminds me of how all of the kids were going to be completely helpless in the real world because "no one carries a calculator in their pocket". Then calculators became something ~everyone has in their pocket (and the kids ended up just fine).
On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
The answer is probably somewhere in the middle: leveraging LLMs as a learning aid, rather than LLMs being the final stop.
I have recently seen GenZ perplexed by card games with addition and making change. For millennials, this is grade school stuff.
I'm not about to divide 54,432 by 7.6, even though I was taught how to. I'll pull out my phone.
On the other end, I'm not going to pull out my phone to figure out I owe you $0.35.
I think the point I was trying to make still stands.
That is not going away. Learning better prompts, learning when to ignore AI, learning how to take information and turn it into something practical. These new skills will replace the old.
How many of us can still...
- Saddle a horse
- Tell time without a watch
- Sew a shirt
- Create fabric to sew a shirt
- Hunt with primitive tools
- Make fire
We can shelter children from AI, or we can teach them how to use it to further themselves. Talk to the Amish if you want to see how it works out when you forgo anything that feels too futuristic. A respectable life, sure. But would any of us reading this choose it?
Yes, this is what I meant by the calculator part of my comment. You've got some other good examples.
>learning when to ignore AI, learning how to take information and turn it into something practical.
This is what I meant by using LLMs as a tool rather than an end.
I have some friends who use ChatGPT for everything. From doing work to asking simple questions. One of my friends wanted a bio on a certain musician and asked ChatGPT. It's a little frightening he couldn't, you know, read the Wikipedia page of this musician, where all of the same information is and there are sources for this material.
My mom said she used ChatGPT to make a "capsule wardrobe" for her. I'm thinking to myself (I did not say this to her)... you can't just like look at your clothes and get rid of ones you don't wear? Why does a computer need to make this simple decision?
I'm really not sure LLMs should ever be used as a learning aid. I have never seen a reason to use them over, you know, searching something online. Or thinking of your own creative story. If someone can make a solid case as to why LLMs are useful, I would like to hear it.
This is like when CEOs hire outside consulting firms to do layoffs for them. Pinning the pain of loss on some scapegoat makes it more bearable.
I use ChatGPT (or Gemini) instead of web searches. You can blame the content and link farms that are top of the search results, and the search engines focusing on advertising instead of search, because we're the product.
Why your friend doesn't know about Wikipedia is another matter; if I wanted a generic info page about some topic I'd go directly there. But if I wanted to know if Bob Geldof's hair is blue, I might ask an LLM instead of reading the whole Wikipedia page.
I also ask LLMs for introductory info about programming topics I don't know about, because I don't want to go to Google and end up on w3schools, geeksforgeeks, and crap like that.
I don't really trust LLMs for advanced programming topics, you know, what people pay me for. But they're fine for giving me a function signature or even a small example.
Realistically my guess is that the bar for broad knowledge and ability to get to details quickly will increase. There's a lot of value in understanding multiple disciplines at a mediocre level if you can very quickly access the details when needed. Especially since learning speed tends to get slower and slower the deeper you go.
Also since every time I've needed to do something complicated, even if I knew the details it was important enough to double check my knowledge anyway.
We don't teach slide rules and log tables in school anymore. Calculators and computers have created a huge metacognitive laziness for me, and I teach calculus and have a PhD in statistics. I barely remember the unit circle except for multiples of pi/4 radians. I can do it in multiples of pi/6 but I'm slower.
But guess what? I don't think I'm a worse mathematician because I don't remember these things reflexively. I might be a little slower getting the answer to a trivial problem, but I can still find a solution to a complex problem. I look up integral forms in my pocket book of integrals or on Wolfram Alpha, because even if I could derive the answer myself I don't think I'd be right 100% of the time. So metacognitive laziness has set in for me already.
But I think as long as we can figure out how to stop metacognitive laziness before it turns into full-fledged brain-rot, then we'll be okay. We'll survive as long as we can still teach students how to think critically, and figure out how to let AI assist us rather than turn us into the humans on the ship from WALL-E. I'm a little worried that we'll make some short-term mistakes (like not adapting our curriculum fast enough), but it will work out.
Even outside of math and computers, when was the last time you primed a well pump or filled an oil lamp? All of these tasks have been abstracted away, freeing us to focus on ever-more-specialized pursuits. The tasks that are useful today will be abstracted away too, and for the better.
But man I cringe when I see 18 year old students reach for a calculator to multiply something by .1.
Personally speaking, I find being able to ask ChatGPT continually more nuanced questions about an initial answer the one clear benefit over a Google search, where I have diminishing marginal returns on my inquisitiveness for the time invested over subsequent searches. The more precisely I am able to formulate my question on a traditional search engine, the harder it is for non-SEO optimized results to appear: it's either meant more for a casual reader with no new information, or is a very specialized resource that requires extensive professional background knowledge. LLMs really build that bridge to precisely the answers I want.
I've heard stories of junior engineers falling into this trap. They asked the chatbot everything rather than exposing their lack of knowledge to their coworkers. And if the chatbot avoids blatant mistakes, junior engineers won't recognize when the bot makes a subtle one.
If I am not motivated to find them and test my own knowledge, how do I change that motivation?
It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Not criticising you in particular, but this does sound to me like this approach has a good possibility of just reinforcing existing biases
In fact the approach sounds very similar to "find a wikipedia article and then go dig through the sources to find the original place that the answers I want were published"
One thing I do have to be mindful of is asking the AI to check for alternatives, for dissenting or hypothetical answers, and sometimes I just ask it to rephrase to check for consistency.
But doing all of that still takes way less time than searching for needles buried by SEO optimized garbage and well meaning but repetitious summaries.
“Verify that” and then ChatGPT will do a real time search and I can read web pages. Occasionally, it will “correct itself” once it does a web search
There was a story a couple days ago about a neural network built on a single photonic chip. I fed the paper to ChatGPT and was able to use it to develop a much more meaningful and comprehensive understanding of what the chip actually delivered, how it operated, the fundamental operating principles of core components and how it could be integrated into a system.
The fact that I now have a tireless elucidator on tap to help explore a topic (hallucination caveats notwithstanding) actually increases my motivation to explore dense technical information and understanding of new concepts.
The one area where I do think it is detrimental is my willingness to start writing content on a proverbial blank sheet of paper. I explore the topic with ChatGPT to get a rough outline, maybe some basic content, and then take it from there.
The more youngsters skip the hassle of banging their heads on some topic, the less able they will be to learn at a later age.
There's more to learning than getting information, it's also about processing it (which we are offloading to LLMs). In fact I'd say that the whole point of going through school is to learn how to process and absorb information.
That might be the cognitive laziness.
This is a pretty big caveat to the goal of
> develop a much more meaningful and comprehensive understanding
Which is still my biggest issue with LLMs. The little I use of them, the answers are still confidently wrong a lot of the time. Has this changed?
Your comment seems like a good example of metacognitive laziness: not bothering to formulate your own definition from the examples in the abstract and the meaning of the words themselves. Slothful about the process of thinking for yourself.
The writer has the responsibility to be clear.
So metacognitive laziness would be the lack of such processes.
> When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently.
- I realized about 20y-25y ago that I could run a Web search and find out nearly any fact, probably one-shot but maybe with 2-3 searches' worth of research
- About 10-15y ago I began to have a connected device in my pocket that could do this on request at any time
- About 5y ago I explicitly *stopped* doing it, most of the time, socially. If I'm in the middle of a conversation and a question comes up about a minor fact, I'm not gonna break the flow to pull out my screen and stare at it and answer the question, I'm gonna keep hanging out with the person.
There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
I don't miss it, but I have become keenly aware of how tethered my consciousness is to facts available via Web search, and I don't know that I love outsourcing that much of my brain to places beyond my control.
And work on learning some trivia purely to help you out with memory.
A good example, but imagine the days of our ancestors:
Remember that game we used to play, where we'd find out who could see birds from the farthest distance? Yeah, glasses ruined that.
The anecdotes from practitioners using GenAI in this way suggest it’s a good tool for experienced developers because they know what to look out for.
Now we admit folks who don’t know what they’re doing and are in the process of learning. They don’t know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
Studies such as this are hard but important. Interesting one here even though the sample is small. I wonder if anyone can repeat it.
Anecdote from a friend who teaches CS: this year a large number of students started adding unnecessary `break` instructions to their C code, like so:
  while (condition) {
      do_stuff();
      if (!condition) {
          break;
      }
  }
They asked around and realized that the common thread was ChatGPT - everyone who asked how loops work got a variation of "use break() to exit the loop", so they did. Given that this is not how you do it in CS (not only is it unnecessary, it also makes your formal proofs more complex), they had to make a general one-time exception and add disclaimers in exams reminding students to do it "the way you were taught in class".
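For reference, a minimal sketch of the shape the course presumably expected - the while header alone controls exit, which keeps the loop invariant simple for formal proofs:

  /* equivalent loop without the redundant break: the while header
     already re-checks the condition after every iteration */
  while (condition) {
      do_stuff();
  }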
Well - they know that break is not a function and you don't. Thanks ChatGPT.
The exercise was to implement binary search given the textbook specification, without any errors. An algorithm they had probably implemented in their first-year algorithms course at the very least. The students could write any tests they liked and add any assertions they thought would be useful. My colleague verified each submission against a formal specification. The majority of submissions contained errors.
For a simple algorithm that a student at that level could be reasonably expected to know well!
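To make it concrete, here is a minimal sketch (my illustration, not the actual course spec) of the kind of submission that should pass, assuming the usual textbook "index of key in a sorted array, or -1" formulation. Note the midpoint computed as lo + (hi - lo) / 2: the classic (lo + hi) / 2 overflow is exactly the kind of error a formal specification catches.

  #include <assert.h>
  #include <stddef.h>

  /* Assumed spec: return an index i with a[i] == key, or -1 if key
     is absent. Requires a[0..n-1] sorted in ascending order. */
  static int binary_search(const int *a, size_t n, int key)
  {
      size_t lo = 0, hi = n;                /* search window is [lo, hi) */
      while (lo < hi) {
          size_t mid = lo + (hi - lo) / 2;  /* avoids (lo + hi) overflow */
          if (a[mid] < key)
              lo = mid + 1;
          else if (a[mid] > key)
              hi = mid;
          else
              return (int)mid;
      }
      return -1;
  }

  int main(void)
  {
      const int a[] = {1, 3, 5, 7, 11};
      assert(binary_search(a, 5, 7) == 3);   /* present */
      assert(binary_search(a, 5, 4) == -1);  /* absent */
      assert(binary_search(a, 0, 1) == -1);  /* empty array */
      return 0;
  }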
Now... ChatGPT and other LLM-based systems, as far as I understand, cannot do formal reasoning on their own. It cannot tell you, with certainty, that your code is correct with regards to a specification. And it can't tell you if your specification contains errors. So what are students learning using these tools?
(This might work best if you have one LLM critique the code generated by another LLM, eg bouncing back and forth between Claude and ChatGPT)
You can know enough in X to allow you to do Y together with X, which you might not have been able to before.
For example, I'm a programmer, but horrible at math. I want to develop games, and I technically could, but all the math stuff makes it a lot harder sometimes to make progress. I've still managed to make and release games, but math always gets in the way. I know exactly how I want it to behave and work, but I cannot always figure out how to get there. LLMs help me a lot with this, where I can isolate those parts into small black boxes that I know they give me the right thing, but not 100% sure about how. I know when the LLM gives me the incorrect code, because I know what I'm looking for and why, only missing the "how" part.
Basically, it's like having 3rd-party libraries you don't fully understand the internals of, but can still use granted you understand the public API - except you keep them in your code base and pepper them with unit tests.
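As a hypothetical illustration of one of those black boxes (my example, not anything specific from my games): a tiny reflection helper whose formula I couldn't derive myself, pinned down by tests for the behavior I actually need:

  #include <assert.h>
  #include <math.h>

  typedef struct { float x, y; } Vec2;

  /* LLM-provided black box: reflect v across a unit-length normal n,
     using r = v - 2(v.n)n. I trust the tests below, not my grasp of
     the derivation. */
  static Vec2 reflect(Vec2 v, Vec2 n)
  {
      float d = v.x * n.x + v.y * n.y;
      return (Vec2){ v.x - 2.0f * d * n.x, v.y - 2.0f * d * n.y };
  }

  int main(void)
  {
      /* Bouncing off a horizontal floor should flip only the y component. */
      Vec2 r = reflect((Vec2){1.0f, -1.0f}, (Vec2){0.0f, 1.0f});
      assert(fabsf(r.x - 1.0f) < 1e-6f);
      assert(fabsf(r.y - 1.0f) < 1e-6f);
      return 0;
  }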
No, which is why people who don't pick up on the nuances of programming - no matter how often they use LLMs - will never be capable programmers.
And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of text, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs, especially beginners, will be confused and will be led down the wrong path, potentially outsourcing rational thought to something that just sounds good but actually isn't.
Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. As probably the last generation of old-school software engineers, who were trained on coffee and tears of frustration and who really had to work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by LLMs.
Are you considering the full "reasoning" it does when you're saying this? AFAIK, they're meant to be "rambling" like that, exploring all sorts of avenues and paths before reaching a final conclusive answer that is still "ramble-like". I think the purpose seems to be to layer something on top that can finalize the answer, rather than just taking whatever you get from that and use it as-is.
> Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. As probably the last generation of old-school software engineers, who were trained on coffee and tears of frustration and who really had to work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by LLMs.
I started coding just before Stack Overflow got popular, and remember the craze when it did get popular. Blog posts about how Stack Overflow would create lazy devs were all over the place, with people saying it was the end of the real developer. Not arguing against you or anything; I just find it interesting how sentiments like these keep repeating over time, with just minor details changing.
2. Leonhard Euler criticized the use of logarithm tables in calculating: in his 1748 "Introductio in analysin infinitorum" he insisted on deriving logarithms from first principles
3. William Thomson (Lord Kelvin) initially dismissed mechanical calculators, stating in an 1878 lecture at Glasgow University that they would make students "neglect the cultivation of their reasoning powers"
4. Henry Ford in his autobiography "My Life and Work" (1922) quoted a farmer who told him in 1907 that gasoline tractors would "make boys lazy and good for nothing" and they'd "never learn to farm"
5. In 1877, the New York Times published concerns from teachers about students using pencils with attached erasers, claiming it would make them "careless" because they wouldn't have to think before writing. The editorial warned it would "destroy the discipline of learning"
6. In "Elements of Arithmetic," (1846) Augustus De Morgan criticized the use of pre-printed multiplication tables, saying students who relied on them would become "mere calculative mechanism" instead of understanding numbers
7. In his 1906 paper "The Menace of Mechanical Music," John Philip Sousa attacked the phonograph writing that it would make people stop learning instruments because "the infant will be taught by machinery" and musical education would become "unnecessary"
8. In his 1985 autobiography "Surely You're Joking, Mr. Feynman!" Richard Feynman expressed concern about pocket calculators and students losing the ability to estimate and understand mathematical relationships
I could go on (Claude wrote 15 of them!). Twenty years from now (assuming AI hasn't killed us all) we'll look back and think that working with an LLM isn't the crutch people think it is now.
Write out a list of statements about how each generation thinks the next is lazy because they have it easy. For example, people who had to memorize trig or log tables would think those who had books to refer to were lazy. Those who used slide rules thought calculator-users were lazy. People who plowed with a horse thought early tractors were cheating. etc. etc. List as many examples as you can up to 50, leaning toward the mental rather than physical, but including both, and give specifics rather than generalities. My examples above are at the edge of what's acceptable; try to do better than I did.
That got me a bunch of abstractions like: "Librarians who memorized the Dewey Decimal System dismissed those who used card catalogs"
So I replied:
Sorry, I should have been clearer: this should be real-world examples, with cites if possible. As one example, your point about photographers is no good unless some specific manual photographer actually said something about light meter users -- e.g. "Ansel Adams once said that..." and it needs to be not-made-up.
That got me the first three. After I confirmed that those were good I got 4-8. I followed that with:
more please. it's okay to add in a few "XYZ is supposed to have said that..." as long as they aren't completely made up, and they are in the minority.
That got me the rest.
Maybe I'm trying to read and understand it too quickly, but I don't see anything in the abstract that supports that strong conclusion.
> The results revealed that: (1) learners who received different learning support showed no difference in post-task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self-regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
The ChatGPT group performed better on essay scores, they showed no deficit in knowledge gain or transfer, but they showed different self-regulated learning processes (not worse or better, just different?).
If anything, my own conclusion from the abstract would be that ChatGPT is helpful as a learning tool as it helped them improve essay scores without compromising knowledge learning. But again, I only read the abstract, maybe they go into more details in the paper that make the abstract make more sense.
Some kids might pickup a calculator and then use it to see geometric growth, or look for interesting repeating patterns of numbers.
Another kid might just use it to get their homework done faster and then run outside and play.
The second kid isn't learning more via the use of the tool.
So the paper warns that the use of LLMs doesn't necessarily change what the student is interested in or how they are motivated, and that we might need to build checks on how the tool is being used into the tool itself, to reduce the impact of scenario 2.
From a learning perspective, it can also be a shortcut to getting something explained in several different ways until the concept "clicks".
However, I agree that that doesn’t really seem to be a negative over other methods.
That's the most convoluted conclusion I've ever seen.
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”.
Calculator laziness is long known. It doesn't cause meta-laziness but specific laziness.
I tend to learn by asking questions. I did this using Anki cards for years (What is this or that?), finding the answer on the back of the index card. Questions activate my thinking more than anything, along with my attempt at answering the question in my own terms.
My motto is: Seek first to understand, then to be understood (Covey). And I do this in engaging with people or a topic—-by asking questions.
Now I do this with LLMs. I have been exploring ideas I would never have explored had there not been LLMs, because I would not have had the time to research material for learning, read it, and create Q&A material for myself.
I even use LLMs to convert an article into Anki cards using Obsidian, Python, LLMs, and the Anki app.
Crazy times we are in.
This is very well-studied: https://en.wikipedia.org/wiki/Testing_effect [not a high-quality article, but should give an overview]
Humans are lazy by nature; they seek shortcuts.
So given the chance to go rote learning for years for an education which in most cases is simply a soon to be forgotten certification vs watching TikTok while letting ChatGPT do the lifting - this is all predictable, even without Behavioral Design, Hooked etc.
And the benefits usually rise with IQ level - nothing new here; that's the very definition of IQ.
Learning and academia is hard, and even harder for those with lower IQ scores.
A fool with a tool is still a fool and vice versa.
Motivation also seems to be at an all-time low. Why put in hours when a prompt can work wonders?
Reading a book is a badge of honor nowadays more than ever.
This is not obvious to me, and certainly is not the "definition" of IQ. There are tools that become less useful the more intelligent you are, such as multiplication tables. IQ is defined by a set of standardized tests that attempt to quantify human intelligence, and has some correlations with social, educational and professional performance, but it's not clear why it would help with use of AI tools.
Would you argue that having books/written words also made people more lazy and be able to remember less? Because some people argued (at the time) that having written words would make humanity less intellectual as a whole, but I think consensus is that it led to the opposite.
Most folks are projecting what the title says into their own emotion space and then riffing on that.
The authors even went so far as to boil the entire paper down into bullet points; you don't even need the PDF.
Yeah, or the abstract which is a bit vague.
And yes indeed, their ability to answer basic questions about coding on the same exam has drastically dropped versus last year.
There is a "plato" story on how he laments the invention of writing because now people don't need to memorize speeches and stuff.
I think there is a level of balance. Writing gave us enough efficiencies that the learned laziness made us overall more effective.
The internet in 2011 made us a bit less effective. I am not gonna lie; I spent a lot more time being able to get resources, whereas I would have to struggle on my own to solve a problem. You internalize one more than the other, but is it worth the additional time every time?
I worry about current students learning through LLMs just like I would worry about a student in 2012 graduating in physics when such a student had constant access to wolfram alpha.
Metacognition is really how the best of the best can continue to be at their best.
And if you don't use it, you lose it.
I’m also a skeptic of students using and relying on ChatGPT, but I’m cautious about using this abstract to come to any conclusions without seeing the full paper especially given that they’re apparently using “metacognitive laziness” in a specific technical way we don’t know about if we haven’t read the paper.
I'm not surprised if this makes some people lazier, since you don't need to do the legwork of reading - but how many people already read only the headlines of articles before they share them?
You can interrogate it at least. "Are you sure that's the correct answer? Re-think from the beginning without any assumptions" and you'll get a checklist you can mentally/practically go through yourself to validate.
Now, "Claude, fix that for me".
It has "AI" in the title, so it's a hot take.
Ridiculous that academic work on the technology of education is behind a paywall and not open access. Stinks.
I understand it is a bit apples to oranges, but I'm curious about people's take.
I think a comparison with calculators is possible, but the degree to which calculators are capable of assisting us is so incomparably smaller that the comparison would be meaningless.
Smart phones changed society a lot more than calculators did and now AI is starting to do the same, albeit in a more subtle manner.
Treating AI like it's just a calculator seems naïve/optimistic. We're still reeling from the smart phone revolution and have not solved many of the issues it brought upon its arrival.
I have a feeling the world has become a bit cynical and less motivated to debate how to approach these major technological changes. There have been too many of them in too short a time, and now everyone has a whatever attitude towards the problems these advancements introduce.
I'm sure my friends will RUSH to read the article now...
Even if the computer is doing all the thinking, it's still a tool. Do you know what to ask it? Can you spot a mistake when it messes up (or you messed up the input)? Can you simplify the problem and figure out what the important parts of the problem are? Do you even know to do any of that?
Sure, thinking machines will sometimes be autonomous and not need you to touch them. But when that's the case, your job won't be to just nod along to everything the computer says, you won't have a job anymore and you will need to find a new job (probably one where you need to prompt and interpret what the AI is doing).
And yes, there will be jobs where you just act as an actuator for the thinking machine. Ask an Amazon warehouse worker how great a job that is :/
Everything is the same as with calculators.
It’s not to say we shouldn’t do our best to understand and provide guardrails, but the kids will be fine.
"People have been complaining about this for thousands of years" is a potent counterargument to a lot of things, but it can't be applied to things that really didn't exist even a decade ago.
Moreover, the thing that people miss about "people have been complaining about this for thousands of years" is that the complaints have often been valid, too. Cultures have fallen. Civilizations have collapsed. Empires have disintegrated. The complaints were not all wrong!
And that's on a civilization-scale. On a more mundane day-to-day scale, people have been individually failing for precisely the same reasons people were complaining about for a long time. There have been lazy people who have done poorly or died because of it. There have been people who refused to learn who have done poorly or died because of it.
This really isn't an all-purpose "just shrug about it and move on, everything's been fine before and it'll be fine again". It hasn't always been fine before, at any scale, and we don't know what impact unknown things will have.
To give a historical example... nay, a class of historical examples... there are several instances of a new drug being introduced to a society, and it ripping through that society that had no defenses against it. Even when the society survived it, it did so at great individual costs, and "eh, we've had drugs before" would not have been a good heuristic to understand the results with. I do not know that AIs just answering everything is similar, but at the moment I certainly can't prove it isn't either.
Most people my age will tell you that they stopped reading as a teenager because of the effect of smartphones. I was a voracious reader and only relearnt to read last year, 10 years after I got my first smartphone as an older teenager. These things are impactful and have affected a lot of people's potential. And they have also made our generation very prone to mental health issues - something that is incredibly palpable if you are within gen Z social circles like I am. It's disastrous and cannot be overstated. I can be very sure I would be smarter and happier if technology had stagnated at the level it was at when I was a younger child/teen. The old internet and personal computers, for example, only helped me explore my curiosity. Social media and smartphones have only destroyed it. There are qualitative differences between some technological advancements.
Not to mention the fact that gen alpha are shown to have terrible computer literacy because of the ease of use, discouragement of customisation, and corporate monopoly over smartphones. This bucks the trend, from gen X to gen Z, of generations becoming more and more computer-native. Clearly, upward trends in learning due to advancements in technology can be reversed. They do not always go up.
If kids do not learn independent reasoning because of reliance on LLMs, yes, that will make people stupider. Not all technology improves things. I watched a really great video recently where someone explained the change in the nature of presidential debates through the ages. In Victorian times, they consisted of hours-long oratory on each side, with listeners following attentively. In the 20th century the speeches gradually became a little shorter and more questions were added to break things up. In most recent times, every question has come with a less-than-a-minute answer, simpler vocabulary, few hard facts or statistics, etc. These changes map very well to changes in the depth at which people were able to think, given the primary information source they were using. There is a good reason why reading is still seen as the most effective form of deep learning despite technological advancement. Because it is.
Maybe we'll end up as a society of a few elites who still know how to research, think, and/or write with LLMs digesting that and regurgitating it for the masses.
An example: I have no clue about React. I do know why I don't like to use React and why I have avoided it over the years. I describe to some ML tool the difficulties I've had learning React and using it productively... and voila, it plots a chart through the knowledge that, kinda, makes me want to learn React and use it.
It’s like, the human ability to form an ontology in the face of mystery even if it is in accurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.
Another thing I realized lately, as ML has taken over my critical faculties, is that it's really only useful for things that are already known by others. I can't ask ML to give me some new, groundbreaking idea about something - everything it suggests has already been thought, somewhere, by a real human - and thus it's not new or groundbreaking. It's just contextually - in my own local ontological universe - filling in a mystery gap.
Pretty fun times we’re having, but I do fear for the generations that will know and understand no other way than to have ML explain things for them. I don’t think we have the ethics tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow ..
--Socrates on writing
I guess that is the curse of evolution/specialization.
..."laziness"...
In the battle cry of the philosopher: DEFINE YOUR TERMS!!
What they really mean: new and different. Outside-the-box. "Oh no, how will we grade this?!?" a threat to our definition and control of knowledge.