I recently worked on something very complex I don't think I would have been able to tackle as quickly without AI; a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues, but at that point the AI clearly also had no real idea what it was doing, and just made things worse.
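For a sense of why the framework has so many moving parts: Sugiyama-style layout runs in stages (cycle removal, layer assignment, crossing minimization, coordinate assignment via e.g. Brandes-Köpf). Just the layering stage, in its simplest longest-path form, looks roughly like this (an illustrative sketch, not code from any particular library; it assumes the input graph is already acyclic):

```python
from collections import defaultdict

def assign_layers(edges):
    """Longest-path layering: a node's layer is the length of the
    longest path reaching it from any source. Assumes a DAG."""
    preds = defaultdict(list)
    nodes = set()
    for u, v in edges:
        preds[v].append(u)
        nodes.update((u, v))

    layer = {}
    def depth(n):
        if n not in layer:
            ps = preds[n]
            layer[n] = 1 + max(depth(p) for p in ps) if ps else 0
        return layer[n]

    for n in nodes:
        depth(n)
    return layer

# A diamond: a -> b -> d, a -> c -> d
print(sorted(assign_layers([("a", "b"), ("b", "d"), ("a", "c"), ("c", "d")]).items()))
# [('a', 0), ('b', 1), ('c', 1), ('d', 2)]
```

Real implementations then insert dummy nodes for edges that span more than one layer, and that's where many of the subtle interactions between the stages live.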
So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.
So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
Writing code has just typically been how I've needed to solve those problems.
That has increasingly shifted to "just" reviewing code and focusing on the architecture and domain models.
I get to spend more time on my actual job.
Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.
One ends up like the clueless manager type who hasn't touched a computer in 30 years. At which point there will be little reason for the actual job owners to retain their services.
Computer programming as a whole would come to rely on the canned experience of the AI training set, producing AI churn as an ever-growing share of the available training code, plateauing both itself and the AI, with the dubious prospect of reaching the Singularity as its only hope out of this.
Because they didn't understand the architecture or the domain models otherwise.
Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.
I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?
If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?
Funny story-- I asked an LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True-- the caller is an existing customer of one of our competitors". Not at all what I meant.
Yes, and there's often a benefit to having a human have an understanding of the concrete details of the system when you're trying to solve problems.
> That has increasingly shifted to "just" reviewing code
It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.
You're like the 836453th person to say this. It's not untrue, but many of us will take writing over reviewing any day. Reviewing is like the worst part of the job.
But I think the cognitive debt framing is useful: reading and approving code is not the same as building the mental model you get from writing, probing, and breaking things yourself. So the win (more time on problem solving) only holds if you're still intentionally doing enough of the concrete work to stay anchored in the system.
That said, if you're someone like me, I don't always need to fully master everything, but I do need to stay close enough to reality that I'm not shipping guesses.
[0] https://alisor.substack.com/p/i-never-really-wrote-code-now-...
Air quotes and more and more general words. The perfect mercenary's tools.
The buck stops somewhere for most of us. We have jobs, we are compelled to do them. But we care about how it is done. We care whether doing it in a certain way will give us short-term advantages but hinder us in the long term. We care if the process feels good or bad. We care if it feels like we are in control of the process or if we are just swimming in a turbulent sea. We care about how predictable the tools we use are. Whether we can guess that something takes a month and not be off by weeks.
We might say that we are the perfect pragmatists (mercenaries); that we only care about the most general description of what-is-to-be-done that is acceptable to the audience, like solving business problems, or solving technical problems, or in the end—as the pragmatist sheds all meaning from his burdensome vessel—just solving problems. But most of us got into some trade, or hobby, or profession, because we did concrete things that we concretely liked. And switching from keyboards to voice dictation might not change that. But seemingly upending the whole process might.
It might. Or it may not. Certainly could go in more than one direction. But to people who are not perfect mercenaries or business hedonists[1] these are actual problems or concerns. Not nonsense to be dismissed with some “actual job” quip, which itself is devoid of meaning.
The people framing this as "cognitive debt" are measuring the wrong thing. You're not losing the ability to think - you're shifting what you think about. That's not a bug, it's the whole point.
The place you need to get to is understanding that you are being asked to ensure a problem is solved.
You’re only causing a larger problem by “solving” issues without both becoming an SME and ensuring that knowledge can be held by the organization, at all levels that the problem affects (onboarding, staff, project management, security, finance, auditors, c-suite.)
I do not want to be a supervisor of AI agents. I do not want to engineer prompts, I want to engineer software.
If you spend all your time on that, you might actually lose the ability to actually do it. I find a lot of "non core" tasks are pretty important for skill building and maintenance.
Some people learn from rote memorization, some people learn through hands on experience. Some people have "ADHD brains". Some people are on the spectrum. If you visit Wikipedia and check out Learning Styles, there's like eight different suggested models, and even those are criticized extensively.
It seems a sort of parochial universalism has coalesced, but people should keep in mind we don't all learn the same.
ETA: I'd also like to say that learning from LLMs is vastly similar to, and in some ways more useful than, finding blogs on a subject. A lot of the time, say for Linux, you'll find instructions that, even if you perform them to a tee, go pear-shaped because of tiny environment variables or a single package update changing things. Even Photoshop tutorials are not free of this madness. I'm used to mostly-correct-but-just-this-side-of-incorrect instructions. LLMs are no different in a lot of ways. At least with them I can tailor my experience to just what I'm trying to do and spend time correcting that, versus loading up a YT video trying to understand why X doesn't work. But I can understand if people don't get the same value as I do.
if you're a consultant/contractor that's bid a fixed amount for a job: you're incentivised to slop out as much as possible to complete the contract as quickly as possible
and then if you do a particularly bad job you'll probably be kept on to fix up the problems
vs. a permanent employee who is incentivised to do the job well, sign it off and move on to the next task
Trade offs around "room to do more of other things" are an interesting and recurring theme of these conversations. Like two opposites of a spectrum. On one end the ideal process oriented artisan taking the long way to mastery, on the other end the trailblazer moving fast and discovering entirely new things.
Comparing to the encyclopedia example: I'm already seeing my own skillset in researching online has atrophied and become less relevant. Both because the searching isn't as helpful and because my muscle memory for reaching for the chat window is shifting.
Yes, that is my experience. I have done some C# projects recently, in a language I am not familiar with. I used the interactive encyclopedia method, "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't think I know C# any better than when I started.
OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.
I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!
If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.
But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.
A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.
If I can explain briefly what our issue is: we've got a really complex graph, and need to show it in a way that makes it easy to understand. That by itself might be a lost cause already, but we need it fixed. The problem is that our graph has cycles, and dagre is designed for DAGs (directed acyclic graphs). Fortunately it has a step that removes cycles, but it does that fairly randomly, and that can sometimes dramatically change the shape of the graph by creating unintentional start or end nodes.
I had a way to fix that, but even with that, it's still really hard to understand the graph. We need to cut it up into parts, group nodes together based on shared properties, and that's not something dagre does at all. I'm currently looking into cola with its constraints. But I'll take another look at elk.
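For anyone curious what that cycle-removal step amounts to: a common greedy approach is a DFS that reverses any back edge it finds. Which edges get flipped depends entirely on traversal order, which is why the result can feel random and can invent start or end nodes that were never meaningful. A generic sketch (not dagre's actual implementation):

```python
def make_acyclic(nodes, edges):
    """Reverse the back edges found during DFS so the result is a DAG.
    Which edges get reversed depends on the order nodes are visited,
    which is why unconstrained cycle removal can reshape the graph in
    surprising ways."""
    visiting, done = set(), set()
    out = set()

    def dfs(u):
        visiting.add(u)
        for a, b in edges:
            if a != u:
                continue
            if b in visiting:      # back edge: it closes a cycle
                out.add((b, a))    # keep it, but reversed
            else:
                if b not in done:
                    dfs(b)
                out.add((a, b))
        visiting.discard(u)
        done.add(u)

    for n in nodes:                # seeding this order controls which
        if n not in done:          # nodes end up looking like sources
            dfs(n)
    return sorted(out)

print(make_acyclic(["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")]))
# [('a', 'b'), ('a', 'c'), ('b', 'c')] -- the c->a edge was reversed
```

Pinning the iteration order to the nodes you want treated as sources is one way to keep the step from inventing arbitrary entry points.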
With me it has been the opposite, perhaps because I was anti-AI before and because I know it is gonna make mistakes.
My most intense AI usage:
Goal: Homelab is my hobby, and I wanted to set up a private-tracker torrent box running fully through Proton VPN.
I am used to tools such as Ansible and the Linux operating system, but there were like 3 different tools to manage the torrents, plus a bunch of firewall rules so that in case Proton VPN drops, everything stops working instead of using my real IP address and snitching on me to my ISP.
I wanted everything to be as automated as possible with Ansible, so if everything catches fire, I can run the playbook and bring everything back online.
The whole setup took me 3 nights and I couldn't stop thinking about it during the day, like how can I solve this or that, the solution Perplexity/ChatGPT gave me broke something else so how could I solve that, etc.
I am using these tools more like a Google Search alternative than AI per se. I can see when they make mistakes because I know the thing I am asking for help with: homelab. I don't want to just copy and paste, and ironically, I have learned a ton about Proxmox (where I run my containers and virtual machines). I always say that I don't want just answers; show me how you got to that conclusion so I can learn it myself.
As long as you are aware that this is a tool and that it makes mistakes the same way as somebody's reply in any forum, you are good and should still feel motivated.
If you are using AI tools just for copy/paste expecting things to work without caring to understand what is actually happening (companies and IT teams worldwide), then you have a big problem.
- why bother, ask the llm
- relief... I can let the llm relieve me while I rest a bit and resume with some progress done
- inspiration... the llm follows my ideas and opens weird roads I was barely dreaming of (like asking a random "what if we try to abstract the issue even more" and getting actually creative ideas)
but then there are day-to-day operations and deadlines
Claude Code seems to be a much better paradigm. For novel implementations I write code manually while asking it questions. For things that I'm prototyping I babysit it closely and constantly catch it doing things that I don't want it to do. I ask it questions about why it built things certain ways and 80% of the time it doesn't have a good answer and redoes it the way that I want. This takes a great deal of cognitive engagement.
Rule nombre [sic] uno: Never anthropomorphize the LLM. It's a giant pattern-matching machine. A useful one, but still just a machine. Do not let it think for you because it can't.
Don't outsource the thinking to the AI, is what I mean. Don't trust it, but use it to talk to, to shape your thoughts, and to provide information and even ideas. But not the solution, because that has never worked for me for any non-trivial problem.
Funny - that's the hard part for me. I have yet to figure out what to use it for, since it seems to take longer than any other method of performing my tasks. Especially with regards to verifying for correctness, which in most cases seems to take as long or longer than just having done it myself, knowing I did it correctly.
I think you not asking questions about the code is the problem (insofar as it still is a problem). But it certainly has gotten easy not to.
But while I was able to understand it enough to steer the conversation, I was utterly unable to make any meaningful change to the code or grasp what it was doing. Unfortunately, unlike in the case you described, chatting with the LLM didn’t cut it as the domain is challenging enough. I’m on a rabbit hunt now for days, picking up the math foundations and writing the code at a slower pace albeit one I can keep up with.
And to be honest it’s incredibly fun. Applied math with a smart, dedicated tutor and the ability to immediately see results and build your intuition is miles ahead of my memories back in formative years.
I am sorry for being direct but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.
But I fully agree, for complex problems that require insight, LLMs can waste your time with their sycophancy.
Seriously though, I appreciated it because my curiosity got the better of me and I went down a quick rabbit hole on Sugiyama, comparative graph algorithms, and node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory that continues to prove useful in our business (often knowing what you don't know is the best initiative for learning). So yeah man, let's keep name-dropping pretentious technical details, because that's half the reason I surf this site.
And yes, I did use ChatGPT to familiarize myself with these concepts briefly.
Where I'm skeptical of this study:
- 54 participants, only 18 in the critical 4th session
- 4 months is barely enough time to adapt to a fundamentally new tool
- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?
- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch
Where the study might have a point:
Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.
[Edit]: Formatting
They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.
We're becoming increasingly like the Wall E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.
And it's not even that machines are always better, they only have to be barely competent. People will risk their life in a horribly janky self driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox
Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).
It seems more likely that there were only a handful of people who could. There still are a handful of people who can, and they are probably even better than in the olden times [1] (for example because there are simply more people now than back then.)
[1] https://oberlinreview.org/35413/news/35413/ (random first link from Google)
As for the productivity paradox, this discounts the reality that we wouldn't even be able to scale the institutions we're scaling without the tech. Whether that scaling is a good thing is debatable.
I can't stress this enough, Homer Simpson is a fictional character from a cartoon. I would not use him in an argument about economics any more than I would use the Roadrunner to argue for road safety.
I think they were right that something was lost in each transition.
But something much bigger was also gained, and I think each of those inventions were easily worth the cost.
But I'm also aware that one cost of the printing press was a century of very bloody wars across Europe.
But it’s a complete waste of time. What is the point of spending years memorising a book?
You seem like the kind of person that would still be eating rotten carcasses on the plains while the rest of us are sitting around a fire.
> We're becoming increasingly like the Wall E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.
You're right about the first part, wrong about the second part.
Pre-Gutenberg people could memorize huge texts because they didn't have that many texts to begin with. Obtaining a single copy cost as much as supporting a single well-educated human for weeks or months while they copied the text by hand. That doesn't include the cost of all the vellum and paper which also translated to man-weeks of labor. Rereading the same thing over and over again or listening to the same bard tell the same old story was still more interesting than watching wheat grow or spinning fabric, so that's what they did.
We're offloading our brains onto technology because it has always allowed us to function better than before, despite an increasing amount of knowledge and information.
> Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).
I find that to be a crazy opinion. Relative to thirty years ago, quality of life has risen significantly thanks to all three of those technologies (although I'd have a harder time arguing for Wikipedia versus the internet and Google) in quantifiable ways from the lowliest subsistence farmers now receiving real time weather and market updates to all the developed world people with their noses perpetually stuck in their phones.
You'd need some weapons grade rose tinted glasses and nostalgia to not see that.
In addition to these base skills, I also have specialized skills adapted to the modern world, that is my job. Combined with the internet and modern technology I can get to a level of proficiency that no one could get to in the ancient times. And the best part: I am not some kind of genius, just a regular guy with a job.
And I still have time to swipe on social media. I don't know what kind of brainless activities the ancient Greeks did, but they certainly had the equivalent of swiping on social media.
The general idea is that the more we offload to machines, the more we can allocate our time to other tasks, to me, that's progress, that some of these tasks are not the most enlightening doesn't mean we did better before.
And I don't know what economists mean by "productivity", but we certainly can buy more stuff than before, which means productivity must have increased somewhere (with some ups and downs). It may not appear in GDP calculations, but to me, it is the result that counts.
I don't count home ownership, because you don't produce land. In fact, the fact that land is so expensive is a sign of high global productivity. Since land is one of the few things we need and can't produce, the more we can produce of the other things we need, the higher the value of land is, proportionally.
Pre-literate people HAD TO memorise vast texts
Computers are much better at remembering text.
People will risk their and others' lives in a horribly janky car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
FTFY
That said:
TV very much is the idiot box. Not necessarily because of the TV itself but rather what's being viewed. An actually engaging and interesting show/movie is good, but last time I checked, it was mostly filled with low-quality trash and constant news bombardment.
Calculators do do arithmetic, and if you asked me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.
I got scared by how awfully my junior (middle? 5-11) school mathematics had slipped when helping my 9-year-old boy with his homework yesterday.
I literally couldn't remember how to carry the 1 when doing subtraction of 3-digit numbers! Felt literally idiotic having to ask an LLM for help. :(
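For what it's worth, the borrowing procedure is mechanical enough to write down (a toy digit-by-digit sketch, nothing more):

```python
def subtract_by_hand(a, b):
    """Column subtraction with borrowing, the way it's taught in school.
    Assumes a >= b and both non-negative."""
    top = [int(d) for d in str(a)][::-1]      # least significant digit first
    bottom = [int(d) for d in str(b)][::-1]
    bottom += [0] * (len(top) - len(bottom))  # pad the shorter number
    digits, borrow = [], 0
    for t, u in zip(top, bottom):
        d = t - u - borrow
        if d < 0:          # can't subtract: borrow 1 from the next column
            d += 10
            borrow = 1
        else:
            borrow = 0
        digits.append(d)
    return int("".join(str(d) for d in reversed(digits)))

print(subtract_by_hand(502, 178))  # 324
```

Column by column: 2 - 8 needs a borrow (12 - 8 = 4), then 0 - 7 - 1 needs another (10 - 8 = 2), then 5 - 1 - 1 = 3.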
Literacy, books, saving your knowledge somewhere else removes the burden of remembering everything in your head. But they don't come into play in any of those processes. So it's an immensely bad metaphor. A more apt one is the GPS, which only leaves you with practice.
That's where LLMs come in, and obliterate every single one of those pillars of any mental skill. You never have to learn a thing deeply, because it's doing the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.
There are ways to exploit LLMs to make your brain grow, instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems, instead of ready-made solutions. Only employ them for tasks you already know how to make perfectly. Don't depend on them.
But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.
If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.
Yes, I wrote this comment under someone else's comment before, but it seems to apply to yours even better.
That said, these kinds of studies are important, because they reveal that some cognitive changes are evidently happening. Like you said, it's up to us to determine if they're positive or negative, but as is probably obvious to many, it's difficult to argue for the status quo.
If it's a negative change, teachers have to go back to paper-and-pen essay writing, which I was personally never good at. Or they need to figure out stable ways to prevent students from using LLMs, if they are to learn anything about writing.
If it's a positive change, i.e., we now have more time to do "better" things (or do things better), then teachers need to figure out substitutes. Suddenly, a common way of testing is now outdated and irrelevant, but there's no clear thing to do instead. So, what do they do?
Here’s the key difference for me: AI does not currently replace full expertise. By contrast, there was never a “higher level of storage” that books couldn’t handle and only a human memory could.
I need a senior to handle AI with assurances. I get seniors by having juniors execute supervised lower risk, more mechanical tasks for years. In a world where AI does that, I get no seniors.
Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
a) serious, but we live on different planets
b) serious with the idea, tongue-in-check in the style and using a lot of self-irony
c) an ironic piece with some real idea
d) he is mocking AI maximalists
[1] https://www.nytimes.com/2025/01/15/books/review/open-socrate...
TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.
Yes, but also the extra wrinkle that this whole thing is moving so fast that 4 months old is borderline obsolete. Same into the future, any study starting now based on the state of the art on 22/01/2026 will involve models and potentially workflows already obsolete by 22/05/2026.
We probably can't ever adapt fully when the entire landscape is changing like that.
> Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.
The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.
AI is a complement to our own minds with both sides of this: Unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models, but we don't, and the AI runs on hardware which allows it to.
For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.
Were they? It seems that often the fears came true, even Socrates’
It hugely enhanced synthetic and contextual memory, which was a huge development.
AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.
Of course we identify with cognition in a way we didn't do with rote memory. But we should possibly identify more with synthetic and creative cognition - in the sense of exploring interesting problem spaces of all kinds - than with "I need code to..."
He may have been right... Maybe our minds work in a different way now.
But now? I almost never enter a new phone number anywhere. Maybe someone shares a contact with me, and I tap to add it to my contact list. Or I copy-paste a phone number. Even some people that I contact frequently, I have no idea what their phone number is, because I've never needed to "know" it, I just needed to have it in my contact list.
I'm not sure that this is a bad thing, but definitely is a thing.
Ah, well, more memory space for other stuff, eh? I suppose. But like what? I could describe other scenarios, in which I used to have more facts and figures memorized, but simply don't any more, because I don't need to. While perhaps my memory is freed up to theoretically store more other things, in practice, there's not much I really "need" to store.
Even if no longer memorizing phone numbers isn't especially bad, I'm starting to think that no longer memorizing anything might not be a great idea.
I think a better framing would be "abusing (using it too much or for everything) any new tool/medium can lead to negative effects". It is hard to clearly define what is abuse, so further research is required, but I think it is a healthy approach to accept there are downsides in certain cases (that applies for everything probably).
What do you mean? All of them were 100% right. Novels are brain softening, TV is an idiot box, and writing destroys memory. AI will destroy the minds of people who use it much.
Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary in each tribe for someone to learn the stories. We have much less of that today. Writing preserves accuracy much more (up to conquerors burning down libraries, whereas it would have taken genocide before), but to hear a person stand up and quote Desiderata from memory is a touching experience to the human condition.
Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg you can also witness a similar experience: reading for typos solidifies the story into your mind in a way that casual reading doesn't. In losing scribes we lost prioritization of texts and this class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...
Pulp fiction. I think very few people (but I would be disappointed if it was no one) would argue that Dan Brown's da Vinci Code is on the same level as War and Peace. From here magazines were created, even cheaper paper, rags some would call them (or use that to refer to tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment, text lost its solemnity. The importance of written word diminished on average as the words being printed became more banal.
TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:
Technology is a double edged sword, we may gain something but we also can and did lose some things. Whether it was progress or not is generally a normative question that often a majority agrees with in one sense or another but there are generational differences in those norms.
In the same way that overuse of a calculator leads to atrophy of arithmetic skills, overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool to write essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is because its conclusion seems so obvious that it may be too easy for some to believe and hide poor statistical power or p-hacking.
I also find exhausting the Socrates reference that's ALWAYS brought up in these discussions. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same thing as stopping thinking because an AI is doing the thinking for you.
We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and have it materialized in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?
To be fair, I think this one is true. There's a lot of great stuff you can watch on TV, but I'd argue that TV is why many boomers are stuck in an echo chamber of their own beliefs (because CNN or Fox News or whatever opinion-masquerading-as-journalism channel is always on in the background). This has of course been exacerbated by social media, but I can't think of many productive uses of TV other than Sesame Street and other kids' shows.
Still is.
Critical thinking, forming ideas, writing, etc.: those too are things that can atrophy if not used.
For example, a lot of people can't locate themselves without a GPS today.
To be frank, I see it as really similar to our muscles: don't want to lose it? Use it. Whether that is learning a language, playing an instrument, or the tasks LLMs perform.
In my opinion, they've almost always been right.
In the past two decades, we've seen the less-tech-savvy middle managers who devalued anything done on computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked among online communities.
In the programmers' world, you have seen people who said, "How hard could it be? It's just adding a new button/changing the font/whatever..."
And strangely, in the end those tech muggles were the insightful ones.
1: https://www.catharsisinsight.com 2: https://ashleyjuavinett.com
How about some more info on what their main conclusions are?
The hosts condemn the study’s "bafflingly weak" logic and ableist rhetoric, and advise skepticism toward "science communicators" who might profit from selling hardware or supplements related to their findings: one of the paper's lead authors, Nataliya Kosmyna, is associated with the MIT Media Lab and the development of AttentivU, a pair of glasses designed to monitor brain activity and engagement. By framing LLM use as creating a "cognitive debt," the researchers create a market for their own solution: hardware that monitors and alerts the user when they are "under-engaged". The AttentivU system can provide haptic or audio feedback when attention drops, essentially acting as the "scaffold" for the very cognitive deficits the paper warns against. The research is part of the "Fluid Interfaces" group at MIT, which frequently develops Brain-Computer Interface (BCI) systems like "Brain Switch" and "AVP-EEG". This context supports the hosts' suspicion that the paper’s "cognitive debt" theory may be designed to justify a need for these monitoring tools.
"Your Brain On Chat GPT" Paper Analysis
In this transcript, neuroscientist Ashley and psychologist Cat critically analyze a controversial paper titled "Your Brain On Chat GPT" that claims to show negative brain effects from using large language models (LLMs).
Key Issues With the Paper:
Misleading EEG Analysis:
* The paper uses EEG (electroencephalography) to claim it measures "brain connectivity" but misuses technical methods
* EEG is a blunt instrument that measures thousands of neurons simultaneously, not direct neural connections
* The paper confuses correlation of brain activity with actual physical connectivity

Poor Research Design:

* Small sample size (54 participants with many dropouts)
* Unclear time intervals between sessions
* Vague instructions to participants
* Controlled conditions don't represent real-world LLM use

Overstated Claims:

* Invented terms like "cognitive debt" without defining them
* Makes alarmist conclusions not supported by data
* Jumps from limited lab findings to broad claims about learning and cognition

Methodological Problems:

* Methods section includes unnecessary equations but lacks crucial details
* Contains basic errors like incorrect filter settings
* Fails to cite relevant established research on memory and learning
* No clear research questions or framework

The Experts' Conclusion:
"These are questions worth asking... I do really want to know whether LLMs change the way my students think about problems. I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition... We need to know these things as a society, but to pretend like this paper answers those questions is just completely wrong."
The experts emphasize that the paper appears designed to generate headlines rather than provide sound scientific insights, with potential conflicts of interest among authors who are associated with competing products.
My guess is the commenters who didn't like it had other reasons than the content itself.
Unfortunately, it's also being used by a lot of people who think they're smarter than they are to confirm their pre-existing biases with bad research.
I'm not saying ChatGPT doesn't make people stupid. It very well might (my hypothesis is that it just accelerates cognitive change: decline for many, incline for some). But this garbage is not how you prove it.
Literacy, books, saving your knowledge somewhere else removes the burden of remembering everything in your head. But they don't factor into any of those processes, so it's an immensely bad metaphor. A more apt one is the GPS, which takes away only the practice.
That's where LLMs come in, and obliterate every single one of those pillars of any mental skill. You never have to learn a thing deeply, because it's doing the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.
There are ways to exploit LLMs to make your brain grow instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems instead of ready-made solutions. Only employ them for tasks you already know how to do perfectly. Don't depend on them.
But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.
If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.
Yes, this is one of my favorite prompting styles.
If you're stuck on a problem, don't ask for a solution, ask for a framework for addressing problems of that type, and then work through it yourself.
Can help a lot with coming unstuck, and the thoughts are still your own. Oftentimes you end up not actually following the framework in the end, but it helps get the ball rolling.
There is no free lunch: if you use writing to "scaffold" your learning, you trade learning speed for a limited "neural pathways" budget that could otherwise connect two useful topics. And when you stop practicing your writing (or coding, as reported by some people who stopped coding due to AI), you feel that you are getting dumber, since you scaffolded your knowledge of the topic with writing or coding rather than doing the difficult work of learning it from more pervasive conceptions.
The best thing AI taught us is to not tie your knowledge to some specific task. It's overly reactionary to recommended task/action based education (even from an AI) in response to AI.
My wife had a similar experience. She had a college project where they had to drive up and down some roads and write about it; it was a group project. She bought a map, and noticed that after reading it she was more knowledgeable about the area than her sister, who also grew up in the same area.
I think AI is a great opportunity to learn more about the subject in question from books, and maybe even from the AI itself by asking it for sources; always validate your intel against more authoritative sources. The AI just saved you 10 minutes? You can spend those 10 minutes reading the source material.
"+4 and then -2 and then +6 and then -3. Aha! All makes sense! Cannot repeat the digit differences, and need to be whole numbers, so going to the next higher even number, which is 6, which is 3 when halved!"
And then I am kinda proud my brain still works, even if the found "pattern" is hilariously arbitrary.
What the druids/priests were really decrying was that people spent less time and attention on them. Religion was the first attention economy.
Funny enough, the reason he gave against books has now finally been addressed by LLMs.
This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like not having ADHD.
The study shows that the brain is not getting used. We will get stupid in the same way that people with office jobs get unhealthy if they don't deliberately exercise.
It certainly hasn't inhibited learning either. The most recent example is shaders. I started by having it just generate entire shaders based on descriptions, without really understanding the pipeline fully, and asking how to apply them in Unity. I've been generally familiar with Unity for over a decade but never really touched materials or shaders. The generated shaders were shockingly good and did what I asked, but over time I wanted to really fine tune some of the behavior and wound up with multiple passes, compute shaders, and a bunch of other cool stuff - and understanding it all on a deeper level as a result.
I haven't been diagnosed with ADHD or anything, but I also haven't been tested for it. It's something I have considered, but I think it's pretty underdiagnosed in Spain.
That must be how normal people feel.
One of my favorite things is that I no longer feel like I need to keep up with "framework of the year"
I came up over a decade ago, places I worked were heavy on Java and Spring. Frontends were Jquery back then. Since then I've moved around positions quite a bit, many different frameworks, but typically service side rendered MVC types and these days I work as an SRE. The last 5 years I've fiddled with frontend frameworks and SPAs but never really got into it. I just don't have it in me to learn ANOTHER framework.
I had quite a few projects, all using older patterns/frameworks/paradigms. Unfortunately these older paradigms don't lend themselves to "serverless" architecture. So when I want to actually run and deploy something I've gotta deploy it to a server (or ecs task). That shit starts to cost a bit of money, so I've never been able to keep projects running very long... typically because the next idea comes up and I start working on that and decide to spend money on the new things.
I've been working at a cloud native shop the last 7 years now. Damn, you can run shit CHEAP in AWS if you know what you're doing. I know what I'm doing for parts of that, using dynamodb instead of rds, lambdas instead of servers. But I could never get far enough with modern frontend frameworks to actually migrate my apps to these patterns.
Well, now it's easy.
"Hey Claude, look at this repo here, I want to move it to AWS lambdas + apigw + cloudfront. Break the frontend out into a SPA using vue3. I've copied some other apps and patterns {here} so go view those for how to do it"
And that's just the start.
I never thought I'd get into game development, but it's opened that up to me as well (though, since I'm not an artist professionally, I have issues getting generative AI to make assets, so I'm stuck plodding along in Aseprite and Photoshop making shit graphics lol). I've got one simple game like 80% done and ideas for the next one.
I never got too far down mobile development either. But one of the apps I made it could be super useful to have a mobile app. Describe the ux/ui/user flow, tell it where to find the api endpoints, and wham bam, android app developed.
Does it make perfect code in one shot? Sometimes, but not often; I'll have to nudge it along. Does it make good architectural decisions? Not often on its own; again, I'll nudge it, or even better, I'll spin up another agent to do code reviews and feed the reviews back into the agent building out the app. Keep doing that loop until I feel like the code review agent is really reaching or being too nitpicky.
And holy shit, I've been able to work on multiple things at the same time this way. Like completely different domains, just have different agents running and doing work.
btw, I have a couple of questions just out of curiosity: What tools do you use besides Claude? Do you have a local or preferred setup? and do you know of any communities where discussion about LLM/general AI tool use is the focus, amongst programmers/ML engineers? Been trying to be more informed as to what tools are out there and more up to date on this field that is progressing very quickly.
It’s cheap, easy, and quite effective to passively learn the maps over the course of time.
My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.
https://www.nature.com/articles/s41598-020-62877-0
This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.
Not sure how that maps onto LLM use. I have avoided it almost completely because I've seen colleagues start to fall into really bad habits (like spending days adjusting prompts to try and get them to generate code that fixes an issue that we could have worked through together in about two hours). I can't see an equivalent way to not just start outsourcing your thinking...
I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.
For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.
I saw this first hand with coworkers. We would have to navigate large buildings. I could easily find my way around while others did not know to take a left or right turn off the elevators.
That ability has nothing to do with GPS. Some people need more time for their navigation skills to kick in. Just like some people need to spend more time on Math, Reading, Writing, ... to be competent compared to others.
From this experience, I am not sure whether people really don't know regularly taken routes, or whether they just completely lack confidence in their familiarity with them.
It's amazing to see how he navigates the city. But however amazing it is, he's only correct perhaps 95 times out of 100. And the number will only go down as he gets older. Meanwhile he has the 99.99% correct answer right in the front panel.
Another thing I've done a few times for long journeys is to write down on paper a list of the road numbers, and beside each number the distance that needs to be travelled on that road. Just do the route in an app before you leave and copy the details from that. Having only the list to work off definitely forces you to keep your brain more active.
The kids are using ChatGPT for simple maths...
On a side note, the most hilarious part of it was when I asked Gemini to do something for me in Google Sheets and it kept referring to it as Excel. Even after I corrected it.
Earlier, I had to only keep my phone away and not open Instagram while studying. Now, even thinking can be partially offloaded to an automated system.
Accumulation of cognitive debt when using an AI assistant for essay writing task - https://news.ycombinator.com/item?id=44286277 - June 2025 (426 comments)
And asbestos and lead paint was actually useful.
As LLM use normalizes for essay writing (email, documentation, social media, etc), a pattern emerges where everyone uses an LLM as an editor. People only create rough drafts and then have their "editor" make it coherent.
Interestingly, people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.
In parallel, people start using LLMs to summarize content in a style they prefer.
Both sides of this gradually converge. Content gets explicitly written in a way that is optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write content in a way that encourages a summarizing LLM to summarize as the author intends for certain explicit areas.
Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes baseline for many things, if information is not being communicated effectively then an LLM would be failing at its job. I'm personifying LLMs a bit here but I just mean it in a game theory / incentive structure way.
An interesting visual exercise to see the latent information structure in language is to pixelize a large corpus as a bitmap by translating the characters to binary, then run various transforms on it; what emerges is not a picture of random noise but a fractal-like chaos of "worms" or "waves." This is what LLMs are navigating in their high-dimensional latent space. Words are not just arbitrary symbols but objects on a connected graph.
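For anyone curious, the bitmap step itself is easy to sketch. This is my own rough illustration of the idea, not the commenter's exact procedure (the function name and the 64-bit row width are made up); the transforms you'd run afterwards are left out:

```python
# Unpack each byte of a text corpus into bits and wrap them into fixed-width
# rows, producing a 0/1 bitmap you could then render or run transforms over.

def corpus_to_bitmap(text: str, width: int = 64) -> list[list[int]]:
    """Turn the UTF-8 bytes of `text` into rows of bits, MSB first."""
    bits = []
    for byte in text.encode("utf-8"):
        bits.extend((byte >> shift) & 1 for shift in range(7, -1, -1))
    rows = len(bits) // width          # drop the ragged tail, if any
    return [bits[r * width:(r + 1) * width] for r in range(rows)]

bitmap = corpus_to_bitmap("the quick brown fox jumps over the lazy dog " * 50)
# 2200 bytes -> 17600 bits -> 275 rows of 64 bits each
```

Rendered as black-and-white pixels, the regular high bits of ASCII and the repetition in real text already give it visible vertical banding rather than noise.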
I'm so grateful for AI and always use it to help get stuff done while also documenting the rationale it takes to go from point A to B.
Although it has failed many times, I've had ZERO problems backtracking, debugging its thinking, and understanding what it has done and where it has failed.
We definitely need to bring back courses on "theory of knowledge", the "art of problem solving", etc.
Back when it came out, it was all the rage at my company and we were all trying it for different things. After a while, I realized, if people were willing to accept the bullshit that LLMs put out, then I had been worrying about nothing all along.
That, plus getting an LLM to write anything with meaning takes putting the meaning in the prompt, pushed me to finally stop agonizing over emails and just write the damn things as simply and concisely as possible. I don't need a bullshit engine inflating my own words to say what I already know, just to have someone on the other end use the same bullshit engine to remove all that extra fluff to summarize. I can just write the point straight away and send it immediately.
You can literally just say anything in an email and nobody is going to say it's right or wrong, because they themselves don't know. Hell, they probably aren't even reading it. Most of the time I'm replying just to let someone know I read their email so they don't have to come to my office later and ask me if I read the email.
Every time someone says the latest release is a "game changer", I check back out of morbid curiosity. Still don't see what games have changed.
A general education should focus on structure; all mental models built shall reinforce one another. For specific recommendations: completely replace the current Euler-inspired curricula with one based on category theory. Strive to make all home and class work multimedia, multi-discipline presentations. Seriously teach one constructed meta-language from kindergarten. And stop passing along students who fail; clearly communicate the requirements.
I believe this is vital for students. Think about Student-AI interaction. Does this thing the AI is telling me fit with my understanding of the world, if it does they will accept it. If the student can think structurally the mismatch will be as obvious as a square peg in a round hole. A simple check for an isomorphism. Essentially expediting a proof certificate of the model output.
I don't know that the same makes as much sense to evaluate in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it like "instead of arguing X, argue Y then X" or something.
Interestingly, I find myself doing a mix of both "vibing" and more careful work. The other day I used it to update some code that I cared about and wanted to understand better, so I was more engaged; but I simultaneously used it to make a dashboard for looking at that code's output, which I didn't care about at all so long as it worked.
I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.
Jeremy Howard argues that we should use LLMs to help us learn, once you let it reason for you then things go bad and you start getting cognitive debt. I agree with this.
Thinking everything ML produces is just shorting the brain.
I see AI wars as creating coherent stories. Company X starts using ML and they believe what was produced is valid and can grow their stock. The reality is that Company Y poisoned the ML, and the product or solution will fail, not right away but over time.
The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.
We probably need more studies like this, across more topics with larger sample sizes, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.
That said, I also think it is important to not get an overly negative takeaway from the study. Many of the findings are exactly what you would expect if AI is functioning as a form of cognitive augmentation. Over time, you externalize more of the work to the tool. That is not automatically a bad thing. Externalization is precisely why tools increase productivity. When you use AI, you can often get more done because you are spending less cognitive effort per unit of output.
And this gets to what I see as the study's main limitation. It compares different groups on a fixed unit of output, which implicitly assumes that AI users will produce the same amount of work as non-AI users. But that is not how AI is actually used in the real world. In practice, people often use AI to produce much more output, not the same output with less effort. If you hold output constant, of course the AI group will show lower cognitive engagement. A more realistic scenario is that AI users increase their output until their cognitive load is similar to before, just spread across more work. That dimension is not captured by the experimental design.
We're heading toward AI-first systems whether we like it or not. The interesting question isn't "does AI reduce brain connectivity for essay writing" - it's how we redesign education, work, and products around the assumption that everyone has access to powerful AI. The people who figure out how to leverage AI for higher-order thinking will massively outperform those still doing everything manually.
Cognitive debt is real if you're using AI to avoid thinking. But it's cognitive leverage if you're using AI to think faster and about bigger problems.
Over-reliance on calculators does make you worse at math. I (shamefully) skated through Calculus 3 by just typing everything into my TI-89. Now as an adult I have no recollection of anything I did in that class. I don't even remember how to use the TI-89, so it was basically a complete waste of my time. But I still remember the more basic calculus concepts from all the equations I solved by hand in Calc 1 and 2.
I'm not saying "calculators bad" but misusing them in the learning process is a risk.
And yet people complain that management is out of touch, MBA driven businesses are out of touch, PE firms are out of touch, designers are out of touch with product, look at the touch screen cars (made by people who have never driven one) with reality. I can't even.
"Exactly!"
This is very different from, say, writing an essay I'm gonna publish on my blog under my own name. I would be MUCH more interested in an experiment that isolates people working on highly cognitively demanding work that MATTERS to them, and seeing what impact LLMs do (or don't) have on cognitive function. Otherwise, this seems like a study designed to confirm a narrative.
What am I missing?
If you give up your hands-on interaction with a system, you will lose your insight about it.
When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.
That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite: they're great for skipping 'learning' and just getting the results. Learning comes from doing the grunt work.
I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.
This doesn't seem like a clear problem. Perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?
Using less brain power for a better result doesn't seem like a clear problem either. It might reveal shortcomings in our education system, since these were SAT-style questions. I'm sure calculator users experience the same effects vs. mental mathematics.
Yes, you will be vulnerable should you lose access to AI at some point, but the same goes for a limb. You will adapt.
I'm very curious to see if we start to see things like this as a new skill, requiring a different cognitive style that's not measured in studies like this.
It's a tool, and this study at most indicates that we don't use as much brain power for the specific tasks of coding but do they look into for instance maintenance or management of code?
As that is what you'll be relegated to when vibe coding.
If you are feeling over-reliant on these tools, then a quick fix that's worked for me is to have real conversations with real people. Organise a coffee date if you must.
Seems to focus only on the first part and not on the other end of it.
A similar mess can be found in `Figure 34.`, with an added bonus of "DO NOT MAKE MISTAKES!" and "If you make a mistake you'll be fined $100".
Also, why are all of these research papers always using such weak LLMs to do anything? All of this makes their results very questionable, even if they mostly agree with "common intuition".
It also goes against the main ethos of the AI sect to "stress-test" the AI against everything and everyone, so there's that.
The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?
- Socrates on Writing.
If that’s true, then maybe we could leverage what we know about good management of human subordinates and apply it to AI interaction, and vice versa.
I have actually been improving in other areas instead, like design, general cleanliness of the code, future extensibility, and bug prediction.
My brain is not 'normal' either so your mileage might vary.
So, is it ok for coding? :-)
As long as you're vetting your results just like you would any other piece of information on the internet then it's an evolution of data retrieval.
for me, it's purely a research tool that I can ask infinite questions to
We find that people having to perform mental arithmetics as opposed to people using calculators exhibited more neural activities. They were also able to recall the specific numbers in the equations more.
... So what?
The consequence of making anything easier is of course that the person and the brain is less engaged in the work, and remembers less.
This debate about using technology for thinking has been ongoing for literally millennia. It is at least as old as Socrates, who criticized writing as harming the ability to think and remember.
>>And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”[0]
To emphasize: 'instead of trying to remember from the inside, completely on their own ... not a potion for remembering, but for reminding ... the appearance of wisdom, not its reality.'
There is no question this is a true dichotomy and trade-off.
The question is where on the spectrum we should put ourselves.
That answer is likely different for each task or goal.
For learning, we should obviously be working at a lower level, but should we go all the way to banning reading and writing and using only oral inquiry and recitation?
OTOH, a peer software engineer manager with many Indians in his group said he was constantly trying to get them to write down more of their plans and documentation, because they all wanted to emulate the great mathematician Ramanujan who did much of his work all in his head, and it was slowing down the SE's work.
When I have an issue with curing a particular polymer for a project, should I just get the answer from the manufacturer or search engine, or take the sufficient chemistry courses and obtain the proprietary formulas necessary to derive all the relevant reactions in my head? If it is just to deliver one project, obviously just get the answer and move on, but if I'm in the business of designing and manufacturing competing polymers, I should definitely go the long route.
As always, it depends.
[0] https://newlearningonline.com/literacies/chapter-1/socrates-...
Incidentally how I feel about React regardless of LLMs. Putting Claude on top is just one more incomprehensible abstraction.
A door has been opened that can't be closed and will trap those who stay too long. Good luck!
I do use them, and I also still do some personal projects and such by hand to stay sharp.
Just: they can't mint any more "pre-AI" computer scientists.
A few outliers might get it and bang their head on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter) but between:
* Not being able to mint any more "pre-AI" junior hires
And, even if we could:
* Great migration / Covid era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs
* Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs
* Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"
The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.
We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!
Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).
Order of "dumbing down" effect in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield feels completely different, though?
Just my $0.02, I could be wrong.
/s
This is a non-study.
There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.
I want a life of leisure. I don’t want to do hard things anymore.
Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”
Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754
I hope you’re being facetious, as otherwise that’s a selfish view which will come back to bite you. If you live in a society, what others do and how they behave affects you too.
A John Green quote on public education feels appropriate:
> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.