(See also the famous Pascal line: "I would have written a shorter letter, but I did not have the time.")
P.S. For reference, I've asked an LLM to compress what I wrote above. Here is the output:
When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.
Done, now AI is just lossy pretty-printing.
It's much more useful for answering questions that are public knowledge since it can pull from external sources to add new info.
Ideally there's some selection done, and the fact you're receiving it means it's better than the average answer. But sometimes they haven't even read the LLM output themselves :-(
In my experiments with LLMs for writing code, I find that the code is objectively garbage if my prompt is garbage. If I don't know what I want, if I don't have any ideas, and I don't have a structure or plan, that's the sort of code I get out.
I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done, as I haven't tried using any models lately for anything beyond helping me punch through boilerplate/scaffolding on personal programming projects.
I pointed this out a few weeks ago with respect to why LLMs, in their current state, will never make great campaign creators for Dungeons and Dragons.
We as humans don't need to be "constrained" - ask any competent writer to sit quietly and come up with a novel story plot and they can just do it.
https://news.ycombinator.com/item?id=43677863
That being said - they can still make AMAZING sounding boards.
And if you still need some proof, crank the temperature up to 1.0 and pose the following prompt to ANY LLM:
Come up with a self-contained single room of a dungeon that involves an
unusual puzzle for use with a DND campaign. Be specific in terms of the
puzzle, the solution, layout of the dungeon room, etc. It should be totally
different from anything that already exists. Be imaginative.
I guarantee 99% of the responses will be a very formulaic physics-based puzzle like "The Resonant Hourglass" or "The Mirror of Acoustic Symmetry", etc.
https://old.reddit.com/r/singularity/comments/1andqk8/gemini...
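(If you want to run the experiment programmatically, here's a minimal sketch using the OpenAI Python client; the model name is an arbitrary choice, and the idea is to eyeball a handful of samples for variety:)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    prompt = (
        "Come up with a self-contained single room of a dungeon that involves "
        "an unusual puzzle for use with a DND campaign. Be specific in terms "
        "of the puzzle, the solution, layout of the dungeon room, etc. It "
        "should be totally different from anything that already exists. "
        "Be imaginative."
    )

    # Sample several completions at temperature 1.0 and compare the outputs.
    for i in range(5):
        r = client.chat.completions.create(
            model="gpt-4o",  # arbitrary choice of chat model
            temperature=1.0,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- sample {i + 1} ---")
        print(r.choices[0].message.content)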
At the same time, however, the people who need to use an LLM for this are going to be the worst at identifying the output's weaknesses; e.g., just as I couldn't write Spanish text, I also couldn't evaluate the quality of a Spanish translation that an LLM produced. Taken to an extreme, then, students today could rely on LLMs, trust them without knowing any better, and grow to trust them for everything without knowing anything, never even able to evaluate their quality or performance.
The one area where I do disagree with the author, though, is coding. As much as I like algorithms, code is written to be read by computers, and I see nothing wrong with computers writing it. LLMs have saved me tons of time writing simple functions, so I can speed through a lot of the boring legwork in projects and focus on the interesting stuff.
I think Miyazaki said it best: “I feel… humans have lost confidence”. I believe that LLMs can be a great tool for automating a lot of boring and repetitive work that people do every day, but thinking that they can replace the unique perspectives of people is sad.
For the structure, they are barely useful: writing is about having such a clear understanding that the meaning remains when reduced to words, so that others may grasp it. The LLM won't help much with that, as you say yourself.
They’re great at proofreading. They’re also good at writing conclusions and abstracts for articles, which is basically synthesising the results of the article and making it sexy (a task most scientists are hopelessly terrible at). With caveats:
- all the information needs to be in the prompt, or they will hallucinate;
- the result is not good enough to submit without some re-writing, but more than enough to get started and iterate instead of staring at a blank screen.
I want to use them to write methods sections, because that is basically the exact same information repeated in every article, but the actual sentences need to be different each time. But so far I don't trust them to be accurate with technical details. They're language models; they have no knowledge or understanding.
LLMs may seem like magic, but they aren't. They operate within the confines of the context they're given. The more abstract the context, the more abstract the results.
I expect to need to give a model at least as much context as a decent intern would require.
Often, asking the model "what information could I provide to help you produce better code" and then providing said information leads to vastly improved responses. Claude 3.7 Sonnet in Cline is fairly decent at asking for this itself in plan mode.
More and more I find that context engineering is the most important aspect of prompt engineering.
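A minimal sketch of that loop with the OpenAI Python client (the task, model name, and column name are made-up; the point is the two rounds):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    task = "Write a migration script that backfills the new `status` column."

    # Round 1: ask the model what context it needs before it writes anything.
    ask = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            f"I need help with this task: {task}\n"
            "Before writing any code, list the information I could provide "
            "that would help you produce better code."}],
    )
    print(ask.choices[0].message.content)

    # Round 2: gather what it asked for (schema, framework versions,
    # constraints, sample rows, ...) and send it all back in one prompt.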
I commented in another thread. We're using image and video diffusion models for creative:
https://www.youtube.com/watch?v=H4NFXGMuwpY
Still not a fan of LLMs.
I personally tend not to use AI this way. When it comes to writing, that's actually the exact inverse of how I most often use AI: I throw a ton of information at it in a large prompt, and/or use a preexisting chat with substantial relevant context, possibly have it perform some relevant searches and/or calculations, and then iterate on that over successive prompts before landing on a version that's close enough to what I want for me to touch up by hand.

The end result is clearly shaped by my original thoughts, with the writing being a mix of my own words and a reasonable approximation of what I might have written by hand anyway given more time allocated to the task, and it is not clearly identifiable as AI-assisted. When working with AI this way, asking to "read the prompt" instead of my final output is obviously a little ridiculous; you might as well also ask to read my browser history, some sort of transcript of my mental stream of consciousness, and whatever notes I might have scribbled down at any point.
I think you are overestimating the people who submit this slop. It’s more like “here’s my assignment, what’s the answer”
I think that's the answer:
LLMs are primarily useful for data and text translation and reduction, not for expansion.
An exception is repetitive or boilerplate text or code where a verbose format is required to express a small amount of information.
Note the preamble, FAQs, and that all of the winning entries are now neural networks.
Part of my performance review is indirectly using bloat to seem sophisticated and thorough.
I agree, it's weird for parents to say, "Jump through these hoops, and for every dollar you earn grinding sesame for some company, we'll give you an additional two."
Working and educating yourself is decent and dignified, no? Is this a bad deal?
It's not like professors get real training either, but the guy doesn't seem to have gotten any real pedagogical training.
I guess what I'm driving at is that this guy is awfully young and the essay was a hot take. We should judge it accordingly.
The issue, IMO, is that some people throw in a one-shot, short prompt, and get a generic, boring output. "Garbage in, generic out."
Here's how I actually use LLMs:
- To dump my thoughts and get help organizing them.
- To get feedback on phrasing and transitions (I'm not a native speaker).
- To improve tone, style (while trying to keep it personal!), or just to simplify messy sentences.
- To identify issues, missing information, etc. in my text.
It’s usually an iterative process, and the combined prompt length ends up longer than the final result. And I incorporate the feedback manually.
So sure, if someone types "write a blog post about X" and hits go, the prompt is more interesting than the output. But when there are five rounds of edits and context, would you really rather read all the prompts and drafts instead of the final version?
(if you do: https://chatgpt.com/share/6817dd19-4604-800b-95ee-f2dd05add4...)
Ironically enough, as I was reading your post this is what convinced me it was written by ChatGPT.
FWIW, the initial draft you gave to chatgpt is better than what you posted.
I think you missed the point of the article. They did not mean it literally: it's a way to say that they are interested in what you have to say.
And that is the point that is extremely difficult to make students understand. When a teacher asks a student to write about a historical event, it's not just some kind of ceremony on the way to a degree. The end goal is to make the student improve in a number of skills: gathering information, making sense of it, absorbing it, being critical about what they read, eventually building an opinion about it.
When you say "I use an LLM to dump my thoughts and get help organising them", what you say is that you are not interested in improving your ability to actually absorb information. To me, it says that you are not interested in becoming interesting. I would think that it is a maturity issue: some day you will understand.
And that's what the article says: I am interested in hearing what you have to say about a topic that you care about. I am not interested into anything you can do to pretend that you care or know about it. If you can't organise your thoughts yourself, I don't believe that you have reached a point where you are interesting. Not that you will never get there; it just takes practice. But if you don't practice (and use LLMs instead), my concern is that you will never become interesting. This time is wasted, I don't want to read what your LLM generated from that stuff you didn't care to absorb in the first place.
- To "Translate to language XYZ", and that is not sometimes strightforward and needs iterating like "Translate to language <LANGUAGE> used by <PERSON ROLE> living in <CITY>" and so on.
And the author is right. I use it as a second-language user, so the LLM produces better text than I would myself. However, I am not going to share the prompt, as it is useless to the reader (foreign language) and too messy (bits of draft text). I would compare it to passing a book draft through an editor and translator.
In that way, the prompt is more interesting, and I can’t tell you how many times I’ve gone to go write a prompt because I dunno how to write what I wanna say, and then suddenly writing the prompt makes that shit clear to me.
In general, I’d say that AI is way more useful to compress complex ideas into simple ones than to expand simplistic ideas in to complex ones.
If I spend 1 hour writing 500 words of prompt and then attach X additional rows of data (e.g. rows from a table), and the LLM returns X rows of perfect answers, it shouldn't matter that the output ratio is worse than if I had typed those characters myself.
The important thing is whether within that 1 hour (+ few minutes of LLM processing) I managed to get the job done quicker or not.
It's similar to programming, using LLMs is not necessarily to write better code than I personally could but to write good enough code much faster than I ever would.
Bingo. It can be a rubber duck that echoes your mistakes back. Unfortunately, as other commenters have pointed out, the prompt may not be as interesting/iterative as we might suppose: "Here's the assignment, what's the answer".
The world will be consumed by AI.
Once upon a time only the brightest (and/or richest) went to college. So a college degree becomes a proxy for clever.
Now since college graduates get the good jobs, the way to give everyone a good job is to give everyone a degree.
And since most people are only interested in the job, not the learning that underpins the degree, well, you get a bunch of students that care only for the pass mark and the certificate at the end.
When people are only there to play the game, then you can't expect them to learn.
However, while 90% will miss the opportunity right there in front of them, 10% will grab it and suck the marrow. If you are in college I recommend you take advantage of the chance to interact with the knowledge on offer. College may be offered to all, but only a lucky few see the gold on offer, and really learn.
That's the thing about the game. It's not just about the final score. There's so much more on offer.
Then they fail to actually learn anything, apply for jobs, and try to cheat the interviewers using the same AI that helped them graduate. I fear that LLMs have already fostered the first batch of developers who cannot function without them. I don't even mind that you use an LLM for parts of your job, but you need to be able to function without it. Not all data is allowed to go into an AI prompt, some problems aren't solvable with LLMs, and you're not building your own skills if you rely on generated code/configuration for the simpler issues.
I don't. I think the world is falling into two camps with these tools and models.
> I now circle back to my main point: I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience
Strong disagree with Clayton's conclusion.
We just made this with AI, and I'm pretty sure you don't want to see the raw inputs unless you're a creator:
https://www.youtube.com/watch?v=H4NFXGMuwpY
I think the world will be segregated into two types of AI user:
- Those that use the AI as a complete end-to-end tool
- Those that leverage the AI as tool for their own creativity and workflows, that use it to enhance the work they already do
The latter is absolutely a great use case for AI.
Because those who recruit based on the degree aren't worth more than those who get a degree by using LLMs.
Maybe it will force a big change in the way students are graded. Maybe, after they have handed in their essay, the teacher should just have a discussion about it, to see how much they actually absorbed from the topic.
Or not, and LLMs will just make everything worse. That's more likely IMO.
To actually teach this, you do something like this:
"Here's a little dummy robot arm made out of Tinkertoys. There are three angular joints, a rotating base, a shoulder, and an elbow. Each one has a protractor so you can see the angle.
1. Figure out where the end of the arm will be based on those three angles. Those are Euler angles in action. This isn't too hard.
2. Figure out what the angles should be to touch a specific point on the table. For this robot geometry, there's a simple solution, for which look up "two link kinematics". You don't have to derive it, just be able to work out how to get the arm where you want it. Is the solution unambiguous? (Hint: there may be more than one solution, but not a large number.)
3. Extra credit. Add another link to the robot, a wrist. Now figure out what the angles should be to touch a specific point on the table. Three joints are a lot harder than two joints. There are infinitely many solutions. Look up "N-link kinematics". Come up with a simple solution that works, but don't try too hard to make it optimal. That's for the optimal controls course."
This will give some real understanding of the problems of doing this.
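For what it's worth, here's a minimal Python sketch of steps 1 and 2 for the planar shoulder-elbow pair (link lengths and the target point are made-up values; the plus/minus in the elbow angle is exactly the ambiguity the hint asks about):

    import math

    # Planar two-link arm: shoulder at the origin, link lengths l1 and l2.

    # Step 1, forward kinematics: joint angles -> fingertip position.
    def forward(theta1, theta2, l1=1.0, l2=1.0):
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y

    # Step 2, inverse kinematics: target position -> joint angles.
    # Returns both the "elbow-down" and "elbow-up" solutions.
    def inverse(x, y, l1=1.0, l2=1.0):
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(c2) > 1:
            raise ValueError("target out of reach")
        solutions = []
        for theta2 in (math.acos(c2), -math.acos(c2)):
            theta1 = math.atan2(y, x) - math.atan2(
                l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
            solutions.append((theta1, theta2))
        return solutions

    for angles in inverse(1.2, 0.5):
        print(angles, "->", forward(*angles))  # round-trips to (1.2, 0.5)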
(I know jack-all about robotics, but that sounds like a pretty common assignment, the kind for which an LLM would regurgitate someone else's homework.)
As long as LLM output is what it is, there is little threat of it actually being competitive on assignments. If students are attentive enough to paraphrase it into their own voice I'd call it a win; if they just submit the crap that some data labeling outsourcer has RLHF'd into a LLM, I'd just mark it zero.
If you’re not willing to cross out an entire assignment and return it to the student who handed it in with “ChatGPT nonsense, 0” written in big red letters at the top of it, you should ask yourself what is the point of your assignments in the first place.
But I get it, university has become a pay-to-win-a-degree scheme for students, and professors have become powerless to enforce any standards or discipline in the face of administrators.
So all they can do is give the ChatGPT BS the minimum passing grade and then philosophize about it on their blog (which the students will never read).
I would have thought that giving 0s to correct solutions would lead to successful complaints/appeals.
It's been incredibly blackpilling seeing how many intelligent professionals and academics don't understand this, especially in education and academia.
They see work as the mere production of output, without ever thinking about how that work builds knowledge and skills and experience.
Students who know least of all and don't understand the purpose of writing or problem solving or the limitations of LLMs are currently wasting years of their lives letting LLMs pull them along as they cheat themselves out of an education, sometimes spending hundreds of thousands of dollars to let their brains atrophy only to get a piece of paper and face the real world where problems get massively more open-ended and LLMs massively decline in meeting the required quality of problem solving.
Anyone who actually struggles to solve problems and learn themselves is going to have massive advantages in the long term.
Using an LLM to do schoolwork is like taking a forklift to the gym.
If all we were interested in was moving the weights around, you’d be right to use a tool to help you. But we’re doing this work for the effect it will have on you. The reason a teacher asks you a question is not because they don’t know the answer.
I'm there for the degree. If I wanted to learn and engage with material, I could save $60,000 and do that online for free, probably more efficiently. The purpose of a class writing exercise is to get the university to give me the degree, which I cannot do by actually learning the material (and which, for classes I care about, I may have already done without those exercises), but can only do by going through the hoops that professors set up and paying the massive tuition cost.

If there were a different system where I could just actually learn something (which probably wouldn't be through the inefficient and antiquated university system) and then get a valid certificate of employability for having actually learned it, that would be great. Unfortunately, however, as long as university professors are gatekeepers of the coveted certificate of employability, they're going to keep dealing with this incentive issue.
When I was a kid and got an assignment to write an essay on "why the good forces prevailed in The Lord of the Rings" as a gate check to see if I actually read the novel, I had three choices: (a) read the novel and write the essay myself; (b) find an already-written essay (not an easy task in the pre-internet era, but we had books with essays on the most common topics you could "copy-paste") and risk that the professor was familiar with the source or that someone else used the same source; (c) ask a classmate to give me their essay as a template and rephrase it as my own.
Options (a) and (c) would let me learn about the novel and polish my writing skills.
Today I can ask ChatGPT to write me a 4 pages essay about a novel I've never heard of and call it a day. There's no value gained in the process.
That's a simple example. The problem is that the same applies to programming. Novice programmers will claim that LLMs give them the power to take on hard tasks and program in languages they were not familiar with before. But they are not gaining any skill or knowledge from that experience.
If I ask Google Maps to plot directions from Prague to Brussels, it will yield a list of turns that will guide me to my destination, but I can't by any means claim I've learned the topography of Germany in the process.
I agree that this situation that the author outlines is unsatisfactory but it's mostly the fault of the education system (and by extension the post author). With a class writing exercise like the author describes, of course the students are going to use an LLM, they would be stupid not to if their classmates are using it.
The onus should be on the educators to reframe how they teach and how they test. It's strange how the author can't see this.
Universities and schools must change how they do things with respect to AI, otherwise they are failing the students. I am aware that AI has many potential and actual problems for society but AI, if embraced correctly, also has the potential to transform the educational experience in positive ways.
1. Students are a captive audience. They don't want to be there. It's the law that makes them be there. Even once you're beyond mandatory education this holds true: they were just carried into further education by momentum. They didn't realize they had a real choice or what alternatives were available.
2. A lot of the skills you build in classes aren't useful to you. I spent a lot of time in my English (second language) classes, but it was my use of the internet that really taught me the language. The later years of English classes were just busywork.
In my native language classes I had to write a fair number of essays. The only time this was useful was the final exam of that class. I haven't written a "real" essay since. Even if I did, it would probably be in English and use a different style - something taught to me by forum posts.
Exactly. I tend to think that the role of a teacher is to get the students to realise what learning is all about and why it matters. The older the students get, the more important it is.
The worst situation is a student finishing university without having had that realisation: they got through all of it with LLMs, and probably didn't learn how to learn or how to think critically. Those who did, on the other hand, didn't need the LLMs in the first place.
I figured this out in high school. It can’t be all that uncommon of a thought that if you are already in school and paying and given time to learn, you might as well do so?
My high-school age daughter told me how her small private school solved this problem:
They brought back oral exams.
There aren't a lot of other good options. Written take-home work and online tests have always been fertile ground for cheating. Another benefit of oral exams: you learn to communicate under stress.
I myself went to college to get the meal ticket, not to learn. But since the system was entirely exam based, I was forced to learn.
I'm looking forward to that, but I fear it might be wishful thinking in part.
Also, pre-LLMs I saw too many deep thinkers fail and pretenders succeed. I don't see how LLMs can change that, unless we all collectively grow tired of pretenders and fakers and come to value deep understanding. I just don't see many indications of that.
Different courses and universities vary greatly in teaching quality. Often the examination criteria are loosely correlated with knowledge or skill, and students end up studying 'around' the examination process rather than learning for the sake of it or for enjoyment.
Someone mentioned verbal exams - this is the way to do it, but I only had the pleasure of doing very few in my years. Probably because it's seen as 'too time-consuming' or 'time-wasteful', so the burden is shifted onto the students to waste their time instead.
And then you get the occasional course with a lecturer everyone just pays 100% attention to and engages with, where you almost don't need an exam in the first place.
'The problem of LLMs in academia' is a symptom. You get what you measure.
So, unfortunately the student's behavior is somewhat rational given the incentive structure they operate in.
Even if one won't need that specific know-how after the exams, just realizing how much one can memorize, and trying out some approaches to optimize it, is where people grow and learn.
And so often also on wildly tangential subjects that are purely academic artifacts.
Cheating with LLMs is the inevitable conclusion of being subject to a dragged-out, must-have education system that mostly just cheats the students out of their time and money. That's the friendly way to put it.
I jumped through all the fucking hoops and now I'm paid handsomely, at every corner of the road leading here you see some pompous academic wankers with more medals than a photoshopped North Korean general.
But don't worry; worst case scenario, all of the kids growing up in this environment who are actually learning will build structures to exploit the prompters. I suspect the present situation, where prompters can accidentally find themselves in real jobs, is transient, and building better filters will become a survival imperative for businesses and institutions.
The current situation is that people need to pass exams, get certain GPAs, etc. to have opportunities unlocked for them. Education today is largely about collecting these "stamps" that open doors, not about actual learning.
My students however don't understand that the importance is on the process, not the result. My colleagues do.
this isn't the root of the problem
the root of the problem is that higher education has become, for the most part, an exercise in getting a piece of paper, so that you can check a box on a form or pass first level screening for a job
those who use the tools to accelerate their learning will do so and others who use it just to get by will see their skills atrophy and become irrelevant.
> how many intelligent professionals and academics don't understand this
Mastery of a discipline does not imply any pedagogical knowledge, despite anything one of my childhood heroes, Richard Feynman, might have claimed.
Despite frequent claims otherwise, in my experience and sampling of PhDs and Masters of different sorts and grad students working toward those degrees, an advanced degree does not teach anyone how to lead or teach. This is true of even some of the folks I knew studying Education itself who were a little too focused on their own research to understand anything "so simple."
> cheat themselves out of an education
What's "an education," though? For some people, education is focused on how to learn. For others, it's focused on some kind of certification to get a job. Some of us see value in both. And I'm sure there are other minority opinions as well. We, as a society, can't agree. The only thing we can seem to agree on in the US is that college should be expensive and saddle students with ridiculous debt.
"No worthy use of an LLM involves other human beings reading its output."
If you use a model to generate code, let it be code nobody has to read: one-off scripts, demos, etc. If you want an LLM to prove a theorem, have it generate some Coq and then verify the proof mechanically. If you ask a model to write you a poem, enjoy the poem, and then graciously erase it.
> Since this is a long thread and we're including a wider audience, I thought I'd add Copilot's summary...
Someone called them out for it, several others defended it. It was brought up in one team's retro and the opinions were divided and very contentious, ranging from, "the summary helped make sure everyone had the same understanding and the person who did it was being conscientious" to "the summary was a pointless distraction and including it was an embarrassing admission of incompetence."
Some people wanted to adopt a practice of not posting summaries in the future but we couldn't agree and had to table it.
Nobody will call you a lazy and incompetent coward for taking the default option: Hit reply-all, write your one-sentence response above all 50 quoted emails, hit send.
No, this is just the de-facto "house style" of ChatGPT / GPT models, in much the same way that that particular Thomas Kinkade-like style is the de-facto "house style" of Stable Diffusion models.
You can very easily tell an LLM in your prompt to respond using a different style. (Or you can set it up to do so by telling it that it "is" or "is roleplaying" a specific type-of-person — e.g. an OP-ED writer for the New York Times, a textbook author, etc.)
People just don't ever bother to do this.
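For example, a minimal sketch with the OpenAI Python client (the persona and model name are arbitrary choices):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # The system message overrides the default "house style".
            {"role": "system", "content":
                "You are an op-ed writer for a major newspaper. Write in "
                "short, punchy sentences. No bullet points, no headers."},
            {"role": "user", "content":
                "Explain gimbal lock to a general audience."},
        ],
    )
    print(response.choices[0].message.content)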
I feel like for most of my audiences it provides the proper anchor points for effective skimming while still giving me room to include further detail and explanation so that it's there as desired by the reader.
(And responding to my sibling comment, I also use em dashes and semicolons all the time. Has my brain secretly always been an LLM??)
https://chatgpt.com/share/6817c9f4-ed48-8010-bc3e-58299140c8...
In the real world I would at least remove the em dashes. It's a dead giveaway for LLM-generated text.
Every day I'm made more aware of how terrible people are at identifying AI-generated output, but also of how obsessed they are with GenAI-vestigating things they don't like or wouldn't buy because they're bad.
I'm going to call out what I see as the elephant in the room.
This is brand new technology and 99% of people are still pretty clueless at properly using it. This is completely normal and expected. It's like the early days of the personal computer. Or Geocities and <blink> tags and under construction images.
Even in those days, incredible things were already possible by those who knew how to achieve them. The end result didn't have to be blinking text and auto-playing music. But for 99% it was.
Similarly, with current LLMs, it's already more than possible to use them in effective ways, without obscuring meaning or adding superfluous nonsense, in ways to which none of the author's criticisms apply. People just don't know how to do it yet. Many never will, just like many never learnt how to actually use a PC past Word and Excel. But many others will learn.
I've used LLMs before to document command-line tools and APIs I've made; the output isn't the final product, since I also tweak the writing and fix misunderstandings from the LLM. I don't think the author would appreciate the original prompts, where I essentially just dump a lot of code and give instructions in bullet-point form on what to output.
This generated documentation is immensely useful, and I use it all the time myself. I prefer the documentation to reading the code, because finding what I need at a glance is not trivial, nor is remembering all the conditions, prerequisites, etc.
That being said, the article seems to focus on a use case where LLM is ill-suited. It's not suited for writing papers to pretend you wrote a paper.
> I say this because I believe that your original thoughts are far more interesting
Looking at the example posted, I'm not convinced that most people's original thoughts on gimbal lock will be more interesting than a succinct summary by an LLM.
> I believe that the main reason a human should write is to communicate original thoughts.
in fairness to the students, how does the above apply to school work?
why does a student write, anyway? to pass an assignment, which has nothing to do with communicating original thoughts-- and whose fault is that, really?
education is a lot of paperwork to get certified in the hopes you'll get a job. it's as bereft of intellectual life as the civil service examinations in imperial china. original thought doesn't enter the frame.
The most obvious ChatGPT cheating, like that mentioned in this article, is pretty easy to detect.
However, a decent cheater will quickly discover ways to coax their LLM into producing text that is very difficult to detect.
I think if I was in the teaching profession I'd just leave, to be honest. The joy of reviewing student work will inevitably be ruined by this: there is 0 way of telling if the work is real or not, at which point why bother?
There was always a bunch of realistic options for not actually doing your submitted work, and AI merely makes it easier, more detectable, and more scalable.
I think it moves the needle from 40 to 75, which is not great, but you'd already be holding your nose at student work half of the time before AI, so teaching had to be about more than that (and TBH it was; when I was in school, teachers gave no fucks about submitted work if they didn't validate it with some additional face-to-face or test time).
Do you have any examples of this? I've never been able to get direct LLM output that didn't feel distinctly LLM-ish.
I might argue you couldn't really tell if it was "real" before LLMs, either. But also, reviewing work without some accompanying dialogue is probably rarely considered a joy anyway.
Talk to the student, maybe?
I have been an interviewer in some startups. I was not asking leetcode questions or anything like that. My method was this: I would pretend that the interviewee is a new colleague and that I am having coffee with them for the first time. I am generally interested in my colleagues: who are they, what do they like, where do they come from? And then more specifically, what do they know that relates to my work? I want to know if that colleague is interested in a topic that I know better, so that I could help them. And I want to know if that colleague is an expert in a topic where they could help me.
I just have a natural discussion. If the candidate says "I love compilers", I find this interesting and ask questions about compilers. If the person is bullshitting me, they won't manage to maintain an interesting discussion about compilers for 15 minutes, will they?
It was a startup, and the "standard" process became some kind of cargo-culting of whatever they thought the interviews at TooBigTech were like: leetcode, system design, and whatnot. Multiple times, I could tell in advance that even if a person was really good at passing the test, they wouldn't be a good fit for the position (both for the company and for them). But our stupid interviews got them hired anyway, and guess what? It wasn't a good match.
We underestimate how much we can learn by just having a discussion with a person and actually being interested in whatever they have to say. As opposed to asking them to answer standard questions.
For example, if you already have a theory of your code and you want to write some stuff that is verbose but trivial, it is just more efficient to explain the theory to an LLM and extract the code. I do like the idea of storing the underlying prompt in a comment.
Same for writing. If you truly copy paste output, it's obviously bad. But if you workshop a paragraph 5 or 6 times that can really get you unstuck.
Even in the Euler angles example, that output would be a good starting point for an investigation.
This so much. A writing exercise sharpens your mind, it forces you to think clearly through problems, gives you practice in both letting your thoughts flow onto paper, and in post-editing those thoughts into a coherent structure that communicates better. You can throw it away afterwards, you'll still be a better writer and thinker than before the exercise.
I genuinely believe I had many excellent learning experiences at university, and I can assure you none of them were the times I had to re-write course info and hand it back to them in order to check off a box.
Maybe. If one student does something, they might be wrong; but if 90% of students do it, perhaps the assignment is wrong? Doubling down and saying "we'll force them to do it by hand then!" is rather blindly missing the point here, no?
Those classes are what taught me how to study and really internalize the material. Helped me so much later in college too. I really can't imagine how kids these days are doing it.
Yeah, to recycle a comment [0] from a few months back:
> Yeah, one of their most "effective" uses is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight." [...]we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."
In other words, when the presentation means nothing, why bother?
Asking students for regurgitated info and then being annoyed because they supplied generic regurgitated info is a somewhat telling attitude, no?
Pre-AI, homework was often copied and then individuals just crammed for the tests.
AI is not the problem for these students, it's that many students are only in it for the diploma.
If it wasn't AI it would just be copying the assignment from a classmate or previous grad.
And I imagine the students who really want to learn are still learning because they didn't cheat then, and they aren't letting AI do the thinking for them now.
The question is: Should we limit AI to keep the old way of learning, or use AI to make the process better? Instead of fixing small errors like grammar, students can focus on bigger ideas like making arguments clearer or connecting with readers. We need to teach students to use AI for deeper thinking by asking better questions.
We need to teach students that asking the right questions is key. By teaching students to question well, we can help them use AI to improve their work in smarter ways. The goal isn’t to go back to old methods for iterating but change how we iterate altogether.
> We need to teach students to use AI for deeper thinking by asking better questions.
Same thing here: the whole point of learning critical thinking is that you don't need to ask someone/something else. Teaching you how to ask the LLM to do it for you is not the same as teaching you how to actually do it.
In my opinion, we need to make students realise that their goal is to learn how to do it themselves (whatever it is). If they need an LLM to do it, then they are not learning. And if they are not learning, there is no point in going to school, they can go work in a field.
> You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model, most likely ChatGPT. They are invariably
This is validating. Your imitation completely fooled me (I thought it really was ChatGPT and expected to be told as much in an entirely unsurprising "reveal") and the subsequent description of the style is very much in agreement with how I'd characterize it.
In previous discussions here, people have tried to convince me that I can't actually notice these obvious signs, or that I'm not justified in detecting LLM output this way. Well, it may be the case that all these quirks derive from the definitely-human training data in some way, but that really doesn't make them Turing-test-passing. I can remember a few times that other people showed me LLM prose they thought was very impressive and I was... very much not impressed.
> When someone comments under a Reddit post with a computer-generated summary of the original text, I honestly believe that everyone in the world would be better off had they not done so. Either the article is so vapid that a summary provides all of its value, in which case, it does not merit the engagement of a comment, or it demands a real reading by a real human for comprehension, in which case the summary is pointless. In essence, writing such a comment wastes everyone’s time.
I think you've overlooked some meta-level value here. By supplying such a comment, one signals that the article is vapid to other readers who might otherwise have to waste time reading a considerable part of the article to come to that conclusion. But while it isn't as direct as saying "this article is utterly vapid", it's more socially acceptable, and also more credible than a bald assertion.
I believe that it has improved my writing productivity somewhat, especially when I'm tired and not completely on the ball, although I don't reach for it most of the time (e.g. not for this comment).
Using LLMs to achieve this is just another step in the evolution of a broken education system. The fix? IMO, make the exams for the courses delayed by one semester. So during the exam study-period, the students have to 'catch up' on the lectures they had a few months ago.
Maybe the problem is that the professor doesn't want to read the student work anyway, since it's all stuff he already knows. If they managed to use their prompts to generate interesting things, he'd stop wanting to see the prompts.
The hardest hit industry by AI has been essay writing services.
If anything, it seems they're noticing because the AI is doing a worse job.
I agree with the broader point of the article in principle. We should be writing to edify ourselves and take education seriously because of how deep interaction with the subject matter will transform us.
But in reality, the mindset the author cites is more common. Most accounting majors probably don't have a deep passion for GAAP, but they believe accounting degrees get good jobs.
And when your degree is utilitarian like that, it just becomes a problem of minimizing time spent to obtain the reward.
I can't be the only student who both had the experience of wonderful learning moments AND could see a badly designed assignment a mile off and wasn't motivated to give such a thing my full attention, no?
As a side note, if you want the prompt, simply ask for it in the assignment. Asking students for one thing and then complaining when you don’t get another is insanity.
Whether it be writing or computer programming, or exercising, for that matter, if you aren't willing to put in the work to achieve your goals, why bother?
At first, I thought they didn't care. However, it was so pervasive that it couldn't be the only explanation. I was forced to conclude they trusted ChatGPT more than themselves to argue their case... (Some students did not care, obviously.)
It can be used as a personal tutor. How awesome is it to have a tutor always available to answer almost any question from any angle to really help you understand? Yes, AI won't get everything right 100%, but for students who are still learning basics, it's fair to assume that having an AI tutor can yield far better results than having no tutor at all.
It can also be used as a tool for doing mundane work, so you can focus more on the interesting and creative work. Kind of like a calculator or a spreadsheet. Would math majors become better mathematicians if they had to do all calculations by hand?
I think instead of banning AI, education needs to reform. Teaching staff should focus less time on giving lectures and grading papers (those things can be recorded and automated) and more time on ORAL EXAMS where they really probe student's knowledge and there's no possibility of cheating.
Students can and should use AI to help them prepare. E.g. don't ask AI to write an essay for you, write it yourself and ask it to critique it. Don't ask it to give you answers for a test, ask it to ask you questions on the topic and find gaps in your knowledge. Etc.
And about oral exams ... I agree that these are amazing. In the seventies and eighties, when I was in school, most exams were oral. But our society is really afraid of them, because oral exams are always subjective.
This is especially the case when you are about to complain about style, since that can easily be adjusted by simply telling the model what you want.
But I think there is a final point that the author is also wrong about, but that is far more interesting: why we write. Personally I write for 3 reasons: to remember, to share and to structure my thoughts.
If an LLM is better than me at writing (and it is), then there is no reason for me to write to communicate - it is not only slower, it is counterproductive.
If the AI is better at wrangling my ideas into some coherent thread, then there is no reason for me to do it. This one I am least convinced about.
AI is already much better than me at strictly remembering, but computers have been that since forever; the issue is mostly convenient input/output. AI makes this easier thanks to speech-to-text input.
[0]: See eg. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the....
This is especially true for students.
This is ridiculous. Even if the author has never typed a single character into a prompt box, he can still come to perfectly valid conclusions about the technology just by observing patterns in the outputs that are shoved into his face.
"I wish these astrophysicists had stated up front that they've never created a galaxy. How can they have a well-formed opinion on cosmic structures if they only ever observe them?"
The school should be drilling into students, at orientation, what some school-wide hard rules are regarding AI.
One of the hard rules is probably that you have to write your own text and code, never copy&paste. (And on occasions when copy&paste is appropriate, like in a quote, or to reuse an off-the-shelf function, it's always cited/credited clearly and unambiguously.)
And no instructors should be contradicting those hard rules.
(That one instructor who tells the class on the first day, "I don't care if you copy&paste from AI for your assignments, as if it's your own work; that just means you went through the learning exercise of interacting with AI, which is what I care about"... is confusing the students, for all their other classes.)
Much of society is telling students that everything is BS, and that their job is to churn BS to get what they want. Early popular practices of "AI" usage so far look to be accelerating that. Schools should be dropping a brick wall in front of that. Well, a padded wall, for the students who can still be saved.
I wish it was only at stoplights. But then just a few days ago, I witnessed a totally unnecessary accident. Left-lane got green, and someone in the straight lane noticed the movement but didn't look up and drove right into the car in front of them...
I almost had headaches after intensely thinking about problems and ways to solve them in lambda Prolog. It was most interesting and satisfying to physically feel the effect of high focus combined with applying what was, to me, a new logic.
Computer science at university taught me how to learn and explore new ideas. I might sound like my grandpa, who told me when I was 8 years old that using a calculator would lead to people not being able to count... and here I am saying that LLMs might lead to people who do not know how to write.
Actually, I am a bit concerned that we might produce more text in the short term, because it is becoming cheap to write tons of documentation with LLMs. But those feel like death by Terms and Conditions, i.e. text that no one reads. So not only would we lose our ability to write, but we could seriously affect our ability to read. Sure, LLMs can summarize as well, but then we lose the nuances.
Nature is lazy, but should we be lazy and delegate our ability to think (read/write) to software? Think about it :)
Even when no errors are introduced in the process, the outcome is always bad: 3 full paragraphs of text with bullets and everything where the actual information is just the original 1-2 sentences that the model was prompted with.
I never am happy reading one of those; it's just a waste of time. A lot of the folks doing it are not native English speakers. But for their use case, older tools like Grammarly that help improve the English writing are effective without the problematic decompression downsides of this class of LLM use.
Regardless of how much LLMs can be an impactful tool for someone who knows how to use one well, definitely one of the impacts of LLMs on society today is that a lot of people think that they can improve their work by having an LLM edit it, and are very wrong.
(Sometimes, just telling the LLM to be concise can improve the output considerably. But clearly many people using LLMs think the overly verbose style it produces is good.)
EDIT: Not a jab at the author per se, more that it's a third or fourth time I see this particular argument in the last few weeks, and I don't recall seeing it even once before.
More than communicate, I would say to induce thoughts.
I write poetry here and there (on paper, just for me). I like how exploration through lexical and syntactic spaces can be intertwined with semantic and pragmatic matters. More importantly, I appreciate how careful thoughts play with attention and other uncharted thoughts. The invisible side effects on mental structures happening in the creation of expression can largely outweigh the importance of what is left as a publicly visible artefact.
For a far more trivial example, we can think about how notes in the margin of a book can radically change the way we engage with the reading. Even a careful spare word highlight can be a world of difference in how we engage with the topic. It's the very opposite of "reading" a few pages before realizing that not a single thought percolated into consciousness as it was wandering on something else.
AI usage is a lot higher in my work experience among people who no longer code and are now in business/management roles or engineers who are very new and didn't study engineering. My manager and skip level both use it for all sorts of things that seem pointless and the bootcamp/nontraditional engineers use it heavily. Our college hires we have who went through a CS program don't use it because they are better and faster than it for most tasks. I haven't found it to be useful without an enormous prompt at which point I'd rather just implement the feature myself.
As it turns out, a well written ticket makes a pretty good input into an LLM. However, it has the added benefit of having my original thought process well documented, so sometimes I go through the process of writing a ticket / subtask, even if I ended up giving it to an AI tool in the end.
I actually don't think that it is good at that. I have heard of language teachers trying to use it to teach the language (it's a language model, it should be good at that, right?) and realising that it isn't.
Of course I understand the point of your message, which is that you feel your teachers were not helpful and I have empathy for that.
I don't understand this either. I use it a lot, but I never just use what an LLM says verbatim. It's so incredibly obvious it's not written by a human. Most of the time I write an initial draft, ask Claude to check it and improve it, and then I might touch up a few sentences here and there.
> Vibe coding, that is, writing programs almost exclusively by language-model generation, produces an artifact with no theory behind it. The result is simple: with no theory, the produced code is practically useless.
Maybe I still don't know what vibe coding is, but for the few times when I _can_ use an LLM to write code for me, I write a pretty elaborate instruction on what I want, how it should be written, ... Most of the time I use it for writing things I know it can do and seem tedious to me.
I have to admit I was a bit surprised how bad LLMs are at the "continue this essay" task. When I read it in the blog, I suspected this might have been a problem with the prompt or with using one of the smaller variants of Gemini. So I tried it with Gemini 2.5 Pro and iterated quite a bit, providing generic feedback without offering solutions. I could not get the model to form a coherent, well-reasoned argument. Maybe I need to recalibrate my expectations of what LLMs are capable of, but I also suspect that current models have heavy guardrails, use a low temperature, and have been specifically tuned to solve problems and avoid hallucinations as much as possible.
IMO the core problem is that in many cases this typical belief holds true.
I went to university to get a degree for a particular field of jobs. I'd generously estimate that about half of my classes actually applied to that field of jobs. The other half were required to make me a more "well rounded student" or something like that. But of course they were just fluff to maximize my tuition fees.
There was no university that offered a more affordable program without the fluff. After all, the fluff is a core part of the business model. But there isn't much economic opportunity without a diploma so students optimize around the fluff.
I couldn’t agree more with the sentiment of this article.
Writing yourself, _writing manually_, is much nicer, to hear your unfiltered thoughts, than condensing them through an LLM and getting average-sounding sentences with no soul. To me, LLM writing is soulless. I even started to turn off Grammarly and Copilot, as these were a mere distraction from the actual task at hand: writing. Instead of writing, I was constantly grammar-fixing, and ultimately, nothing got done. I love the gym analogy https://news.ycombinator.com/item?id=43888803 gave.
Off-topic, but on your parenthetical about SMS "wrong number" texts "(That scam doesn't even make sense...)": part of why they do it is what's called "warming up" their sending number so that it's seen as legit by carriers and SMS filters. They're also seeing whether you're a real person who responds, in which case they can come back later with a more sophisticated scam (or re-sell your number, which is now more valuable, to another scammer for that purpose).
But you're right that it doesn't make much sense as a text you might receive naturally. Best thing is to not reply so that you're not feeding the beast.
Incidentally, I used ChatGPT to refresh my memory about how this works, and in its initial response it got it backwards, saying that "warming up" is what it does to your number. You can't trust these things one bit! Your post calls it "automated irresponsibility", and I like that.
I’d argue that making students give generic regurgitated info as an assignment is the actual issue. Make a good assignment…
As always, I reject wholeheartedly what this skeptical article has to say about LLMs and programming. It takes the (common) perspective of "vibe coders", people who literally don't care what code says as long as something that runs comes out the other side. But smart, professional programmers use LLMs in different ways; in particular, they review and demand alterations to the output, the same way you would doing code review on a team.
The implication there is that this is acceptable to pass a robotics class, and potentially this gives them more information about students' comprehension to further improve their instruction and teaching ("...that they have some kind of internal understanding to share").
On that second point, I have yet to see someone demonstrate a "smart, professional programmer use LLMs" in a way where it produces high quality output in their area of expertise, while improving their efficiency and thus saving time for them (compared to them just using a good, old IDE)!
Yes I know the subject area for which I write assessments and know if what is generated is factually correct. If I’m not sure, I ask for web references using the web search tool.
https://chatgpt.com/share/6817c46d-0728-8010-a83d-609fe547c1...
> I didn’t realize how much that could throw things off until I saw an example where the object started moving in a strange way when it hit that point.
Would feel off, because why change the person? And even if it's intended, I'd say it's not formal to do in an assignment.
To play devil's advocate: original code alienates you from many programming jobs. This was true before LLMs, and remains true now. Many developers abhor original code. They need frameworks or packages from Maven, NPM, pip, or whatever. They need to be told exactly what to do in the code, but copy/paste is better, and a package that already does it for you is better still. In these jobs, yes, absolutely let a computer write it for you (or at least anybody who is an untrusted outside stranger). Writing the code yourself will often alienate you from your peers and violate some internal process.
if you can one-shot an answer to some problem, the problem is not interesting.
the result is necessary, but not sufficient. how did you get there? how did you iterate? what were the twists and turns? what was the pacing? what was the vibe?
no matter if with encyclopedia, google, or ai, the medium is the message. the medium is you interacting with the tools at your disposal.
record that as a video with obs, and submit it along with the result.
for high stakes environments, add facecam and other information sources.
reviewers are scrubbing through video in an editor. evaluating the journey, not the destination.
And reviewing video would be a nightmare.
Let's be real... Multi-modal LLMs are scrubbing through the journey :P
That part caught my attention. As an English-as-a-second-language speaker myself, I find it so difficult to develop any form of "taste" in English the same way I have in my mother tongue. A badly written sentence in my mother tongue feels painful in an almost physical way, while bad English usually sounds OK to me, especially when asserted in the confident tone LLMs are trained in. I wish I could find a way to develop such a sense for the foreign languages I currently use.
Libraries are still in every campus, often with internet access.
Traditional media have transitioned into online content farms. The NYT Crossword puzzle is now online. Millions of people do Wordle every day online.
This is just pushback. Every paradigm shift meets pushback while the dust settles and society readjusts and finds equilibrium again.
People say “I saved so much time on perf this year with the aid of ChatGPT,” but ChatGPT doesn’t know anything about your working relationship with your coworker… everything interesting is contained in the prompt. If you’re brain dumping bullet points into an LLM prompt, just make those bullets your feedback and be done with it? Then it’ll be clear what the kernel of feedback is and what’s useless fluff.
How about an emoji-like library designed exclusively for LLMs, so we can quickly condense context and mood without having to write a bunch of paragraphs? Or the next iteration of "txt" speak, for LLMs? What does the next step of users optimising for LLMs look like?
I miss the 80's/90's :-(
Am I alone with this?
No wonder OpenAI accidentally made LLMs even more sycophantic if this is what people want from them.
There's so much bad writing of valuable information out there, the major sins being: burying the lede, poor or absent sectioning, and general verbosity.
In some cases, like EULAs and patents, that's intentional.
The punchline? Bullet point 3 was wrong (it was a PL assignment, and I'm 99% sure the AI was picking up on the word "macro" and regurgitating facts about LISP). 0 points all around, better luck next time.
I wish to communicate four points of information to you. I’ll ask ChatGPT to fluff those up into multiple paragraphs of text for me to email.
You will receive that email, recognize its length and immediately copy and paste it into ChatGPT, asking it to summarize the points provided.
Somewhere off in the distance a lake evaporates.
The worst was the answer to the question "How can we utilize AI to greater effect in our work?". A nice open-ended question where they had a beautiful opportunity to show off how knowledgeable and forward thinking they are, right? Especially considering they're the ones behind the massive AI push our product has gone with as of late.
"You can ask it to write emails for you!" Was the one and only thing these multi-milli/billionaires could come up with. Our core product itself is literally an email interface, and we have an AI email generation feature built in...
I had to turn my webcam off because I genuinely laughed out loud at that response for how insanely elementary and useless it was as an answer. It also showed me these people do literally nothing other than answer emails - and even then they're too bloody lazy and give so little of a shit they can't even do that part themselves.
It's the old joke of the teacher who wants students to try their best and insists that failure doesn't matter. But when the student follows the process to the best of their ability and fails, they are punished, while the student who mostly follows the process and then fudges their answer to the correct one is rewarded.
Personally, I've been enjoying using ChatGPT to explore different themes of writing. It's fun. In my case the goal is specifically to produce artifacts of text different from what I'd normally produce.
I think AI is good in two ways. One is when you use it as a small helper (basic questions, autocompletion...).
The other is for getting started on something you have no idea about (not to teach you, but just to give you an idea of what it is, plus resources to learn more).
If you use AI, all that matters is your ability to specify the problem. Of course, that has always been the case; you can just iterate faster now.
What about using LLMs to refine or sharpen your existing work, similar to a rubber duck? If you're intentional about maintaining and understanding the theory behind the work, I've found it a useful tool.
Simply blaming the models is an easy way out and creates little value. Maybe changing the medium, and the exercise to which it transfers, could be a thing?
It's time to get creative.
Since there is no prohibition on using an LLM, perhaps it should be mandatory to include the prompt as well when one is used. Feels like that could be the seed that sparks curiosity...
It helps me spot the bits that feel flat or don’t add much, so I can cut or rework them—while still getting the benefit of the LLM’s idea generation.
I wish there was some way to do the same for programming. Imagine a classroom full of machines with no internet connection, just a compiler and some offline HTML/PDF documentation of languages and libraries.
It feels like we are getting to this weird situation where we just use LLMs as proxies, and the long, boring text is just for LLMs to talk to each other.
For example:
Person A to LLM A: Give me my money.
LLM A to LLM B: Long formal letter.
LLM B to Person B : Give me my money.
Hopefully, nothing is lost in translation.
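The round trip above is mechanical enough to write down. A toy sketch in Python, where call_llm is a hypothetical stand-in for whatever chat-completion API each side uses (nothing here names a real library):

    # Toy sketch of the proxy chain described above.
    # call_llm is a hypothetical stand-in, not a real library function.
    def call_llm(instruction: str, text: str) -> str:
        raise NotImplementedError("wire up a model of your choice")

    intent = "Give me my money."
    # LLM A inflates the sender's intent into polite boilerplate...
    letter = call_llm("Expand this into a long formal letter:", intent)
    # ...and LLM B deflates it right back for the recipient.
    gist = call_llm("Summarize this letter in one blunt sentence:", letter)
    # With luck, gist still reads: "Give me my money."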
1. “When copying another person’s words, one doesn’t communicate their own original thoughts, but at least they are communicating a human’s thoughts. A language model, by construction, has no original thoughts of its own; publishing its output is a pointless exercise.”
LLMs, having been trained on the corpus of the web, I would argue communicate other humans' thoughts particularly well. Only in avoiding plagiarism are the thoughts of other humans evolved into something closer to "original thought" for the would-be plagiarizer. But yes, at least a straight copy/paste retains the same rhetoric as the original human.
2. I've seen a few advertisements recently leverage "the prompt" as a visual hook.
E.g., a new fast-food delivery service starting their ad with some upbeat music and a visual of somebody typing "Where's the best sushi around me?" into an LLM interface. Then cue the advertisement for the product they offer.
The very first time I enjoyed talking to someone in another language, I was 21. An exchange student at the time, I had a pleasant and interesting discussion with someone in that foreign language. The next day, I realised that I couldn't have done that without actually knowing the language. I felt totally stupid: I had been getting very good grades in languages for years at school without ever caring about actually learning the language. Now it was obvious, but all that time was lost; I couldn't go back and do it better.
A few years earlier, I had this great history teacher in high school. Instead of making us learn facts and dates by heart, she wanted us to actually get a general understanding of a historical event. To actually internalise, absorb the information in such a way that we could think and talk about it, and eventually develop our critical thinking. It was confusing at first, because when we asked "what will the exam be about", she wouldn't say "the material in those pages". She'd be like "well, we've been talking about X for two months, it will be about that".
Her exams were weird at first: she would give us articles from newspapers and essentially ask what we could say about them. Stuff like "Who said what, and why? And why does this other article disagree with the first one? And who is right?". At first I was confused, but eventually it clicked and I started getting really good at this. Many students got there as well, of course. Some never understood and hated her: their way was to learn the material by heart and recite it back to get a good grade. And I eventually realised this: the students who were not good at this were actually less interesting when they talked about history. They lacked this critical thinking; they couldn't form their own opinion or actually internalise the material. So whatever they said on the topic was uninteresting: I had been following the same course, and I knew which events happened and in which order. With the other students for whom it "clicked" as well, I could have interesting discussions: "Why do you think this guy did this? Was it in good faith or not? Did he know about that when he did it? etc."
She was one of my best teachers. Not only did she get me interested in history (which had never been my thing), she got me to understand how to think critically, and how important it is to internalise information in order to do that. I forgot a lot of what we studied in her class, but I never lost the critical thinking. LLMs cannot replace that.
I like reading and writing stories. Last month, I compared the ability of various LLMs to rewrite Saki's "The Open Window" from a given prompt.[1] The prompt follows the 13-odd attempts. I am pretty sure in this case that you'd rather read the story than the prompt.
I find the disdain that some people have for LLMs and diffusion models to be rather bizarre. They are tools that are democratizing some trades.
Very few people (basically, those who can afford it) write to "communicate original thoughts." They write because they want to get paid. People who can afford to concentrate on the "art" of writing/painting are pretty rare. Most people are doing these things as a profession, with deadlines to meet. Unless you are GRRM, you cannot spend decades on a single book waiting for inspiration to strike; you need to work on it. Also, authors producing crap/gold at a per-page rate is hardly something new.
LLMs are probably the most interesting thing I have encountered since the computer itself. These puritans should get off their high horse (or come down from their ivory tower) and join the plebes.
[1] Variations on a Theme of Saki (https://gist.github.com/s-i-e-v-e/b4d696bfb08488aeb893cce3a4...)
Perhaps that's good, perhaps that's bad, but it certainly doesn't really allow him to see much of the appeal... yet
Forcing people to do these things supposedly results in a better, more competitive society. But does it really? Would you rather have someone on your team who did math because it let them solve problems efficiently, or did math because it’s the trick to get the right answer?
Writing is in a similar boat as math now. We’ll have to decide whether we want to force future generations to write against their will.
I was forced to study history against my will. The tests were awful trivia. I hated history for nearly a decade before rediscovering that I love it.
History doesn't have much economic value. Math does. Writing does. But is forcing students to do these things the best way to extract that value? Or is it just the tradition we inherited and replicate because our parents did?
Having spent about two decades reading other humans' "original thoughts", I have nothing else to say here other than: doubt.
Pithy and succinct takes time.
The goal is to make something legible, but the reality is we are producing slop. I'm back to writing before my brain becomes lazy.
There's too much information in the world for it to matter; that, I think, is the underlying reason.
As an example, most enterprise communication nears the levels of noise in its content.
So, why not let a machine generate this noise, instead?
Yes, totally. Unfortunately, it takes time and maturity to understand how this is completely wrong, but I feel like most students go through that belief.
Not sure how relevant it is, but it makes me think of two movies with Robin Williams: Dead Poets Society and Good Will Hunting. In the former, Robin's character manages to get students interested in the material instead of "just passing the exams". In the latter, I will just quote this part:
> Personally, I don’t give a shit about all that, because you know what? I can’t learn anything from you I can’t read in some fuckin’ book. Unless you wanna talk about you, who you are. And I’m fascinated. I’m in.
I don't give a shit about whether a student can learn the book by heart or not. I want the student to be able to think on their own; I want to be able to have an interesting discussion with them. I want them to think critically. LLMs fundamentally cannot solve that.
Exploring a concept-space with LLM as tutor is a brilliant way to educate yourself. Whereas pasting the output verbatim, passing it as one’s own work, is tragic: skipping the only part that matters.
Vibe coding is fun right up to the point it isn’t. (Better models get you further.) But there’s still no substitute for guiding an LLM as it codes for you, incrementally working and layering code, committing to version control along the way, then putting the result through both AI and human peer code reviews.
Yet these all qualify as “using AI”.
We cannot get new language for discussing emerging distinctions soon enough. Without them we only have platitudes like “AI is a powerful tool with both appropriate and inappropriate uses and determining which is which depends on context”.
we agree. mixus makes that easy — across teams, classes, and communities.
That said, I myself am increasingly reading long texts written by LLMs and learning from them. I have been comparing the output of the Deep Research products from various companies, often prompting for topics that I want to understand more deeply for projects I am working on. I have found those reports very helpful for deepening my knowledge and understanding and for enabling me to make better decisions about how to move forward with my projects.
I tested Gemini and ChatGPT on “utilizing Euler angles for rotation representation,” the example topic used by the author in the linked article. I first ran the following metaprompt through Claude:
Please prepare a prompt that I can give to a reasoning LLM that has web search and “deep research” capability. The prompt should be to ask for a report of the type mentioned by the sample “student paper” given at the beginning of the following blog post: https://claytonwramsey.com/blog/prompt/ Your prompt should ask for a tightly written and incisive report with complete and accurate references. When preparing the prompt, also refer to the following discussion about the above blog post on Hacker News: https://news.ycombinator.com/item?id=43888803
I put the full prompt written by Claude at the end of the Gemini report, which has some LaTeX display issues that I couldn't get it to fix: https://docs.google.com/document/d/1sqpeLY4TWD8L4jDSloeH45AI...
Here is the ChatGPT report:
https://chatgpt.com/share/681816ff-2048-8011-8e0f-d8cbad2520...
I know nothing about this topic, so I cannot evaluate the accuracy or appropriateness of the above reports. But when I have had these two Deep Research models produce similar reports on topics I understand better, they have indeed deepened my understanding and, I hope, made me a bit wiser.
The challenge for higher education is trying to decide when to stick to the traditional methods of teaching—in this case, having the students learn through the process of writing on their own—and when to use these powerful new AI tools to promote learning in other ways.
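As an aside, for readers who (like the commenter above) haven't met the example topic: what makes "Euler angles for rotation representation" a good test subject is the gimbal-lock degeneracy, which is compact enough to show directly. A minimal sketch, assuming only numpy and the standard rotation matrices (mine, not taken from either report): at a pitch of 90 degrees, yaw and roll act about the same axis, so two different (yaw, roll) pairs produce the exact same rotation.

    import numpy as np

    def rot_x(a):  # roll
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):  # pitch
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):  # yaw
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # At pitch = 90 degrees the composition depends only on roll - yaw,
    # so one rotational degree of freedom is lost (gimbal lock).
    pitch = np.pi / 2
    R1 = rot_z(0.3) @ rot_y(pitch) @ rot_x(0.1)
    R2 = rot_z(0.4) @ rot_y(pitch) @ rot_x(0.2)
    print(np.allclose(R1, R2))  # True: distinct angle triples, identical rotation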
The kids these days got everything...
Back in HS literature class, I had to produce countless essays on a number of authors and their works. It never once occurred to me that it was anything BUT an exercise in producing a reasonably well written piece of text, recounting rote-memorized talking points.
Through-and-through, it was an exercise in memorization. You had to recall the fanciful phrases, the countless asinine professional interpretations, brief bios of the people involved, a bit of the historical and cultural context, and even insert a few verses and quotes here and there. You had to make the word count, and structure your writing properly. There was never any platform for sharing our own thoughts per se, which was sometimes acknowledged explicitly, and this was most likely because the writing was on the wall: nobody cared about these authors or their works, much less enjoyed or took interest in anything about them.
I cannot recall a single thought I memorized for those assignments back then. I usually passed them with flying colors, but even for me, this was pure and utter misery. Even in hindsight, the sheer notion that this was supposed to make me think about the subject matter borders on laughable. It took astronomical effort just to retain all the information required; where would I have found the energy to go above and beyond and meaningfully evaluate what was being "taught" to me on top of all that? How would it have mattered (specifically in the context of the class)? Whether I actually understood these topics and pondered them deeply was completely unobservable through essay writing, which was the sole method of grading. If anything, the setup biased me against doing so, since it takes potentially unbounded extra time and effort. And since there was approximately no way for our teacher to make me interested in literature either, he had no chance of achieving such lofty goals with me, if he ever actually aimed for them.
On the other side of the desk, he also had literal checklists. Pretty sure that you do too. Is that any environment for an honest exchange of thoughts? Really?
If you want to read people's original thoughts, maybe you should begin by not trying to coerce them into producing some for you on demand. But that runs contrary to the overarching goal here, so really, maybe it's the type of assignment that needs changing. Or the framework around it. But then academia is set in its ways, so really, there's likely nothing you personally can do. You don't deserve to have to sift through copious amounts of LLM-generated submissions; but the task of essay writing does, and you're now the one forced to carry this novel burden.
LLMs caught incumbent pedagogical practices with their pants down, and it's horrifying to see people still in denial of it, desperately trying to reason and bargain their way out of it, spurred on by the institutionally ingrained mutual-hostage scenario that is academia. *
* Naturally, I have absolutely zero formal relation to the field of pedagogy (much like the everyday practice of it in academia, to my knowledge). This of course doesn't stop me from having an unreasonably self-confident idea of how to achieve what you think essay writing is supposed to achieve, so if you want a terrible idea or two, do let me know.
(AI slop). If it's not worth writing, it's not worth reading.
Perfect.
Really? The example used was for a school test. Is there really much original thought in the answer? Do you really want to read the student's original thoughts?
I think the answer is no in this case. The point of the test is to assess whether the student has learned the topic or not. It isn’t meant to share actual creative thoughts.
Of course, using AI to write the answer is contrary to the actual purpose, too, but not because you want to hear the student's creativity; rather, because it fails to serve its purpose as a demonstration of knowledge.
Relying on that to automatically detect their use makes no sense.
From a teaching perspective, if there is any expectation that artificial intelligence is going to stick, we need better teachers. Ones that can come up with exercises that an artificial intelligence can't solve, but are easy for humans.
But I don't expect that to happen. I expect instead text to become more irrelevant. It already has lost a lot of its relevancy.
Can handwriting save us? Partially. It won't prevent anyone from copying artificial intelligence output, but it will make anyone who does so think about what is being written. Maybe think: "do I need to be so long-winded?"