That's AI.
In my case, the value I get from LLMs for writing is consolidation. Use them to make outlines, not prose. One example: I record voice memos while driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.
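A minimal sketch of what that memo-to-outline step could look like, assuming the OpenAI Python SDK with an API key in the environment; the model names, prompt, and file name here are illustrative placeholders, not the commenter's actual setup:

```python
# Sketch: turn a voice memo into an outline, not finished prose.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def memo_to_outline(audio_path: str) -> str:
    # 1. Transcribe the voice memo to raw text.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # 2. Consolidate the rambling transcript into a structured outline;
    #    the actual writing stays with the author.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Turn this spoken memo into a structured outline of the "
                    "speaker's points. Do not add ideas or polish the wording."
                ),
            },
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

# e.g. print(memo_to_outline("morning_jog_memo.m4a"))
```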
AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.
It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.
now imagine he can be scaled indefinitely
you thought software was bad today?
imagine Microsoft Teams in 5 years time
When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or ways to troubleshoot a problem I am having. I don't always follow its advice, either, but that depends on how much I understand the reply. Sometimes it outputs something that makes sense based on my current skill level; sometimes it proposes things that I know nothing about, in which case I ask it to break them down further so I can go search the Internet for more info and see if I can learn more, which pushes the limits of my skill level.
It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.
When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their use of it. They want the AI to produce the whole project, as opposed to just using it as a second brain to offload some mental chunking. That's where Gen AI fails, and the user ends up spending all their time correcting convoluted mistakes caused by confabulation. Unless they're making a simple monolithic program or script, that is, but even then there are often hiccups.
Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
It works, but it's simply not what most people want. If you love to code, then you've just abstracted away the most fun parts and now only have the boring parts left. If you love to manage, well, managing actual humans and seeing them grow and become independent is much more fulfilling.
On a side note, I feel like prompting and context management is something that comes more easily to me personally as a person with ADHD, as I am already used to working with forms of intelligence that are different from my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I am stereotyping a bit here, but it's still an interesting observation.
Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.
People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.
I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.
Hell, I have preferred ligature fonts for different languages.
Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.
1) People who reject it completely, for whatever reason.
2) People who use it lazily and produce a lot of garbage (let's be honest, this is probably going to happen a lot, which is maybe why group #1 hates this future; reminds me of the outsourcing era).
3) People who selectively use it to their advantage.
no point in groups 1 and 3 trying to convince each other of anything.
If LLMs eventually become useful to me, I’ll adopt LLMs, I suppose. Until then, well, fool me once…
AI is like the absolute worst outsourced devs I've ever worked with - enthusiastically saying "yes I can do that" to everything and then delivering absolute garbage that takes me longer to fix/convince them to do right than it would have taken for me to just do it myself.
I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.
I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.
The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.
I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.
[0] Aside from internet comments of course, which are mostly stream of consciousness.
https://en.wikipedia.org/wiki/David_(Michelangelo)#Process
Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Even research assistance was something many authors simply could not afford.
I don't see where the article says he worked alone on David. It does seem that he used a miniature (bozzetto) and then scaled up with a pointing machine. One possibility is he made the miniature and had assistants rough out the upscaled copy before doing the fine work himself. Essentially, using the assistants to do the work you'd do on a band saw if you were carving out of wood.
> I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Restricting to non-commercial authors would narrow it down, since hiring assistants to write drafts probably only makes financial sense if the cost of the assistant is less than the value of the time you'd spend drafting.
Alexandre Dumas is maybe a bit higher brow than Stephen King.
> He founded a production studio, staffed with writers who turned out hundreds of stories, all subject to his personal direction, editing, and additions. From 1839 to 1841, Dumas, with the assistance of several friends, compiled Celebrated Crimes, an eight-volume collection of essays on famous criminals and crimes from European history. https://en.wikipedia.org/wiki/Alexandre_Dumas
But in general I agree, drafts are often the heart of the work and it's where I'd expect masters to spend a lot of their time. Similarly with the statue miniatures.
I like the perspective of "choices" during creation. An essential quality of real art is that it is the result of thousands or millions of deliberate choices. That is what we admire about the art. If you mostly use a machine (or any other means that decides instead of you and for you) for creation, you as a creator simply make fewer choices.
In this case, you delegate many of your experienced/crazy/hard decisions to the model (which is built on such decisions already made by other artists, but combines them in a random way). It is like decompressing a JPEG: some things are just hallucinated by the machine.
From the perspective of pure human creativity, the result is thin and diluted, even if it looks deliberate. In my opinion art lovers will seek out the dense art made by humans, maybe even asking for some kind of "proof" of the human-based process. What do you think?
When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha
Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.
VR/AR is tech that is nowhere near ready, so it's premature to make judgements there. I am bullish on XR.
In terms of value-add, many times technology needs to be invented first before we can decide whether it's worthwhile. Nobody asked for computers in every home, smartphones in every pocket, etc.
I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.
As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey, did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30-minute commute and pick up your girlfriend! Want me to send her a notification that you want to do this?"
People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.
MCP will probably kill the web as we know it.
It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.
And the richer you are, the more freedom you'll have to opt out and manage your own decisions.
This. I need access to banking, maps and 2FA. If I could use a dumb phone with just a camera, GPS and WhatsApp, I would use it.
Not to take any words it gives, but to read what it says, decide whether those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.
- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.
- School (sometimes library) writing workshops. This helps students develop bonds with their peers and helps both students: the ones giving feedback are learning to be better editors.
Both of these offer a lot of value in terms of community building and also getting feedback from people vested in the craft of writing.
I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.
I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic have a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.
Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.
But I still don't like that the same model struggles w/ my projects...
We are a publisher that succeeded thanks to the highest-quality translations. Our readers appreciate that and ask for it. The Czech language is very rich, and these machines are not able to make the most of it. The non-fiction sphere also needs a lot of fact-checking, e.g. in local and field-specific terminology. So even though we can imagine the translation process being technically shortened by machine translation, it would probably ruin our reputation in the long term.
At least for now...
Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce".
Perhaps LLMs can move someone's results from the 25th percentile to the 50th for a single task. (Although there's probably a much more nuanced discussion to be had about that: people with poor writing skills can still have unique, valuable, and interesting perspectives that get destroyed in the median-ization of current LLM output.) But after a couple years of using LLMs regularly, I fear that whatever actual talent they have will atrophy below their starting point.
However, good writing is a skill you can get good at with enough practice. Read a lot, write a lot of garbage, consult more experienced writers, and eventually you will write readable articles. Do 10-100x more of that and you will be pretty great. The rest is skill and experience in many fields other than writing, which will inform how to write even better. Some of it is intelligence, luck, great mentors, and perhaps something we call talent. As with most things, you can get far just by working diligently a lot.
That does, to my mind, explain all the vengeful "haw haw, you're all going to get left behind" comments from some LLM proponents. They actually do get a benefit from LLMs, unlike the highest part of the scale (who are overrepresented on HN), without realizing what that implies, and they think they can overtake the highest part of the scale by using them. Well, we'll see.
Writing is entirely different, and for some reason generic writing, even when polished (ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end user of a product cares 0 or next to 0 about AI code.
Very interesting point!
Coding, robotics, navigation of constrained data spaces such as translation, tagging, indexing, logging, parsing, data transformations… those are all strong target candidates for transformer architecture automation.
Creative thought is not.
To be fair, I also don't like using Copilot when working on code. In many cases it turns into a weird experience when the agent generates the next line(s) and I basically become a discriminator judging if the thing really understands my problem and solution. To be honest, it's boring even if eventually it might make me turn in code faster.
With that said, I cannot ignore that LLMs are happening, and this is the future. The models keep improving but more importantly, the ecosystem keeps improving with things like MCP and better defined context for LLM tools.
We might be looking at a somewhat grim prospect. But like it or not, this is the future. Adapt and survive.
For me, survival means:
- continuing to do my best at the language level, even if more people gradually become satisfied with less
- believing that education, critical thinking and evidence-based principles are at the core of humanity's progress, and that one day they will make a comeback
- being OK with a smaller income rather than exchanging it for creating bullshit
The adaptation for me means:
- generally: staying open-minded
- understanding and somehow accepting that the prospect is a bit grim, without falling into some extreme of doom thinking
- exploring new ways to augment human-oriented creativity (with or without these tools)
What do you think?
Well, it's a trap. You see a snippet is right, you accept it. Next time you do it faster, and faster. And then you get one that seems right but it's not. If you're lucky, it will cause an error.
Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit over the rest of their careers, even when taking the new tech into account.
Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.
And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words to the made up ones. And it can't always tell when it's made such a change. And sometimes it does that even if you're just mixing existing languages like French or English. So you can make it useless for spellcheck by touching more than one language.
I do keep trying, despite the fact my stuff has been stolen and is in the training data, because of all the proselytising, but right now... No AI is useful for my writing. Not even just for grammar and spelling.