Anyway, in terms of "interesting" work: if you can't copy it from somewhere else, then I don't think LLMs are that helpful, personally. I mean, they can still give you small building blocks, but you can't really prompt them to make the thing.
SQLAlchemy docs are vast and detailed, it's not surprising I didn't know about the feature even though I've spent plenty of time in those docs.
“We thought we were getting an accountant, but we got a poet.”
Frederik Kulager: I had ChatGPT write this segment and tested whether my editor-in-chief would notice. https://open.spotify.com/episode/22HBze1k55lFnnsLtRlEu1?si=h...
LLMs are not sentient. They are designed to make stuff up based on probability.
It also invites reflections on what “sentience” means. In my experience — make of it what you will — correct fact retrieval isn’t really necessary or sufficient for there to be a lived, first-person experience.
The thing is that since it can't think, it's absolutely useless when it comes to things that haven't been done before, because if you are creating something new, the software won't have had any chance to train on what you are doing.
So if you are in a situation in which it is a good idea to create a new DSL for your problem **, then the autocruise control magic won't work because it's a new language.
Now if you're just mashing out propaganda like some brainwashed Soviet apparatchik, maybe it helps. So maybe people who write predictable slop like this Guardian article (https://archive.is/6hrKo) would be really grateful that their computer has a cruise control for their political spam.
*) which, statistically speaking, you might not want to do, but this is about actually interesting work, where it's more likely to happen
**) if that's what you meant
It's so weird how people use LLMs to automate the most important and rewarding parts of the creative process. I get that companies have no clue how to market the things, but it really shows a lack of imagination and self-awareness when a 'creative' repackages slop for their audience and calls it 'fun'.
In this way, I am infinitely more tolerant of minor problems in the output, because I’m not using the tool to create a specific output, I’m using it to enhance the thing I’m making myself.
To be more concrete: let’s say I’m writing a book about a novel philosophical concept. I don’t use the AI to actually write the book itself, but to research thinkers/works that are similar, critique my arguments, make suggestions on topics to cover, etc. It functions more as a researcher and editor, not a writer – and in that sense it is extremely useful.
Your role is between the two: deciding on the architecture, writing the top-level types, deciding on the concrete system design.
And then AI tools help you zoom in and glue things together in an easily verifiable way.
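A minimal sketch of that division of labor, with hypothetical names (`Quote`, `PriceFeed`, `target_weights` are illustrations, not from any particular codebase): the human writes the top-level types and signatures, and the small, easily verifiable function bodies are the kind of glue an AI tool can be asked to fill in and that a reviewer can check at a glance.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Quote:
    """A top-level type the human decides on."""
    symbol: str
    price: float


class PriceFeed(Protocol):
    """An interface the human designs; implementations are glue work."""
    def latest(self, symbol: str) -> Quote: ...


def target_weights(quotes: list[Quote]) -> dict[str, float]:
    """Equal-weight every quoted symbol -- a small, easily verifiable
    body of the sort one might delegate against the types above."""
    if not quotes:
        return {}
    w = 1.0 / len(quotes)
    return {q.symbol: w for q in quotes}
```

The types constrain the generated code enough that verifying it reduces to reading one short function against one short signature.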
I suspect that people who still haven't figured out how to make use of LLMs, assuming it's not just resentful performative complaining (which it probably is), are expecting it to do it all. Which never seemed very engineer-minded.
willy wonka _oh really_ meme
AI isn’t replacing innovation or original thought. It is just working off an existing body of knowledge.
But in general, in the past there was much less specialization. That means each individual was responsible for a lot more stuff, and likely had a lot more varied work day. The apprentice blacksmith didn't just hammer out nail after nail all day with no breaks. They made all sorts of tools, cutlery, horseshoes. But they also carried water, operated bellows, went to fetch coke etc, sometimes even spending days without actually hammering metal at all - freeing up mental energy and separation to be able to enjoy it when they actually got to do it.
Similarly, farm laborers had massively varied lives. Their daily tasks in a given week or month would look totally different depending on the season, with winter essentially being time off to go fix or make other stuff, because you can't do much besides wait for the plants to grow.
People might make the criticism and say "oh but that was only for rich people/government" etc, but look at, for example, old street lights, bollards, etc. Old works tend to be far more ornate and carefully made than they strictly needed to be.
Specialization allows us to curse ourselves with efficiency, and a curse it is indeed. Now, if you're good at hammering nails, nails are all you'll get, morning to night, and you're rewarded the shittier and cheaper and faster you make them, sucking away all incentive to do any more than the minimum.
My problem with it as a scientist is that I can't trust a word it writes until I've checked everything 10 times over. Checking over everything was always the hardest part of my job. Subtle inconsistencies can lead to embarrassing retractions or worse. So the easy part is now automatic, and the hard part is 10x harder, because it will introduce mistakes in ways I wouldn't normally do, and therefore it's like I've got somebody working against me the whole time.
Reviewing code is way harder than writing it, for me. Building a mental model of what I want to build, then building that comes very naturally to me, but building a mental model of what someone else made is much more difficult and slow for me
Feeling like it is working against me instead of with me is exactly the right way to describe it
Calling this varied work "uninteresting" mostly reflects your preferences rather than those of the folks who were doing the work. A lot of the work was repetitive, but the same is true of most jobs today. That didn't stop many people from thinking about something else while they worked.
So LLMs are awesome if I want to say "create a dashboard in Next.js and whatever visualization library you think is appropriate that will hit these endpoints [dumping some API specs in there] and display the results to a non-technical user", along with some other context here and there, and get a working first pass to hack on.
When they are not awesome is if I am working on adding a map visualization to that dashboard a year or two later, and then I need to talk to the team that handles some of the API endpoints to discuss how to feed me the map data. Then I need to figure out how to handle large map pin datasets. Oh, and the map shows regions of activity that were clustered with DBSCAN, so I need to know that Alpha shape will provide a generalization of a convex hull that will allow me to perfectly visualize the cluster regions from DBSCAN's epsilon parameter with the corresponding choice of alpha parameter. Etc, etc, etc.
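For the curious, the alpha-shape criterion alluded to above can be sketched in a few lines. This assumes the usual construction (keep the Delaunay triangles whose circumradius is at most 1/alpha, so setting that radius threshold near DBSCAN's epsilon keeps exactly the triangles spanning within-cluster distances); the triangulation step itself, e.g. `scipy.spatial.Delaunay`, is omitted here.

```python
import math


def circumradius(a, b, c):
    """Circumradius of a triangle: R = (|ab| * |bc| * |ca|) / (4 * area)."""
    la = math.dist(b, c)
    lb = math.dist(a, c)
    lc = math.dist(a, b)
    s = (la + lb + lc) / 2
    # Heron's formula; clamp to avoid tiny negative values from rounding.
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    return la * lb * lc / (4 * area) if area else float("inf")


def in_alpha_shape(triangle, eps):
    """A triangle belongs to the alpha shape when its circumradius
    is at most the radius threshold (treated here as 1/alpha ~ eps)."""
    return circumradius(*triangle) <= eps


# A tight triangle survives eps=1.0; a long skinny one does not.
print(in_alpha_shape([(0, 0), (1, 0), (0.5, 0.9)], eps=1.0))   # True
print(in_alpha_shape([(0, 0), (10, 0), (5, 0.1)], eps=1.0))    # False
```

The point of the comment stands: knowing that this filter (and not a convex hull) is the right tool is exactly the kind of cross-cutting knowledge the LLM didn't surface.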
I very rarely write code for greenfield projects these days, sadly. I can see how startup founders are head over heels over this stuff because that's what their founding engineers are doing, and LLMs let them get it cranking very very fast. You just have to hope that they are prudent enough to review and tweak what's written so that you're not saddled with tech debt. And when inevitable tech debt needs paying (or working around) later, you have to hope that said founders aren't forcing their engineers to keep using LLMs for decisions that could cut across many different teams and systems.
That boilerplate-heavy, skill-less frontend stuff, like configuring a map control with something like react-leaflet, seems to be precisely what AI is good at.
edit: Just spot checked it and it thinks it's a good idea to use convex hulls.
That idea reminds me of "DevOps is to automate fail". Perhaps: "agent collaboration is to automate chaos"
This thesis only makes sense if the work is somehow interesting and you also have no desire to extend, expand, or enrich the work. That's not a plausible position.
Or your interesting work didn't appear in the training set often enough. Currently I am writing a compiler and runtime for a niche modeling language, and every model I've poked for help has been rather useless except for some obvious things I already knew.
1. Look up compiler research in relevant areas
2. Investigate different parsing or compilation strategies
3. Describe enough of the language to produce or expand test cases
4. Use the AI to create tools to visualize or understand the domain or compiler output
5. Discuss architectural approaches with the AI (this might be like rubber duck architecting, but I find that helpful just like rubber duck debugging is helpful)
The more core or essential a piece of code is, the less likely I am to lean on AI to produce that piece of code. But that's just one use of AI.
Either that, or replacing the time with slacking off and not even getting whatever benefits doing the easiest tasks might have had (learning, the feeling of accomplishing something), like what some teachers see with writing essays in schools and homework.
The tech has the potential to let us do less busywork (which is great, even regular codegen for boilerplate and ORM mappings etc. can save time), it's just that it might take conscious effort not to be lazy with this freed up time.
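As a toy illustration of that kind of boilerplate codegen (the schema and the `User` class here are made up for the example): generating dataclass source from a schema description instead of hand-writing the mapping.

```python
# Hypothetical one-table schema; real ORM tooling would reflect this
# from a live database instead.
schema = {"User": {"id": "int", "name": "str"}}


def gen_dataclass(name, fields):
    """Emit Python source for a dataclass matching the schema entry."""
    lines = ["@dataclass", f"class {name}:"]
    lines += [f"    {field}: {typ}" for field, typ in fields.items()]
    return "\n".join(lines)


src = gen_dataclass("User", schema["User"])
print(src)

# Materialize the generated source to show it round-trips.
namespace = {}
exec("from dataclasses import dataclass\n" + src, namespace)
User = namespace["User"]
print(User(id=1, name="ada"))
```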
One implication is that when AI providers claim that "AI can make a person TWICE as productive!"
... business owners seem to be hearing that as "Those users should cost me HALF as much!"
...But it can't, which means your inference has no implications, because it evaluates to False.
Deciding what to build and how to build it is often harder than building.
What LLMs of today do is basically super-autocomplete. It's a continuation of the history of programming automation: compilers, more advanced compilers, IDEs, code generators, linters, autocomplete, code insight, etc.
I kind of use it that way. The LLM is walking a few feet in front of me, quickly ideating possible paths, allowing me to experiment more quickly. Ultimately I am the decider of what matters.
This reminds me a bit of photography. A photographer will take a lot of pictures. They try a lot of paths. Most of the paths don't actually work out. What you see of their body of work is the paths that worked, that they selected.
> interesting work (i.e., work worth doing)
Let me guess, the work you do is interesting work (i.e., work worth doing) and the work other people do is uninteresting work (i.e., work not worth doing).
Funny how that always happens!
You'd think that there'd be someone nice enough to create a library or a framework or something that's well documented and popular enough to get support and updates. Maybe you should consider offloading the boring part to such a project, maybe even pay someone to do it?
And I'm not even sure these 'template adjacent' regurgitations are what the crude LLM is best at, as the output needs to pass some rigorous inflexible test to 'pass'. Hallucinating some non-existing function in an API will be a hard fail.
LLMs have a far easier time in domains where failures are 'soft'. This is why ELIZA passed as a therapist in the '60s, long before auto-programmers were a thing.
Also, in 'academic' research, LLM use has reached nearly 100%, not just for embellishing writeups to the expected 20 pages, but in each stage of the 'game', including 'ideation'.
And if as a CIO you believe that your prohibition on using LLMs for coding because of 'divulging company secrets' holds, you are either strip searching your employees on the way in and out, or wilfully blind.
I'm not saying nobody exists who avoids AI in anything created on a computer, just like some woodworker still handcrafts exclusive bespoke furniture in a time of presses, glue and CNC, but adoption is skyrocketing, and not just because the C-suite pressures their serfs into using the shiny new toy.
I have been exploring local AI tools for coding (ollama + aider) with a small stock market simulator (~200 lines of python).
First I tried making the AI extract the dataclasses representing events to a separated file. It decided to extract some extra classes, leave behind some others, and delete parts of the code.
Then I tried to make it explain one of the actors, called LongVol_player_v1, around 15 lines of code. It successfully concluded that it does options delta hedging, but it jumped to the conclusion that it calculates the implied volatility. I set that as a constant, because I'm simulating specific interactions between volatility players and option dealers. It hasn't yet caught the bug where the vol player buys 3000 options but accounts for only 2000.
When asking for improvements, it is obsessed with splitting the initialization and the execution.
So far I've wasted half a Saturday trying to make the machine do simple refactors. Refactors I could do myself in half an hour.
I'm yet to see the wonders of AI.
My experience is that the hosted frontier models (o3, Gemini 2.5, Claude 4) would handle those problems with ease.
Local models that fit on a laptop are a lot less capable, sadly.
You might still be disappointed but at least you won't have shot your leg off out of the gates!
He should use it as a Stack Overflow on steroids. I assume he uses Stack Overflow without remorse.
I used to have year-long streaks of being on SO; now I'm there around once or twice per week.
and ask it to write a design doc, and to write a work plan of different prompts to implement the change
But I do use LLM/AI: as a rubber duck that talks back, as a Google on steroids, but one whose work needs double-checking. And as a domain discovery tool when quickly trying to get a grasp of a new area.
It's just another tool in the toolbox for me. But the toolbox is like a box of chocolates: you never know what you are going to get.
They don't always listen.
Writing SQL, I'll give ChatGPT the schema for 5 different tables. It habitually generates solutions with columns that don't exist. So, naturally, I append, "By the way, TableA has no column FieldB." Then it just imagines a different one. Or, I'll say, "Do not generate a solution with any table-col pair not provided above." It doesn't listen to that at all.
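One workaround, rather than hoping the model obeys: mechanically compile the generated SQL against the real schema before trusting it. A sketch using an in-memory SQLite copy of the schema (the table and column names echo the made-up TableA/FieldB example above):

```python
import sqlite3

# Recreate the schema the LLM was shown, in a throwaway in-memory DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (id INTEGER, name TEXT)")


def validate(sql):
    """Return None if the query compiles against the schema, else the error.
    EXPLAIN forces SQLite to prepare the statement without running it."""
    try:
        conn.execute(f"EXPLAIN {sql}")
        return None
    except sqlite3.OperationalError as e:
        return str(e)


print(validate("SELECT name FROM TableA"))    # None (compiles fine)
print(validate("SELECT FieldB FROM TableA"))  # error mentioning FieldB
```

Hallucinated columns then fail a deterministic check instead of a prompt-level plea.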
But. "Interesting" is subjective, and there's no good definition for "intelligence", AI has so much associated hype. So we could debate endlessly on HN.
Supposing "interesting" means something like coming up with a new Fast Fourier Transform algorithm. I seriously doubt an LLM could do something there. OTOH AI did do new stuff with protein folding.
So, we can keep debating I guess.
It's also definitely real that a lot of other smart productive people are more productive when they use it.
These sorts of articles and comments here seem to be saying "I'm proof it can't be done." When really there's enough proof that it can be done that you're just proving you'll be left behind.
... said every grifter ever since the beginning of time.