That's not really where I hoped my career would lead. It's like managing junior developers, but without having nice people to work with.
- can write code
- tireless
- have no aspirations
- have no stylistic or architectural preferences
- have a massive, but at the same time poorly defined, body of knowledge
- have no intrinsic memories of past interactions.
- change in unexpected ways when underlying models change
- ...
Edit: Drones? Drains?
- don't have career growth that you can feel good about having contributed to
- don't have a genuine interest in accomplishment or team goals
- have no past and no future. When you change companies, they won't recognize you in the hall.
- no ownership over results. If they make a mistake, they won't suffer.
Whenever I have a model fix something new, I ask it to update the markdown implementation guides I keep in the docs folder of my projects. I add these files to context as needed. I have one for implementing routes, one for implementing backend tests, and so on.
They then know how to do that kind of work in my projects in future sessions.
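A hypothetical sketch of what one of these guides might look like (the file name and contents are invented for illustration, not taken from the comment):

```markdown
<!-- docs/implementing-routes.md (hypothetical example) -->
# Implementing routes

- Define each route in `src/routes/`, one file per resource.
- Validate request bodies with the shared schema helpers before touching the DB.
- Return errors through the common error middleware; never invent ad-hoc JSON
  error shapes.
- After adding a route, add a matching test under `tests/routes/`.
```

The point of the workflow is that whenever the model learns a new project convention, it appends it to the relevant guide, so the knowledge persists even though the model itself has no memory between sessions.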
It's a tool, not an intelligent being
We'll fix that, eventually.
> don't have career growth that you can feel good about having contributed to
Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
> don't have a genuine interest in accomplishment or team goals
Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.
> have no past and no future. When you change companies, they won't recognize you in the hall.
Or on the picket line.
> no ownership over results. If they make a mistake, they won't suffer.
Good deal. Less human suffering is usually worth striving for.
Coincidentally, the hippocampus looks like a seahorse (emoji). It's all connected.
Not to mention; hippocampus literally means "seahorse" in Greek. I knew neither of those things before today, thanks!
- constantly give wrong answers, with surprising confidence
- constantly apologize, then make the same mistake again immediately
- constantly forget what you just told them
- ...
They can usually write code, but not that well. They have lots of energy and little to say about architecture and style. They don't have a well-defined body of knowledge, and they have no experience. Individual juniors don't change, but the cast of your junior cohort regularly does.
But they don't have a grasp of the project's architecture and will reinvent the wheel for feature X even when feature Y already has it, or when there's an internal common library that does it. This is why you need to be the "manager of agents" and stay on top of their work.
Sometimes it's just about hitting ESC and going "waitaminute, why'd you do that?" and sometimes it's about updating the project documentation (AGENTS.md, docs/) with extra information.
Example: I have a project with a system that builds "rules" using a specific interpreter. Every LLM wants to "optimise" it with a pattern that looks correct, but will in fact break immediately when there's more than one simultaneous user - and I have a unit test that catches it.
I got tired of LLMs trying to optimise that bit the wrong way, so I added a specific instruction explaining why it shouldn't be attempted and noting that it has been tried and has failed multiple times. And now they've stopped doing it =)
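The kind of guard-rail instruction described above might look something like this in a project's AGENTS.md (wording, file layout, and the test name are invented for illustration):

```markdown
## Rules interpreter: do NOT "optimise" the evaluation loop

Do not replace the interpreter's per-request rule evaluation with a cached or
precompiled pattern. It looks redundant, but the cached version shares mutable
state and breaks as soon as there is more than one simultaneous user. This has
been attempted and reverted multiple times; the concurrency unit test will
fail if you try it. Leave the loop as-is.
```

Including the *reasoning* and the failure history, not just the prohibition, is what seems to make the instruction stick.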
NO ONE TALKS TO EACH OTHER unless absolutely necessary for work.
We get on Zooms to talk. Even with the person 1 cubicle over.
Who normalized this?!!
But why? Required? Culture? Maybe it's the company?
> You can verify code quality at a glance, and ship with absolute confidence.
> You can confidently trust and merge the code without hours of manual review.
I couldn't possibly imagine that going wrong.
It's clear now that "agents" in the context of "AI" is really about answering the question "How can we get users to make 10x more calls to our models in a way that doesn't feel like we're just squeezing money out of them?" I've seen so many people who think that setting some "agents" off on a minutes-to-hours-long task, which basically just drives up internal KPIs at LLM providers, is cutting-edge work.
The problem is, I haven't seen any evidence at all that spending 10x the number of API calls on an agent gets you anything closer to useful than last year, when people were purely vibe coding all the time. At least then people would interactively learn about the slop they were building.
It's astounding to watch a coworker walk through a PR with hundreds of newly added files and repeatedly say "I'm not sure if these actually work, but it does look like there's something here."
Now I'm sure I'll get some fantastic "no true Scotsman" replies about how my coworkers must not be skilled enough, or how they need to follow some xyz pattern, but the entire point of AI was to remove the need for specialized skills and make everyone 10x more productive.
Not to mention that the shift in focus to "agents" is also useful for distracting from the clearly diminishing returns on foundation models. I just hope there are enough people who still remember how to code (and, in some cases, think) to rebuild when this house of cards falls apart.
At least for programming tools, almost everything sold that way (since long before generative AI) actually succeeds or fails not on whether it eliminates the need for specialized skills and makes everyone more productive, but on whether it further rewards specialized skills, making the people who devote time to learning it more productive than if they had devoted the same time to learning something else.
Nice? I thought all sycophantic LLMs were exceedingly nice.
Someone gave me a great tip, though: at least for ChatGPT, there's a setting where you can change its personality to "robot". I guess it affects the system prompt in some way, but it basically fixes the issue.
Sadly, this is not sustainable and I am not sure what I'm going to do.
With a human, you give them feedback or advice, and generally by the second or third time the same kind of thing happens, they can figure it out and improve. With an LLM, you have to specifically set up a convoluted (and potentially expensive, in both money and electrical power) system in order to provide MANY MORE examples of how to improve, via fine-tuning or other training actions.
The only way that an AI model can "learn" is during model creation, which is then fixed. Any "instructions" or other data or "correcting" you give the model is just part of the context window.
The burden of human interaction is removed from building.
I just need some time by myself to recharge after all the social interactions.