Turns out we weren't opposed to bad metrics! We were just opposed to being measured! Given the chance to pick our own, we jumped straight to the same nonsense.
Along those lines, some techniques I've been dabbling in: 1. Getting multiple agents to implement a requirement from scratch, then combining the best ideas from all of them with my own informed approach. 2. Gathering documentation (requirements, background info, glossaries, etc.), targeting an agent at it, and asking carefully selected questions for which the answers are likely to be useful. 3. Getting agents to review my code, abstracting review comments I agree with into a reusable checklist of general guidelines, then using those guidelines to inform the agents in subsequent code reviews. Over time I hope this will make the code reviews increasingly well fitted to the code base and the nature of the problems I work on.
Working to the point of making yourself sick should not be seen as a mark of pride, it is a sign that something is broken. Not necessarily the individual, maybe the system the individual is in.
I find it crazy to build a complex system to juggle 10 different threads in your brain, including the complexity of the tool itself.
Claiming that you have "ten agents writing code at night" is not the flex you think it is. That's just a recipe for burnout and bad design decisions.
Stop running your agents and go touch grass.
COCOMO, which considers lines of code, is generally accepted as being accurate (enough) at estimating the value of a software system, at least as far as US courts are concerned.
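For reference, Basic COCOMO really is just a power law over size in KLOC; a sketch of the "organic" project-class constants from Boehm's 1981 model (Basic COCOMO only — the Intermediate and Detailed variants add cost-driver multipliers):

```python
def cocomo_effort(kloc: float) -> float:
    """Basic COCOMO effort in person-months, 'organic' project class:
    E = 2.4 * KLOC**1.05 (Boehm, 1981)."""
    return 2.4 * kloc ** 1.05

def cocomo_duration(effort: float) -> float:
    """Nominal schedule in calendar months: D = 2.5 * E**0.38."""
    return 2.5 * effort ** 0.38
```

For the 50k LOC class project mentioned downthread, this predicts roughly 146 person-months of effort over about 16 months — which is the gap between what the model assumes code costs and what a UI generator can emit in an afternoon.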
LOC is essentially only useful to give a ballpark estimate of complexity, and even then only if you compare orders of magnitude, and only between similar programming languages and ecosystems.
It’s certainly not useful for AI generated projects. Just look at OpenClaw. Last I heard it was something close to half a million lines of code.
When I was in college we had a professor senior year who was obsessed with COCOMO. He required our final group project to be 50k LOC (he also required that we print out every line and turn it in). We made it, but only because we built a generator for the UI and made sure the generator was as verbose as possible.
I'm not sure most developers, managers, or owners care about the calculated dollar value of their codebase. They're not trading code on an exchange. By condensing all software into a scalar, you're losing almost all important information.
I can see why it's important in court, obviously, since civil court is built around condensing everything into a scalar.
The linked article does not demonstrate this. It establishes no causal link. One can obviously bloat LOC to an arbitrary degree while maintaining feature parity. Very generously, assuming good-faith participants, it might reflect a kind of average human efficiency within the fixed environment of the time.
Carrying the conclusions of this study from the 80s into the LLM age is not justified scientifically.
Yes, and in fact a lot of the studies that show the impact of AI on coding productivity get dismissed because they use LoC or PRs as a metric and "everyone knows LoC/PR counts are a BS metric." But the better designed of these studies specifically call this out and explicitly design their experiments to use these as aggregate metrics.
That's an anti-signal if we're being honest.
Courts would be the last place to understand something like code quality or software project value....
This seems like a distinction without a difference, unless there actually are any good metrics (which also requires them to be objectively and reliably quantifiable). I think most developers don't really want to measure themselves, it's just that pro-AI people think measurement is necessary to put forward a convincing argument that they've improved anything.
It's not the only metric. But I'm more and more convinced that the people protesting any discussion of it are the ones who... don't ship a lot.
Of course it matters in what code base. What size PR. How many bugs. Maintenance burden. Complexity. All of that doesn't go away. But that doesn't disqualify the metric, it just points out it's not a one-dimensional problem.
And for a solo project, it's fairly easy to hold most of these variables relatively constant. Which means "volume went up" is a pretty meaningful signal in that context.
If you mostly get around on your feet, distance traveled in a day is a reasonable metric for how much exercise you got. It's true that it also matters how you walk and where you walk, but it would be pretty tedious to tell someone that a "3 mile run" is meaningless and they must track cardiovascular health directly. It's fine, it works OK for most purposes, not every metric has to be perfect.
But once you buy a car, the metric completely decouples, and no longer points towards your original fitness goals even a tiny bit. It's not that cars are useless, or that driving has a magic slowdown factor that just so happens to compensate for your increased distance travelled. The distance just doesn't have anything to do with the exercise except by a contingent link that's been broken.
PRs or closed jira tickets can be a metric of productivity only if they add or improve the existing feature set of the product.
If a PR introduces a feature with 10 bugs in other features and I have my agent swarm fix those in 10-20 PRs in a week, my productivity and delivery have both taken a hit. If any of these features went to prod, I have lost revenue as well.
Shipping is not the same as shipping correctly, with minimal introduction of bugs.
For profit failing as a metric, see: Enron.
Yeah but all else isn’t equal, so unless you’re measuring a whole lot more than PRs it’s completely meaningless.
Even on a solo project, something as simple as I’m working with a new technology that I’m excited about is enough to drastically ramp up number of PRs.
I have started using Claude to develop an implementation plan, but instead of making Claude implement it and then have me spend time figuring out what it did, I simply tell it to walk me through implementing it by hand. This means that I actually understand every step of the development process and get to intervene and make different choices at the point of time where it matters. As opposed to the default mode which spits out hundreds of lines of code changes which overloads my brain, this mode of working actually feels like offloading the cognitive burden of keeping track of the implementation plan and letting me focus on both the details and the big picture without losing track of either one. For truly mechanical sub-tasks I can still save time by asking Claude to do them for me.
What I do is use the LLM to ask a lot of questions to help me better understand the problem. After I have a good understanding, I jump into the code and hand-write the core of the solution. With this core work finished (keep in mind that at this point the code doesn't even need to compile), I fire up my LLM and say something like "I need to do X; uncommitted in this repo we have a POC for how we want to do it. Create and implement a plan for what we need to do to finish this feature."
I think this is a good model because I'm using the LLM for the thing it is good at: "reading through code and explaining what it does" and "doing the grunt work". While I do the hard part of actually selecting the right way of solving a problem.
This resonates with me because I've been looking for a way to detect when I would make a different decision than the LLM. These divergence points generally happen because I'm thinking about future changes as I code, and the LLM just needs to pick something to make progress.
Prompts like "list your assumptions and do not write any code yet" help during planning. I've been experimenting with "list the decisions you've made during implementation that were not established upfront in the plan" after it makes a change, before I review it, because when eyeballing the diff alone, I often miss subtle decisions.
Thanks for sharing the suggestion to slow it down and walk the forking path with the LLM :)
I know many will then say, BUT QUALITY, but if you learn to deal with your own and Claude's quirks, you also learn how to validate and verify more efficiently. And experience helps here.
Unless you don't review every generated line manually, and instead rely on, let's say, UI e2e testing, or perhaps unit testing (that the agents also wrote). I don't know, perhaps we are past the phase of "double check what agents write" and are now in the phase of "ship it. if it breaks, let agents fix it, no manual debugging needed!" ?
Serious planning. The plans should include constraints, scope, escalation criteria, completion criteria, and a test and documentation plan.
Enforce single responsibility, CQRS, domain segregation, etc. Make the code as easy for you to reason about as possible. Enforce domain naming and function/variable naming conventions to make the code as easy to talk about as possible.
Use code review bots (Sourcery, CodeRabbit, and Codescene). They catch the small things (violations of contract, antipatterns, etc.) and the large (ux concerns, architectural flaws, etc.).
Go all in on linting. Make the rules as strict as possible, and tell the review bots to call out rule subversions. Write your own lints for the things the review bots are complaining about regularly that aren't caught by lints.
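To make the "write your own lints" point concrete: a custom rule can be a tiny stdlib script. A sketch using Python's `ast` module — the rule choice (flagging bare `except:` clauses) and the function name are just illustrative:

```python
import ast
import sys

def find_bare_excepts(source: str, filename: str = "<string>"):
    """Return (lineno, message) tuples for every bare `except:` clause,
    which silently swallows SystemExit and KeyboardInterrupt."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        # A bare `except:` is an ExceptHandler whose exception type is None.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((node.lineno, "bare except: clause"))
    return findings

if __name__ == "__main__":
    # Usage: python lint_bare_except.py file1.py file2.py ...
    for path in sys.argv[1:]:
        with open(path) as f:
            for lineno, msg in find_bare_excepts(f.read(), path):
                print(f"{path}:{lineno}: {msg}")
```

Rules like this slot naturally into CI or a pre-commit hook, and cover exactly the project-specific patterns the off-the-shelf linters keep missing.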
Use BDD alongside unit tests, read the .feature files before the build and give feedback. Use property testing as part of your normal testing strategy. Snapshot testing, e2e testing with mitm proxies, etc. For functions of any non-trivial complexity, consider bounded or unbounded proofs, model checking or undefined behaviour testing.
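Property testing in particular needs no heavy machinery to try. In a real project you'd reach for a library like Hypothesis (which adds shrinking and smarter generation), but a minimal hand-rolled loop — all names here illustrative — is just:

```python
import random

def run_property(prop, gen, trials=200, seed=0):
    """Minimal property-test loop: generate random inputs,
    fail fast with the counterexample that broke the property."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        assert prop(case), f"property failed for {case!r}"

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Generator: random integer lists of random length.
gen_list = lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 30))]

# Example properties of sorted(): ordered output, idempotence, length preserved.
run_property(lambda xs: is_sorted(sorted(xs)), gen_list)
run_property(lambda xs: sorted(sorted(xs)) == sorted(xs), gen_list)
run_property(lambda xs: len(sorted(xs)) == len(xs), gen_list)
```

The point is that properties ("output is ordered", "roundtrip is identity") are exactly the kind of invariant that survives agent-written implementations, where example-based tests can be quietly gamed.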
I'm looking into mutation testing and fuzzing too, but I am still learning.
Pause for frequent code audits. Ask an agent to audit for code duplication, redundancy, poor assumptions, architectural or domain violations, TOCTOU violations. Give yourself maintenance sprints where you pay down debt before resuming new features.
The beauty of agentic coding is, suddenly you have time for all of this.
I feel like I am a bit stupid for not being able to do this. My process is more iterative. I start working on a feature, then I discover some other function that's slightly related, go refactor it into common code, then proceed with the original task. Sometimes I stop midway and see if this can be done with a library somewhere and go look at examples. I take many detours like these. I am never working on a single task like a robot. I don't want Claude to work like that either. That seems so opposite of how my brain works.
What am I missing?
Many of those tools are overpowered unless you have a very complex project that many people depend on.
The AI tools will catch the most obvious issues, but will not help you with the most important aspects (e.g. whether you project is useful, or the UX is good).
In fact, having this complexity from the start may kneecap you (the "code is a liability" cliché).
You may be "shipping a lot of PRs" and "implementing solid engineering practices", but how do you know if that is getting closer to what you value?
How do you know that this is not actually slowing you down?
The only obvious bit you didn't cover was extensive documentation including historical records of various investigations, debug sessions and technical decisions.
I'm sure these larger models are both faster and more cogent, but it's also clear that what matters is managing their side tracks and cutting them short. Then I started seeing the deeper problematic pattern.
Agents aren't there to increase the productivity multiplier; their real purpose is to shorten context to manageable levels. In effect, they're basically trying to reduce the odds of long-context poisoning.
So, if we boil it down to the probability of any given token triggering the wrong subcontext, it's clear that the greater the context, the greater the odds of a poison substitution.
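That compounding is easy to make concrete. Assuming (purely for illustration — the independence assumption is mine) each token carries an independent probability p of steering into the wrong subcontext, the chance of at least one bad step over n tokens is 1 - (1 - p)^n:

```python
def p_any_bad(p: float, n: int) -> float:
    """P(at least one of n independent tokens goes wrong) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Even a tiny per-token rate compounds over long contexts:
for n in (1_000, 10_000, 100_000):
    print(n, round(p_any_bad(1e-5, n), 3))
```

With p = 1e-5, the failure odds go from about 1% at a 1k-token context to roughly 63% at 100k — which is exactly the argument for splitting work into shorter subcontexts rather than one long session.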
Then that's really the problematic issue every model is going to contend with because there's zero reality in which a single model is good enough. So now you're onto agents, breaking a problem into more manageable subcontext and trying to put that back into the larger context gracefully, etc.
Then that fails, because there's zero consistent determinism, so you end up at the harness, trying to herd the cats. This is all before you realize that these businesses can't just keep throwing GPUs at everything, because the problem isn't compute-bound; it's limited by context/DAG structure the same way a brain is limited.
We've all got intelligence, and we use several orders of magnitude less energy doing mostly the same thing.
There are features you can ship safely behind feature flags or staged releases. As you push on, you find that with the right tooling it can be a lot.
If you break it down often quite a bit can be deployed safely with minimal human intervention (depends naturally on the domain, but for a lot of systems).
I’m aiming to revamp the whole process - I wrote a little on it here: https://jonathannen.com/building-towards-100-prs-a-day/
> if it breaks, let agents fix it, no manual debugging needed!" ?
Pretty trivial to have every Sentry issue have an immediate first pass by AI now to attempt to solve the bug.
Not at all, it's just a skill that gets easier with practice. Generally, if you're in the position to review a lot of PRs, you get proficient at it pretty quickly. It's even easier when you know the context of what the code is trying to do, which is almost always the case when e.g. reviewing your teammates' PRs or the code you asked the AI to write.
As I've said before (e.g. https://news.ycombinator.com/item?id=47401494), I find reviewing AI-generated code very lightweight because I tend to decompose tasks to a level where I know what the code should look like, and so the rare issues that crop up quickly stand out. I also rely on comprehensive tests and I review the test cases more closely than the code.
That is still a huge amount of time savings, especially as the scope of tasks has gone from single functions to entire modules.
That said, I'm not slinging multiple agents at a time, so my throughput with AI is way higher than without AI, but not nearly as much as some credible reports I've heard. I'm not sure they personally review the code (e.g. they have agents review it?) but they do have strategies for correctness.
Some agents will be developing plans for the next feature, but there can sometimes be up to 4 coding.
These are typically a mix between trivial bug fixes and 2 larger but non-overlapping features. For very deep refactoring I'll only have a single agent run.
Code reviews are generally simple since nothing of any significance is done without a plan. First I run the new code to see if it works. Then I glance at diffs and can quickly ignore the trivial var/class renames, new class attributes, etc leaving me to focus on new significant code.
If I'm reviewing feature A I'll ignore feature B code at this point. Merge what I can of feature A then repeat for feature B, etc.
This is all backed by a test suite I spot check and linters for eg required security classes.
Periodically we'll review the codebase for vulnerabilities (eg incorrectly scoped db queries, etc), and redundant/cheating tests.
But the keys to multiple concurrent agents are plans where you're in control ("use the existing mixin", "nonsense, do it like this" etc) and non-overlapping tasks. This makes reviewing PRs feasible.
I'm so conflicted about this. On the one hand I love the buzz of feeling so productive and working on many different threads. On the other hand my brain gets so fried, and I think this is a big contributor.
I have nothing to back up the idea though.
I also have nothing to back it up, but it fits my mental models. When juggling multiple things as humans, it eats up your context window (working memory). After a long day, your coherence degrades and your context window needs flushing (sleeping) and you need to start a new session (new day, or post-nap afternoon).
I prefer focusing mostly on 1 task at a time (sometimes 2 for a short time, or asking another agent some questions simultaneously) and doing the task in chunks, so it doesn't take long until you have something to review. Then I review it, maybe ask for some refactoring, and let it continue to the next step (maybe letting it continue a bit before finishing the review if I'm feeling confident about the code). It's easier to review smaller self-contained chunks, and easier to refer to code and tell the AI what needs changing, because there are fewer relevant lines.
The assumption behind this workflow is that Claude Code can complete tasks with little or no oversight.
If the flow looks like review->accept, review->accept, it is manageable.
In my personal experience, Claude needs heavy guidance and multiple rounds of feedback before arriving at a mergeable solution (if it arrives at one at all).
Interleaving many long running tasks with multiple rounds of feedback does not scale well unfortunately.
I can only remember so much, and at some point I spend more time trying to understand what has been done so far to give accurate feedback than actually giving feedback for the next iteration.
Where I find it incredible is learning new things. I recently started Flutter/Dart dev, and I just ask Claude to tell me about the bits, or to explain things to me. It's truly revolutionary, IMHO; I'm building things in Flutter after a week without reading a book or manual. It's like a talking encyclopaedia, or having an expert on tap. Do many people use it like this, or am I just out of the loop? I always think of Star Trek when I'm doing it. I architected/designed a new system by asking Claude for alternatives, and it gave me an option I'd never considered for a problem. It's amazing for this; after all, it's read all the books and manuals in the world. It's just a matter of asking the right questions.
IMO we may be messing up the economy with AIs. They should be engineering better workers, not being employed to make one person do the work of three, poorly.
The power of AIs to smooth learning and raise expertise, rather than replace it, should be the adaptation goal. Obviously AIs as work assistants are powerful, but all the bullshitting CEOs overselling AI is really damaging at the level of the whole economy.
Particularly because the current marketing leads the next generation to abandon roles that AI bullshitters claim are perfectly replaceable.
It's like the urbanization demographic bomb on steroids.
But it's just a damn good tool, not the apocalypse/the thing that lets you finally fire everyone. So it kind of gets lost in the hype.
Mentioning LLM usage as a distinction is like bragging about using a modern compiler instead of writing assembly. Yeah, it's faster, but so is everyone else's code... Besides, I wouldn't brag about being more productive with LLMs, because it's a double-edged sword: it's very easy to use them, and nobody is reviewing all the lines of code you are pushing to prod (really, when was the last time you reviewed an AI-generated PR that changed 20+ files and added/removed thousands of lines of code?), so you don't know the long game of your changes; they seem to work now, but who knows how it will turn out later?
Outside of work, yeah, everything is fine and there's nothing but the pure pursuit of knowledge and joy.
Yet people look at me like I'm the odd one out when I say I am more productive with a modern compiler like GHC.
but a chart of commits/contribs is such a lousy metric for productivity.
It's about on par with the ridiculousness of LOC implying code quality.
And it's not like I'm blindly committing LLM output. I often write everything myself because I want to understand what I'm doing. Claude often comments that my version is better and cleaner. It's just that the tasks seemed so monumental I felt paralyzed and had difficulty even starting. Claude broke things down into manageable steps that were easy to do. Having a code review partner was also invaluable for a solo hobbyist like me.
That said, by the time I'm happy with it, all the AI stuff outside very boilerplate ops/config work has been rewritten and refined. I just find it quite helpful for getting over that initial hump of "I have nothing but a dream" to the stage of "I have a thing that compiles but is terrible". Once I can compile it, I can refine it, which is where my strengths lie.
Every comment I make is a "really perceptive observation" according to Claude and every question I ask is either "brilliant" or at least "good", so...
The most effective engineers on the brownfield projects I've worked on usually deleted more LOC than they added, because they were always looking to simplify the code and replace it with useful (and often shorter) abstractions.
Especially in brownfield settings, if you do use CC, you really should be spending something like a day refactoring the code for every 15 minutes of work it spends implementing new functionality. Otherwise the accumulation of technical debt will make the code base unworkable by both human and claude hands in a fairly short time.
I think overall it can be a force for good, and a source of high quality code, but it requires a significant amount of human intervention.
Claude Code operating on unsupervised Claude code fairly rapidly generates a mess not even Claude Code can decode, resulting in a sort of technical debt Kessler syndrome, where the low quality makes the edits worse, which makes the quality worse, rinse and repeat.
This one's interesting to me. For a lot of my career, the act of writing the PR is the last sanity check that surfaces any weirdness or my own misgivings about my choices. Sometimes there would be code that felt natural when I was writing it and getting the feature working, and maybe that code survived my own personal round of code review... but having to write about it in plain english for the benefit of someone doing review with less context was a useful spot to do some self-reflection.
> I switched the build to SWC, and server restarts dropped to under a second.
What is SWC? The blog assumes I know it. Is it https://swc.rs/ ? or this https://docs.nestjs.com/recipes/swc ?
What's the point of using it during development, then?
I've started to use git worktrees to parallelize my work. I spend so much time waiting...why not wait less on 2 things? This is not a solved problem in my setup. I have a hard time managing just two agents and keeping them isolated. But again, I'm the bottleneck. I think I could use 5 agents if my brain were smarter........or if the tools were better.
I am also a PM by day and I'm in Claude Code for PM work almost 90% of my day.
Solving new problems is a thing engineers get to do constantly, whereas building an agent infrastructure is mostly a one-ish time thing. Yes, it evolves, but I worry that once the fun of building an agentic engineering system is done, we’re stuck doing arguably the most tedious job in the SDLC, reviewing code. It’s like if you were a principal researcher who stopped doing research and instead only peer reviewed other people’s papers.
The silver lining is if the feeling of faster progress through these AI tools gives enough satisfaction to replace the missing satisfaction of problem-solving. Different people will derive different levels of contentment from this. For me, it has not been an obvious upgrade in satisfaction. I’m definitely spending less time in flow.
Is that how it works? Do managers claim credit for the work of those below them, despite not doing the work?
I hope they also get penalised when a lowly worker does a bad thing, even if the worker is an LLM silently misinterpreting a vague instruction.
Yeah the buck stops with the manager (IMO the direct manager). So I can do some constructive criticism with my dev if they make a mistake, but it's my fault in the larger org that it happened. Then it's my manager's job to work with me to make sure I create the environment where the same mistake doesn't happen again. Am I training well? Am I giving them well-scoped work? All that.
When things go south, no penalization is made. A simple "post-mortem" is written in confluence and people write "action items". So, yeah, no need for the manager to get the blame.
It's all very shitty, but it's always been like that.
Is that the end game? Well why can’t the agents orchestrate the agents? Agents all the way down?
The whole agent coding scene seems like people selling their soul for very shiny inflatable balloons. Now you have twelve bespoke apps tailored for you that you don’t even care about.
Thinking about it, a PR skill is pretty much an antipattern; even just telling the AI to create a PR is faster.
I think some vibe coders should let AI teach them some cli tooling
It checks if I'm in a worktree, renames branches accordingly, adds a Linear ticket if provided, and generates a proper PR summary.
I'm not optimising for how fast the PR is created; I want it to do the menial steps I used to do.
I have a CLI script (`wtq`) that takes whatever is in my clipboard, creates a new worktree, cds into that worktree, installs dependencies, and then starts a Claude session with the query from my clipboard. Once I'm done I can run `wtf` and it does the finishing-up work you described.
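I haven't seen your script, but for anyone curious about the shape of such a wrapper, here's a hypothetical Python sketch. The branch-slug scheme, directory layout, and the `npm install` / `claude` commands are all assumptions — adjust for your own stack:

```python
import pathlib
import re
import subprocess

def branch_slug(query: str) -> str:
    """Derive a git-branch-safe slug from a free-text query (illustrative scheme)."""
    return re.sub(r"[^a-z0-9]+", "-", query.lower()).strip("-")[:40] or "task"

def start_worktree_session(query: str, repo: pathlib.Path) -> None:
    """Hypothetical `wtq`-style helper: new worktree + deps + agent session."""
    slug = branch_slug(query)
    worktree = repo.parent / f"{repo.name}-{slug}"
    # Create a sibling worktree on a fresh branch named after the task.
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", slug, str(worktree)],
        check=True,
    )
    # Install dependencies and launch the agent inside the new worktree.
    subprocess.run(["npm", "install"], cwd=worktree, check=True)
    subprocess.run(["claude", query], cwd=worktree, check=True)
```

The matching `wtf` cleanup would presumably commit, push, open the PR, and run `git worktree remove` on the sibling directory.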
It’s not about the workflow. A skill doesn’t make sense when you have a deterministic, describable workflow; it’s just slower, because you have an interpretation and consumption step in there.
You can just tell claude to turn the skill into a bash script and then alias it to whatever you like.
A skill is useful if you have a variety of use cases that need to be interpreted and share a lot of the same utility.
Who are you creating PR descriptions for, exactly? If you consider it "drudgery", how do you think your coworkers will feel having to read pages of generic "AI" text? If reviewing can be considered "drudgery" as well, can we also offload that to "AI"? In which case, why even bother with PRs at all? Why are you still participating in a ceremony that was useful for humans to share knowledge and improve the codebase, when machines don't need any of it?
> My role has changed. I used to derive joy from figuring out a complicated problem, spending hours crafting the perfect UI. [...] What’s become more fun is building the infrastructure that makes the agents effective. Being a manager of a team of ten versus being a solo dev.
Yeah, it's great that you enjoy being a "manager" now. Personally, that is not what I enjoy doing, nor why I joined this industry.
Quick question: do you think your manager role is safe from being automated away? If machines can write code and prose now better than you, couldn't they also manage other machines into producing useful output better than you? So which role is left for you, and would you enjoy doing it if "manager" is not available?
Purely rhetorical, of course, since I don't think the base premise is true, besides the fact that it's ignoring important factors in software development such as quality, reliability, maintainability, etc. This idea that the role of an IC has now shifted into management is amusing. It sounds like a coping mechanism for people to prove that they can still provide value while facing redundancy.
_Parts_ of what I write are drudgery, which gets automated away. The "why" we talk about in sync, so it's much less of an issue in general.
When I say management, I mean more like a staff engineer or a tech lead, rather than a traditional manager.
What I want from a PR is what's not in the patch, especially the end goal of the PR, or the reasoning for the solution represented by the changes.
> SWC removed the friction of waiting - the dead time between making a change and seeing it.
Not sure how that relates to Claude Code.
> The preview removed the friction of verifying changes - I could quickly see what’s happening.
How Claude is "verifying" UI changes is left very vague in the article.
> The worktree system removed the friction of context-switching - juggling multiple streams of work without them colliding.
Ultimately, there's only one (or two) main branches. All those changes need to be merged back together again, and they need to be reviewed. Not sure how collisions and conflicts are miraculously solved.
Why do people do this? Why do they outsource something that is meant to have been written by a human, so that another human can actually understand what that first human wanted to do, so why do people outsource that to AI? It just doesn't make sense.
This weird notion that the purpose of the thing is the thing itself, not what people get out of the thing. It tracks completely for a person who counts their commits and thinks that shows how productive they are (while acknowledging that it's a poor metric and just shrugging).
We have “Cursor Bot” enabled at work. It reviews our PRs (in addition to a human review)
One thing it does is add a PR summary to the PR description. It’s kind of helpful since it outlines a clear list of what changed in code. But it would be very lacking if it was the full PR description. It doesn’t include anything about _why_ the changes were made, what else was tried, what is coming next, etc.
Most of the time, the PR descriptions it generates for me are great.
I think the issue is you're assuming it's always poor output, which isn't the case. I'm in a much smaller team than you'd expect, so the "why" is talked about in sync more often than not, and it becomes less of a problem.
Says who? The point of the summary is so that I don't have to go look at the diff and figure out what happened.
Helped me surface an important distinction on why it doesn't really happen for me. I think there are three parts to it:
1. I work on only one thing at a time, and try to keep chunks meaty
2. I make sure my agents can run a lot longer so every meaty chunk gets the time it deserves, and I'm not babysitting every change in parallel, that would be horrible! (how I do this is what this post focuses on)
3. New small items that keep coming up / bug fixes get their own thread in the middle of the flow when they do come up, so I can fire and forget, come back to it when I have time. This works better for me because I'm not also thinking about these X other bugs that are pending, and I can focus on what I'm currently doing.
What I had to figure out was how to adapt this workflow to my strengths (I love reviewing code and working on one thing at a time, but also get distracted easily). For my trade-offs, it was ideal to offload context to agents whenever a new thing pops up, so I continue focusing on my main task.
The # of PRs might look huge (and they are to me), but I'm focusing on one big chonky thing a day, the others are smaller things, which together mean progress on my product is much faster than it otherwise would be.
This is an honest take, as someone who is also now doing this.
A colleague has been using Claude for this exact purpose for the past 2-3 months. Left alone, Claude just kept spewing spammy, formulaic, uninteresting summaries. E.g. phrases like "updated migrations" or "updated admin" were frequent occurrences for changes in our Django project. On the other hand, important implementation choices were left undocumented.
Basically, my conclusion was that, for the time being, Claude's summaries aren't worthy of inclusion in our git log. They missed most things that would make the log message useful, and included mostly stuff that Claude could generate on demand at any time. I.e. spam.
I got praised for my commit messages by another team, they asked me how I was making Claude generate them, and I had to tell them I'm just not using Claude for that.
I like writing my own commit messages because it helps me as well, I have to understand what was done and be able to summarise it, if I don't understand quickly enough to write a summary in the commit message it means something can be simplified or is complex enough to need comments in the code.
Overstating things of course. But paying off technical debt never felt so good. And the expected decrease in forward friction has never been so achievable so quickly.
However, I agree with you that commits are a terrible (or an unreliable) metric; more commits do not necessarily equal higher productivity.
Meanwhile in the real world the expectations shift to normalise the 10x and your boss wants to know why your output isn’t 12x like that of Max
Oh really? I enjoy doing one thing at the time, with focus.
AI, as you're using it, OP, isn't making you faster; it is making you work more for the same amount of money. You're burning yourself out for no reason.
If you have the tokens for it, having a team of agents checking and improving on the work does help a lot and reduces the slop.
Some say features. Well, are they used? Are they beneficial in any way for our society or humanity? Or are we producing junk for the sake of producing?