Task Allocation: Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
Conflict Resolution: If two engineers have a disagreement or are blocked by each other, EMAI steps in. Using its vast knowledge base and understanding of human psychology (aided by its training data), it mediates discussions, ensuring a harmonious team environment.
Training & Upgradation: EMAI monitors the latest tech trends. If a new tool or technology emerges in the market, it identifies which team members would benefit most from training and automatically schedules online courses or tutorials for them.
End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven; they also include motivational feedback designed to boost morale and foster continuous learning."
It'll be a cold day in hell before I work 5 minutes under those conditions.
I feel like there's a Dilbert (pre-cancellation) strip in this, with the AI ending up assigning everyone no work, because everyone's "preferred working hours" are no hours, and getting paid to do nothing leads to the most team satisfaction.
Gleeful acquiescence?
“Well, Manager AI said we didn’t have to do work anymore. I’m not the sort to be insubordinate.”
In 2.0 they'll catch on and implement performance improvement plans that lead to separation.
If I try to pretend this thing has good intentions at heart, I think it'd be great for some folks to improve based on AI recommendations where they can shed their ego. Harder to do in front of a project manager.
I would expect AI regulation to speed up real quick if it starts making middle management redundant :-)
Wait, we are??
It's going to be a very slow burn (if you aren't just replaced with an AI), and no point will seem worth quitting over, until you're managed by the AI.
It's not true though because unlike frogs, humans are capable of making judgments based on the first and second derivatives.
It’s a smooth slope towards more AI assistance.
Unless more people start thinking like you.
[1]: This is more creepy than it initially sounds. The notes are detailed but paraphrased in a formal way. Every on topic question or off topic remark is there, with attribution.
It's still better than the current human managers, who aspire to this, but fail due to incompetence, laziness and petty biases.
There's no machine learning model that is truly objective. They're all biased by their (usually human-generated) datasets. It's impossible to account sufficiently for every scenario in a training set, so these models just give an objective veneer to the biases of those who created the dataset.
This phenomenon is well documented with predictive models for crime.
Many arrests happen in low-income areas. The data on arrests skews towards those areas. The predictive models are trained on that data. Using that data, police make more arrests in low-income areas. Those arrests get added to the data set. Rinse and repeat.
Replace police, arrests, and low-income with anything and it's still true.
For example: company leadership, promotions, race.
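A toy simulation makes that loop concrete. All the numbers here are made up for illustration: two areas with identical true offence rates, but patrols allocated according to past arrest counts, so the recorded data compounds an initial skew that says nothing about the underlying reality.

```python
# Hypothetical feedback-loop sketch: both areas have the SAME true rate,
# but the "predictive" model routes most patrols to wherever past arrests
# were highest, so the recorded skew grows on its own.

TRUE_RATE = 0.05                  # identical true offence rate in both areas
arrests = {"A": 60, "B": 40}      # historical skew baked into the data
OBSERVATIONS_PER_PATROL = 10      # stops/checks each patrol performs

for year in range(20):
    # Allocation driven by past arrest counts: 80 patrols to the leader, 20 to the rest.
    leader = max(arrests, key=arrests.get)
    allocation = {leader: 80, ("B" if leader == "A" else "A"): 20}
    for area, patrols in allocation.items():
        # More patrols means more recorded arrests, despite equal true rates.
        arrests[area] += round(patrols * OBSERVATIONS_PER_PATROL * TRUE_RATE)

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Area A's share of recorded arrests: {share_a:.0%}")
```

Starting from a 60/40 split, the recorded share drifts towards the 80/20 patrol allocation, purely because the model's own outputs feed its future inputs.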
An unintentional feedback loop amplifying the thing being measured is a problem, yes, but it doesn't stem from the predictions themselves; it's the decisions and actions informed by the predictions that can amplify the problem instead of reducing it.
But those not in the know can and do assume that it’s a computer program, so it can’t be biased, which is not the case for predictive models.
Especially if you use data created as a direct product of that predictive model to further tune that model.
That bias would accumulate.
People need a concrete goal or a specific feature to work on, one that takes time and space. You can't usually carve work into easily measurable units where everything is a story point or a widget. In fact I'd say most of the time the real work isn't like that at all. That's just not how software development works, at least in environments conducive to real software engineering.
Any EMAI outputs or recommendations that adhere to and respect the reality of actual software engineering will be of limited value to a business head or a scrum master. It'll offer things like how to improve your workflow, or what tools would benefit your workflow. These things don't translate into more story points any time soon, and certainly not within a matter of days...
> Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
> End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning.
If it's allocating tasks this way from a backlog and trying to give you daily reports, this just sounds like something that would be of interest to a ticket farm rather than to a tech company that is really building software.
Prompt injection is an attack against AI gullibility.
Gullibility is not a characteristic of competent managers. One of the most important jobs of managers is to be able to see through bullshit and figure out what's actually going on.
I am extremely skeptical that the current generation of AI is capable of doing that.
Time to re-read Marshall Brain's "Manna".
TLDR: Fiction about dystopic-vs-utopic outcomes from AI-management.
"Automated management software replaces fast food workers" ---> "Utopians Bought Australia"
Hard to think of other stories that manage that tone. Huxley's Island:
"Self-serving journalist relates psychedelic ethnography" ---> "Magic Mushrooms Cannot Save You From the Forces of Capital"
and Cory Doctorow's Down and Out in the Magic Kingdom, maybe:
"Disenchanted immortal undergoes social death" ---> "Cowboy Hats vs. Entropy"
I was a product manager but not really a project manager. I was also a tech lead: when things went awry and someone couldn't figure out how to get it unstuck I would unstick it, but in general the system just produced functional software. My primary inputs were sketches at the start and ongoing client feedback and so on.
All the workers were in different locations, in completely different timezones, and they all reported a high level of satisfaction. So I don't even think you need AI, you just need better procedures.
https://docs.google.com/presentation/d/1eJz43N_o8adJBDBtovzu...
Closely related:
https://benkoworks.com/your-templating-engine-sucks-and-ever...
Taking orders, yes, but there are no moving parts or liability there.
For software you also generate N solutions and evaluate each of them, to shore up weaknesses in the current, very nascent, technology. Microsoft is sure to be going down this path.
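A rough sketch of that generate-and-evaluate loop. Everything here is a hypothetical stand-in, not any real API: `generate_candidate` fakes a code-generation backend by returning slightly different implementations of a square function, and the "evaluation" is just the fraction of test cases each candidate passes.

```python
from typing import Callable, List, Tuple

def generate_candidate(i: int) -> Callable[[int], int]:
    # Stand-in for a code-generation backend: each "sample" is a slightly
    # different implementation; only the offset-0 one is correct.
    offset = [1, -1, 0, 2][i % 4]
    return lambda x: x * x + offset

def score(candidate: Callable[[int], int], cases: List[Tuple[int, int]]) -> float:
    # Fraction of test cases the candidate passes.
    return sum(candidate(x) == want for x, want in cases) / len(cases)

def best_of_n(n: int, cases: List[Tuple[int, int]]):
    # Generate N candidates, keep the one with the highest test score.
    scored = [(score(generate_candidate(i), cases), generate_candidate(i))
              for i in range(n)]
    return max(scored, key=lambda pair: pair[0])

cases = [(2, 4), (3, 9), (5, 25)]
best_score, best = best_of_n(4, cases)
print(best_score, best(4))
```

The point of the pattern is that a weak, unreliable generator plus a reliable evaluator (the test suite) can still yield a reliable result.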
[1] I'm basing this off the assumption that other IC jobs and low-level management jobs don't have a significantly greater cognitive demand than software engineering. I could be wrong.
Can you expand this premise?
It invalidates not just managers, but therapy, virtually any social function.
> A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
> THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
(Via https://twitter.com/SwiftOnSecurity/status/13855657371677245...)
(A bit too negative, but basically I disagree.)
You do not ask a human how the computer is doing. You see the working code. If the working code is running, great; if not, bad. But you don't ask the human; you ask the test suite.
I mean, I see a different end point for software orgs; I call it the "whole org test rig". Every part of a company's current processes is digitised (future changes and improvements are yet to be committed): the sales people will pitch using software that tells them who to pitch to and when, the customer service agent is probably already a bot, etc. etc.
And when a whole org is "in code" then you can set up test environments - run sensitivity tests, try out new applications and new services and ensure the training is ready and ...
Basically, most management is co-ordination. And if you can just test, then the co-ordination sits in the test rig.
It doesn't just sound dystopian, it is dystopian.
Or at least you believe it has such prime directive. It is a black box, after all. Tread carefully, as if there's one true thing about reinforcement learning, it's that the more you squeeze with constraints on a tough problem, the more creative your model will get at solving that problem while meeting all your constraints. It will discover tricks and side channels you didn't even conceive of. It's not the kind of creativity you want to be on a receiving end of.
Also, the more human-like the chat AI gets, the more your ruthless and evil behavior hurts you, as you're burning off your empathy circuits and becoming a sociopath. You may win the negotiations and get what you want, at the cost of your own soul.
To get rid of managers you would need to delegate tasks it is bad at. Seems doable enough.
Fine-tuning for empathy and social qualities also seems doable, if you can certify, validate and guarantee it.
Human managers are useful to keep business logic dumb and stupid, AI would make things much more complex.
Also fascinating is the option to say it like it is. There need not be any hidden agenda aimed at promotion. The thing has tenure!
In the real world, "computer" used to be a human job title.
> It might sound dystopian, but setting emotions aside and viewing it purely from a business perspective, the idea of replacing engineering managers with AI offers potential efficiencies.
A manager is there precisely to optimize for business objectives. They are not your paid friend, therapist or life coach.
The "AI" sea change that obsoletes the manager will first replace the producer.
Consider it from the present day case of out-sourced labour, which is as real and present as AGI.
Managers are more likely to be valued when out-sourcing production, as business/human organization and communication become the bottleneck vs. productive capacity.
If the value of production is driven even lower via generative automation, such that automation is cheaper than outsourcing, then managers are at risk because they exist by ratio relative to the productive labour force. Out-sourcing often leads to an expanded labour force due to market imbalances (3 for the price of 1!). This results in an increase in management before automation >first< reduces the size of the labour force, which only then reduces the need for management.
An office therapist could actually be precisely what is needed to effectively optimize and align people for business objectives!
Because if it were earnestly presenting core engineering-manager job responsibilities at SV tech companies right now, then the whole sector has satirized itself. Again.
The stuff it describes is babysitter work for weak teams, which is helpful for a manager to be able to provide but takes away from what they can actually add to a team when relieved from doing so.
Which I suppose means it's the best kind of satire. :)
Whoever came up with this fundamentally doesn’t understand what it’s like to be a people manager. Nowhere does it mention trying to resolve conflicts with people outside the team. I’m not talking about petty conflicts but business conflicts, like conflicting requests/direction and lots of ambiguity. The administrative tasks a people manager does are a minimal part of the job and could be automated, but it would take longer to write the software to do the automation than to just click the stupid buttons in the HR/Payroll tool.
The things mentioned in the article like stand ups aren’t even orchestrated by managers in a lot of companies and besides are a tiny aspect of the job.
Get real dudes.
They shouldn't be orchestrated by managers in any company. Same with handling the backlog. That managers get involved in these things is one of the ways that agile has gone so wrong.
To be fair, though, a lot of people I know have only ever had terrible managers.
Off-topic, but setting emotions aside means no decisions ever get made because inductive reasoning (and therefore all of prediction) is an emotional process and has zero grounding in rationality.
You can't operate in reality without emotion because reality doesn't follow any guaranteed logic we've discovered.
I wish more people understood this.
I can tell from the above that this "AI" doesn't actually know what a good engineering manager does.
> Morning stand up meetings:
Meetings are synchronous time for human beings to interact with each other because we don't know what they are going to share. Replicating this with a machine that is going to asynchronously process all inputs including direct input on your tickets and direct work makes absolutely no sense.
> analyzing voice tones for stress or uncertainty
This is a creepy way to manage as a person. Applied by a machine, it is HAL 9000 levels of creepy. It just trains people to talk like robots to the robot, so that HAL doesn't bother them or use it as a data point counting towards them getting terminated later.
> Conflict Resolution: Using its vast knowledge base and understanding of human psychology...
Humans are incredibly bad at psychology, and it's mostly snake oil and impossible-to-replicate nonsense.
> Training & Upgradation: EMAI monitors the latest tech trends. If a new tool or technology emerges in the market, it identifies which team members would benefit most from training and automatically schedules online courses or tutorials for them.
In what universe would this produce a better result than just asking people what they would like to learn?
> End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning.
Motivation is motivational because it demonstrates that your work is important enough that manager Bob took the time out of his schedule to praise it in particular. Automating it and having a computer do it makes it worse than useless. It's telling your people that they are so worthless that having a robot generate fake praise is all they are worth. It's like taking the much-memed pizza party to "boost morale" to the next level by delivering pictures of pizzas instead of pies.
> EMAI also manages to keep stakeholders informed, and it can negotiate with them to find the best solution given their inputs and the business context.
If your interests are represented by a URL which you can babble at ChatGPT, you aren't a stakeholder.
There is no such thing as an AI manager. It’s just an automated todo list. But when I have a problem I need to talk with someone in charge. Machines are not in charge. Somebody owns the machine.
Is it done? Is it done? And then? And then? https://youtu.be/oqwzuiSy9y0
Developers usually position themselves as MASTERS of machines (sure, I'm also a dev, and sure, I feel like the father of my little semiconductor pet), but the article describes how devs built a slavery where the MACHINE is master :)))
In the last 5 years or so, the role of manager has gone from accountability "with" resources/authority to just accountability, with no support or resources. A pretty shit deal, unless you were a true sociopath, leaving the well-intentioned ones stranded.
Yeah, right, just like the data.
"[...] without a clear indicator of the author's intent, any parodic or sarcastic expression of extreme views can be mistaken for a sincere expression of those views."