There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)
Thiel, Musk. They're all deluded that their wealth and AI are saviors.
I've learned over the years that I was naive: if the tech giants make people's lives better, it's a coincidence. That's not their goal.
Gavin Belson
Sure you might argue "well if they can do more with less they won't need as many data centers." But who is going to believe that a company that can squeeze more money from their investment won't grow?
Tangentially, I am looking forward to learning about the new innovations that come from this problem space. [Self-righteous] BG certainly is exceptional at presenting hard topics in an approachable and digestible manner. And now it seems he has unlimited funds to get creative.
Sure, humans going extinct is good for the planet, I guess, but be up front about what you are really supporting.
And they're 100% justified to do so, until they hit another bottleneck (when there is literally not that much Nvidia hardware to buy, for example.)
OpenAI released an open source model only because they are capped on growth right now by the amount of hardware they have. Improve resource efficiency and you better believe they'll just crank up use of said resources until they're capped again.
It’s just a kind of lazy fatalistic nihilism. I worry about the world and think that these adult children who were told “no” during the pandemic have let their worst misanthropic tendencies flourish. They are indoctrinated in a belief that “you can just do things” for “you can just do things” sake.
The problems are interesting and the pay is exceptional. Just fucking own it.
However, I'm not sure your analysis is quite correct, in this case.
If OpenAI can mobilize X (giga)dollars to buy Y amounts of energy, your work there will not reduce X or Y, it will simply help them produce more "tokens" (or whatever "unit of AI") for a given amount of energy.
So in a sense you're helping make OpenAI tools better, more effective, but it's not helping reduce resource usage.
Also while the thirst for training may be insatiable, I could see the energy cost of "hey chat can you check the basketball score" coming down.
What do you think is happening with the efficiency gains? You're making rich people richer and helping AI to become an integral (i.e. positive ROI from business perspective) part of our lives. And that's perfectly fine if it aligns with your philosophy. It's not for quite a few others, and you not owning up to it leads to all kinds of negativity in the comments.
You have done a brilliant job elevating your chosen specialty to the world, and encouraging and inspiring others in the industry for a long time - so you should be fairly compensated for that lofty position. I don't envy the late nights or very early mornings you have ahead of you on conference calls with SF, but good luck at OpenAI mate !
I’ll give it a shot; I think you’re successful in what you do and very altruistic and open, not only in your discoveries but also your opinions. You also have a higher sense of duty. As Oscar Wilde once said, we’re all in the gutter, but some of us are looking at the stars. Compensation is boring gutter talk. It’s hard for people to reconcile your benevolence with your success, and just as the trope of joining a company to change the world is a veil for making money, so is the trope of criticizing the agent of change entering an industry because the industry is bad.
Personally I can’t wait to read about the inefficiencies you find and have a little glimpse into openai tech from your opinionated point of view.
I am a long time fan, I have the physical copy of each and every book that you have authored, I have watched each and every video that you are in, and I walk team members and clients through your USE method at every engagement I am on.
I would say to you that "make the world a better place" has been excessively misquoted. Even the TechCrunch Disrupt parody in the show Silicon Valley shows how anything and everything is claimed to "make the world a better place".
Please reconsider your use of the phrase given the well-earned negativity around it.
Reality is, these AI giants are here and they are using a massive amount of resources. Love them or hate them, that is where we are. Whether or not you accept the job with them, OpenAI is gonna OpenAI.
Given how much the detractors scream about resource usage, you'd think they'd welcome the fact that someone of your calibre is going in and attempting to make a difference.
Which leads me to believe you're encountering a lot of projection from people who perhaps can't land the highest of comp roles, and shield their egos by subscribing to the concept of it being selling out, which they would of course never do.
I loved your work back when I was an IC, and I'm sure this is a common sentiment across the industry amongst those of us who started systems adjacent! I still refer to your BPF tools and Systems Performance books despite having not written professional code for years now.
Can't wait to read content similar to what you wrote about when at Netflix and Intel albeit about the newer generation of GPUs and ASICs and the newer generation of performance problems!
I hope there will be harder problems waiting for you than using flame graphs to optimize GenAI porn.
https://www.axios.com/2025/10/14/openai-chatgpt-erotica-ment...
This is a company which, at the first opportunity, stopped doing open research, cut open source contributions, converted itself to for-profit after years of fiscal benefits, scrapped its ethics committee, and removed all engineers who opposed any of this.
Don't come with the excuse that any of this work is being done for the greater good.
One should never impose one's own expectations on another, but I feel disappointed. It's watching the guy I saw grow from his first posts go to work for an evil machine of his own volition.
Do what you want. But that's what I feel about this disheartening news.
Inferring the overall tone from the comments, I think the folks here are struggling with what sounds like a logical fallacy from someone who is certainly a logical thinker.
> how I could lead performance efforts and help save the planet.
The problem on the face of it being: Performance gains will not translate to less energy usage (and by extension less heat released into the atmosphere). Rather, performance gains will mean that more effective compute can be squeezed from the existing hardware.
If performance gains translate to better utilization of the hardware, it also follows that it will translate to more money for the company, allowing for the purchase of more GPUs. Ad infinitum.
My stance is that this is just businesses doing what they do. It's always required regulation to slow down the direct/indirect negative byproducts (petro companies being the most obvious example). I don't see how AI would inherently be different.
Is there another angle we're missing where the performance efficiencies translate to net benefits for the planet?
As a performance engineer I'm familiar with Jevons paradox, but it does not discourage improving efficiency.
Brendan.
First of all, congratulations on your new job. However,
it would be easier to just tell everyone it's about the money, the compensation, and the stock options.
You're not joining a charity or saving the planet; this company is about to unload on the public markets at an unfathomable $1TN valuation.
Don't insult your readers.
EDIT: possibly a corollary--does Mia pay money for ChatGPT or use a free plan?
My wife was paying for ChatGPT before I joined. I didn't ask Mia. I probably have three months of hair growth before my next chance to ask.
> There's so many interesting things to work on, things I have done before and things I haven't.
What are the things you haven’t done before, if you could mention them?
Would be fantastic if you could find a way to make the optimizations you find more openly available. The whole ecosystem benefits when efficiency improvements are shared. Looking forward to seeing where this goes, and don't let the negativity from some get to you.
No, it never does. Those people somehow delude themselves into thinking it might, but...it might just work for us.
You aren't going to stop the excesses.
A quite funny post from his blog on this topic: https://www.brendangregg.com/blog/2021-06-04/an-unbelievable...
How could she not know?
BG and eBPF are awesome but this article read like a midlife crisis to me.
Interacting outside of the tech bubble is eye opening. Conversely, the hair stylist might have mentioned the brand of a super popular scissor supplier/other equipment you’d have never heard of.
This is like my worst nightmare as a systems engineer: that years of navigating bureaucracy at a place like Intel slowly brainrots me into prioritizing politics and self-promotion over the technical truth.
I hope this is just PR reflex and not an actual loss of grounding.
In fact, you could argue that politics is in some sense the biggest, most complex dynamic system of them all, and thus poses the greatest 'engineering' challenge. And it invariably involves promotion of oneself, or an idea, or a certain direction, with real trade-offs that have positive impact on some people and negative on others.
You just explained that it can be fun and involves trade-offs. Well, OK, but these claims of saving the world are still empty bullshit. It is perfectly fine to be aware of that fact.
We don't have to pretend the grandiose claims of the Musks, Altmans, and Bezoses, and whatever management says to please them, are reflections of a genuine search for positive impact.
This seems rather sad. Is this really what AI is for?
And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
I guess I'm a dinosaur but I think emailing the friend to ask what they are actually up to would be even better than involving an LLM to imagine it.
Asynchronous human to human communication is a pretty solved problem.
Or, you know, Signal/Matrix/WhatsApp/{your_preferred_chat_app}. If you're already texting things, might as well do that.
The GP quote also wasn't about a personal assistant use case; it was about filling a hole in personal connection. It's sad because today we increasingly have fewer human connections and more digital, i.e. fake, ones.
I couldn't go on reading.
Unless they put on a show for themselves and that's who they try to fool. Probably why nobody mentions money in these shows. They're self motivational.
> Do anything, do it at scale, and do it today
> It's not just GPUs, it's everything.
> I'm not the first, I'm just the latest.
Yes, he has done a lot of good work in the past, but he has put as much effort into self-promotion and landed a series of interesting and well-paying gigs.
I can't blame him for that. It just makes me tired to watch.
Of course that kind of heuristic can have false positives, and not every accusation of AI-written content on HN is correct. But given how much stuff Gregg has written over the years, it's easy to spot-check a few previous posts. This clearly isn't his normal style of writing.
Once we know this blog was generated by a chatbot, why would the reader care about any of it? Was there a Mia, or did the prompt ask for a humanizing anecdote? Basically, show us the prompt rather than the slop.
This guy and Rob Pike should have a talk.
“I don't want to live in a world where someone makes the world a better place, better than we do.”
Beautiful satire in that show. I'm still throwing my own version of this quote every now and again at the office.
Thanks for reading. Please subscribe to my newsletter to keep up to date with my works and the latest news about AI.
just say it's for the money, people understand that
but this sort of post is simply gross
WRT "AI saving the planet", obviously.
We need ungodly amounts of machine learning. Weather modeling, forecasting, resilience planning, risk mgmt, planning, etc.
To implement virtual power plants (aka P2P distributed grid), everything needs to get smart. Just this transformation alone is a generational project.
There's dozens more of "must have" stacks we need to tackle climate crisis. Replace industrial heat. Decarbonize agriculture. Build out geothermal. Find and stop methane leaks. Pretty much everything needs a makeover, really.
OpenAI is as good a place (for you) to start as any.
Happy hunting.
Something tells me that in a year we'll see a post about why you left OpenAI.
Sama won't listen to anyone. That's why. None of these CEOs are going to listen.
I don't think that indicates that any one company interviewed him 20+ times.
You're in for a surprise buddy.
To be clear, we are currently fucked as well, as people genuinely have this disconnected mindset about reality.
I didn't know it was possible for a sentence structure to cause such a thing.