We swapped OpenAI out for Claude and it required updating about 15 lines of code. All these guys are just commodity to us. If next week there’s a better supplier of commodity AI we’ll spend an hour and swap to something else again. There’s zero loyalty here.
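A minimal sketch of why the swap is only a handful of lines: if the codebase talks to one thin adapter rather than a provider SDK directly, switching suppliers means repointing that adapter. The functions below are illustrative stubs (no live API calls); the commented SDK entry points are the real ones.

```python
# Sketch: isolate the provider behind one adapter so a supplier swap
# touches only this file. The bodies are stubs, not live API calls.

def complete_openai(prompt: str) -> str:
    # Real version: openai.OpenAI().chat.completions.create(...)
    return f"[openai] {prompt}"

def complete_claude(prompt: str) -> str:
    # Real version: anthropic.Anthropic().messages.create(...)
    return f"[claude] {prompt}"

# Everything else in the codebase calls `complete`; the swap is
# repointing this one name (plus model name and auth config).
complete = complete_claude

print(complete("summarize this diff"))
```

The per-provider differences (message shape, required `max_tokens`, auth headers) all live inside the two stub bodies, which is roughly where those ~15 changed lines come from.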
But right now we have 3-5 top contenders that are so evenly matched that the de-facto sticking point is mostly the harness, i.e. the collection of proven plugins/commands/tools/agent features that are tuned to the user's personal workflow.
Just want to note something there:
Okay, take the premise that AI really is 'intelligent', up to the point of making business decisions.
So this all implies that 'intelligence' is a commodity too?
Like, I'm trying to drive at the idea that yours, mine, all of our 'intelligence' is now no longer a trait that I hold, but a thing to be used, at least as far as the economy is concerned.
We did this with memory and muscles previously. We invented writing, and so those with really good memories became just like everyone else. Then we did it with muscles in the industrial revolution, and so really strong people, or those with great endurance, became just like everyone else. Yes, there are many exceptions here, but they mostly prove the rule, I think.
Now we've made AI, and it seems really smart people are going to become like everyone else?
This is obviously already the case at the intelligence level required to produce blog-post and article slop, generate coding-agent-quality code, do mid-level translations, and things like that...
In my case, I always use Opus 4.6 in my work, but quite often I get a 504 error, and that's quite annoying. I get errors like that with Gemini too. I can't estimate if I'd get a similar number of errors with ChatGPT, since I use it very infrequently.
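Transient 504s are usually handled client-side with retries. A minimal sketch (provider-agnostic, with a simulated flaky endpoint standing in for the API):

```python
import random
import time

def call_with_retry(fn, retries=4, base=0.5):
    """Retry a flaky model call on transient gateway errors (e.g. 504)."""
    for attempt in range(retries):
        try:
            return fn()
        except TimeoutError:  # stand-in for an HTTP 504 from the API
            if attempt == retries - 1:
                raise
            # exponential backoff with jitter before the next attempt
            time.sleep(base * (2 ** attempt) * (0.5 + random.random()))

# Demo with a fake endpoint that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("504 Gateway Timeout")
    return "ok"

print(call_with_retry(flaky, base=0.01))  # "ok" after two retried failures
```

The real SDKs ship retry knobs of their own, but the pattern is the same: back off, add jitter, give up after a bounded number of attempts.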
But imagine that at some point one of the big 3 (OpenAI, Anthropic, Google) gets very high availability, while the others have very poor availability. Then people would switch to them, even if their models were a bit worse.
Now, OpenAI has been building like crazy, and contracting for future builds like crazy too. Google has very deep pockets, so they'll probably have enough compute to stay in the game. But I fear that Anthropic will not be able to match OpenAI and Google in terms of datacenter buildout, so it's only a matter of time (and not a lot of time) until they're in a pretty tight spot.
We have basically 4 companies in the world one can seriously consider, and they all seem to heavily subsidise usage, so under normal market conditions not all of them are going to survive.
The training runs aren’t priced in, but the cost of inference is clearly pretty cheap.
It's a market worth many billions so the prize is a slice of that market. Perhaps it is just a commodity, but you can build a big company if you can take a big slice of that commodity e.g. by building a good product (claude code) on top of your commodity model.
I have curated my youtube recommendations over the years. It knows my likes and dislikes very well. It knows about me a lot.
The same moat exists in interactions with Claude. Claude remembers so many of my preferences. It knows that I work in Python and Pandas and starts writing code for that combination. It knows what type of person I am and what kind of toys I want my nephews and nieces to play with. These "facts" about the person are the moat now. Stack Overflow was a repository of "facts" about what worked and what didn't. Those facts, or user chat sessions, are now Anthropic's moat.
Then you can feed them into another service.
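Mechanically, that portability is simple: the accumulated "facts" are just data that can be serialized and replayed as another provider's system prompt. A hypothetical sketch (the fact keys and wording are invented for illustration):

```python
import json

# Hypothetical export of the "facts" a model has accumulated about a
# user, serialized so they can seed another provider's system prompt.
facts = {
    "languages": ["Python", "Pandas"],
    "style": "concise answers, code first",
    "context": "buys educational toys for nephews and nieces",
}

def to_system_prompt(facts: dict) -> str:
    """Render exported facts as a system-prompt preamble."""
    lines = ["Known user preferences:"]
    for key, value in facts.items():
        rendered = ", ".join(value) if isinstance(value, list) else value
        lines.append(f"- {key}: {rendered}")
    return "\n".join(lines)

portable = json.dumps(facts)  # store, or hand to another service
print(to_system_prompt(json.loads(portable)))
```

Which suggests the moat is less the facts themselves than whether the incumbent lets you export them in a usable form.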
Is it ever clear? Pretty much everything seems to be a senseless race to the bottom.
As a non-US person, that sounds far more concerning than no statement at all. Because if their tools weren't used for surveillance against Europeans they would have said so as a marketing message...
You have this one? You are subhuman, treated as such and you have very limited rights on our soil, we can do nasty things to you without any court, defense, or hope for fairness. You have that one? Please welcome back.
Sociopathic behavior. Then don't wonder why most of the world is again starting to hate the US with a passion. I don't mean just the countries where you have already killed hundreds of thousands of civilians, I mean the whole world. There isn't a single country out there currently even OK with the US; that's more than 95% of mankind. Why the fuck do you guys allow this? It's not even the current government, rather a long-term US tradition going back at least to 9/11.
Anthropic literally said the same, but seem to be getting positive PR.
https://www.cbsnews.com/news/ai-executive-dario-amodei-on-th...
https://www.lesswrong.com/posts/FSGfzDLFdFtRDADF4/openai-s-s...
Plus, you know, you'd think they'd ask their cleaner or baker or something. Or hire someone.
Around 2005, a Yale Psychology PhD candidate asked me to write a web-based survey instrument with various questions, some on complex but straightforward business questions (the controls) and others with moral/ethical aspects. Senior executives participated, and they answered similarly to the rank and file, often completing the entire survey much faster. What they didn't know: we were tracking how long they spent on each question. Questions with moral/ethical concerns took senior executives relatively longer than the rank and file.
Late Addendum: Sorry that I don't recall the author/paper. The survey population spanned multiple industries representing many Fortune 500s, including huge tech companies. The survey was the same for everyone. The questions were story problems from business and law school case reports. The participating companies were anonymized on our end. We provided HR departments with survey link; only subject rank (not identity) was collected. Survey was voluntary, with informed consent according to IRB approval.
Meanwhile Codex is ... boring. It keeps chugging on, asking for "please proceed" once in a while. No drama. Which is in complete contrast with ChatGPT the chatbot, which is completely unusable: arrogant, unhelpful, and confrontational. How they made both from the same loaf, I dunno.
IDEK what that means, specific examples?
At first I found Gemini Code Assist to be absolutely terrible, bordering on unusable. It would mess up parameter order for function calls in simple 200-line Python. But then I found out about the "model router", a layer on top that dynamically routes requests between the Flash and Pro models. Disabling it and always using the Pro model did wonders for my results.
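For the flavor of what such a router does, here is a deliberately toy sketch: a cheap heuristic sends "easy" prompts to the small model and everything else to the big one. The heuristic and routing logic here are invented for illustration, not Google's actual router.

```python
# Hypothetical illustration of a flash/pro "model router" layer:
# a cheap heuristic picks the small model, everything else goes
# to the big one. The heuristic below is invented for this sketch.

def route(prompt: str) -> str:
    simple = len(prompt) < 200 and "refactor" not in prompt.lower()
    return "gemini-flash" if simple else "gemini-pro"

print(route("what does this regex do?"))       # short, simple -> gemini-flash
print(route("refactor this 200-line module"))  # code change -> gemini-pro
```

Disabling the router, as described above, amounts to replacing `route` with a constant that always returns the Pro model, trading latency and quota for consistent quality.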
There are however some pretty aggressive rate limits that reset every 24 hours. For me it's okay though. As a hobbyist I only use it about 2-3 hours per day at most anyway.
codex often speaks in very dense technical terms that I'm not familiar with and tends to use acronyms I've not encountered so there's a learning curve. It also often thinks I'm providing feedback when I'm just trying to understand what it just said. But it does give nice explanations once it understands that I'm just confused.
And I've been absolutely amazed with Codex. I started using it with ChatGPT 5.3-Codex, and it was so much better than online ChatGPT 5.2, even sticking to single-page apps, which both can do. I don't have any way to measure the "smarts" of the new 5.4, but it seems similar.
Anyways, I'll try to get Claude running if it's better in some significant way. I'm happy enough the the Codex GUI on MacOS, but that's just one of several things that could be different between them.
Claude, IMO, is much better at empathizing with me as a user: It asks better questions, tries harder to understand WHY I'm trying to do something, and is more likely to tell me if there's a better way.
Both have plenty of flaws. Codex might be better if you want to set it loose on a well-defined problem and let it churn overnight. But if you want a back-and-forth collaboration, I find Claude far better.
Not Claude Code specifically, but you can try the Claude Opus and Sonnet 4.6 models for free using Google Antigravity.
Recently did the full transition to Claude. The model is great, but what I really love is how they seem to have landed on a clear path for their GUI/ecosystem. The cowork feature fits my workflows really well, and connecting enterprise apps, skills, and plugins is seamless.
Haven’t been this excited about AI since GPT 4o launched.
Yep there really is no switching cost it seems.
People generally want something from a model and then leave. I think people are subconsciously forming a kind of relationship with tech firms in which they don't care about the firm at all; it's all about what the user themselves gets. Generally there is no attachment. There are some examples of psychotic stuff, but that's thankfully the exception, not the norm.
That's why Apple cares deeply about its brand - it doesn't want to fall into that group of firms.
On an unrelated note, UI is such a personal preference that it's impossible, beyond core pillars that have been studied for decades, to say one is better than the other. That being said, I like OpenAI's design system much better than Anthropic's. OpenAI's products (CLI and chat UI) "feel" nice and consumer focused, whereas Anthropic's products feel utilitarian and "designed for business".
A couple of weeks ago, to huge numbers of people, ChatGPT was AI. The biggest public-perception shift to come out of the DoD/DoW spat will be how many people now know that Claude exists at all; the narrative that they are being unreasonably punished by the government for taking a principled stance will only benefit them.
People have been made aware of a product, and made aware that it's good enough that the government wants to use it. They have then been shown an archetypal underdog-versus-the-government story. That makes almost a perfect storm for gaining customers.
When they actually use the thing and discover that it actually is good, they will stay, and they will tell their friends.
At this rate they should be sending Hegseth a thank you card.
My experience has, for a few months, been that OpenAI's models are consistently quite noticeably better for me, and so my Codex CLI usage had been probably 5x as much as my Claude Code usage. So it's a major bummer to have cancelled, but I don't have it in me to keep giving them money.
I'd love to get off Anthropic too, despite the admirable stance they took, the whole deal made me extra uncomfortable that they were ever a defense contractor (war contractor?) to begin with.
I've been on the internet since 93 or 94 and I've never once heard it called that. If anything, "Al Gore".
I've literally never heard anybody call the Internet "Reagan's internet", the best I can do is the Al Gore quote and who's calling anything Trump's AI?
What ideas are you trying to express here?
- American politics presents a false choice between Democrats and Republicans
- America is both a consumerist and corporatist society
- Anthropic asked for minimal limits on AI usage
- People view Anthropic's stand as heroic, while viewing OpenAI as villainous
- The false choice between Anthropic and OpenAI mirrors the false choice in American politics.
- People at OpenAI, Anthropic, and elsewhere used to view ethical deployment of AI as paramount, but those goalposts have shifted as financial and political incentives changed.
- Specifically, the ethics of AI have become conveniently synonymous with the current financial and political moment.
- The current political moment is fascist.
- Technology is broadly neutral and it is politics that primarily dictates how technology is actually used and deployed, and therefore its broad impacts.
- The internet was developed in the neoliberal era, which began with the election of Ronald Reagan and extended through the Obama presidency.
- The structure and dynamics of the internet over the last 30 years are more reflective of neoliberal politics than of anything inherent in the technology: extreme privatization and a refusal to use public institutions to provision or regulate public goods.
- AI is being developed in a new political era, begun with the first Trump presidency, and taking more full shape under the second Trump presidency.
- We are likely to find that AI's trajectory is similarly dictated largely by politics rather than anything inherent to the technology.
- With this political era being fascist and explicitly neo-imperial/neo-colonial, I fear for the technology's impact on humanity.
- God help us.
OpenAI simply provides more value for the money at the moment.
Anthropic is the outlier here, obviously they can limit their subscriptions as they want but it's a major disadvantage compared to their competitors.
Google seems to be on a hot streak with their models, and, since they're playing from behind, I'd expect favorable pricing and terms. But, I don't know anyone who is using or talking about Gemini. All the chatter seems to be Anthropic vs. OpenAI.
I would love to, but a practical look at that concept suggests it's nearly impossible.
My $0.02: Claude was already involved in underhanded shit I don't want a part of[0], and that generated little ethical response from Anthropic. I've had better luck as a $200/mo-tier customer with ChatGPT, and I don't really think that Dario claiming their newest LLM is conscious[1] on a market schedule is all that ethical, either.
[0]: https://en.wikipedia.org/wiki/Project_Maven [1]: https://tech.yahoo.com/ai/claude/articles/anthropic-ceo-admi...
Realistically: AI WILL get used in the military and for killing autonomously, like it or not, believe it or not. I am also against that in principle, but I accept the fact that my opinion just doesn't matter, and I practice radical acceptance of reality as-is. Twitter/X is also alive and kicking, despite Musk and the anti-Musk hate. xAI/Grok is genuinely really good too compared to OAI/Claude, a bit different but very good. At this point all the "outcries" feel like noise I just skip on principle. But it could turn up the fire under the OAI team to go aggressive feature/pricing-wise in order to retain or increase their userbase again, which is ... good, after all.