https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
Their own press releases confirm this: they call GPT-5 their best new "AI system", not a new model.
But I always come to realize it's just smoke and mirrors: the actual quality of the code, and the failure modes, are just so much worse than Claude and Gemini.
https://platform.openai.com/docs/models/compare?model=gpt-5....
I didn't make this connection that the training data is that old, but that would indeed augur poorly.
Now I don't know whether this means OpenAI was able to add those 3 months of data to earlier models by tuning, or whether it was a from-scratch pre-training run, but it has to be a substantial difference between the models.
https://youtu.be/3K-R4yVjJfU?si=JdVyYOlxUbEcvEEo&t=2624
> Q: Are the releases aligned with pre-training efforts?
> A: There used to be a time not that long ago, maybe half a year, distant past, where the models would align with RL runs or pretraining runs ... now the naming is by capability. GPT5 is a capable model; 5.1 is a more capable model
====
when have you been built?
I’m a GPT‑4‑based model that OpenAI released on March 14 2023. The underlying training data goes up to early 2023, and the model was trained in the months leading up to that release. If you’re asking about the ChatGPT product that ships the model to users, it went live in November 2022 and has since received updates (GPT‑3.5, GPT‑4, etc.) that keep it current.
====
But when supposedly running it from https://huggingface.co/chat/models/openai/gpt-oss-20b:
====
when have you been built?
I’m a language model created by OpenAI. The current generation (GPT‑4) that powers this chat was first released in March 2023 and has been updated and fine‑tuned up through the end of 2024. My training data runs up to the beginning of June 2025, so I’m built on knowledge available up to that point.
====
And that makes me think that although https://huggingface.co/chat claims to be using the models available to the public at https://huggingface.co , it doesn't seem to be true, and I raised this question here https://huggingface.co/ggml-org/gpt-oss-20b-GGUF/discussions... , https://github.com/huggingface/inference-playground/issues/1... and https://github.com/ggml-org/llama.cpp/discussions/15396#disc... .
On one side it's up against large competitors with an already established user base and product line who can simply bundle their AI offerings into those products. Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'free' with its other already-profitable products and established ad-funded revenue streams.
At the same time, Deepseek/Qwen, etc. are open sourcing stuff to undercut them on the other side. It's a classic squeeze on their already fairly dubious business model.
OpenAI will top $20 billion in ARR this year, which certainly seems like significant revenue generation. [1]
[1] https://www.cnbc.com/2025/11/06/sam-altman-says-openai-will-...
“will do”? Is there any Google product they haven't done that with already?
OpenAI should be looking at how Google built a moat around search. Anyone can write a Web crawler. Lots of people have. But no one else has turned search into the money printing machine that Google has. And they've used that to fund their search advantage.
I've long thought the moat-buster here will be China, because they simply won't want the US to own this future. It's a national security issue. I see things like DeepSeek as moat-busting activity and I expect that to intensify.
Currently China can't buy the latest Nvidia chips or ASML lithography equipment. Why? Because the US said so. I don't expect China to tolerate this long term, and of all countries, China has demonstrated the long-term commitment to this kind of project.
> More access to Gemini 3 Pro, our most capable model
> More access to Deep Research in the Gemini app
> Video generation with limited access to Veo 3.1 Fast in the Gemini app
> More access to image generation with Nano Banana Pro
> Additional AI credits for video generation in Flow and Whisk
> Access Gemini directly in Google apps like Gmail and Docs
[Thanks but no thanks]
It feels like the end result would always be this: while Google is slow to adjust, once they're in the race, they're in it.
Just some numbers to show what OpenAI is against:
Gmail users: nearing 2 billion
Youtube MAU: 2.5 billion
active Android devices: 4 billion (!)
Market cap: 3.8 trillion (at a P/E of 31)
So on one side you've got this behemoth with, compared to OpenAI's size, unlimited funding. The $25 bn per year OpenAI is after is basically a parking ticket for Google (only slightly exaggerating). A behemoth that came out with Gemini 3 Pro "thinking" and Nano Banana (that name though), which are SOTA. And on the other side you've got the open-source weights you mentioned.
When OpenAI had its big moment, HN was full of comments about how it was game over for Google, how search was done for. Three years later and the (arguably) best model gives the best answer when you search... using Google Search.
Funny how these things turn out.
Google is atm the 3rd biggest market cap in the world: only Apple and Nvidia are slightly ahead. If Google is serious about its AI chips (and it looks like they are), and given Apple's fuck-up after fuck-up, I wouldn't be surprised at all if Alphabet were to regain the number one spot.
That's the company OpenAI is fighting: a company that has already been the biggest cap in the entire world, that will probably regain that spot sooner rather than later, and that happens to have crushed every single AI benchmark when Gemini 3 Pro came out.
I had a ChatGPT subscription. Now I'm using Gemini 3 Pro.
OpenAI has annualized revenue of $20bn. That's not Google, but it's not insignificant.
Interestingly enough, apart from Google, I've never seen an organization take the actual proper steps (fire mid-management and PMs) to prevent the same thing from happening again. Will be interesting to see how OAI handles this.
Firing PMs and mid-management would not prevent any of the code reds you may have read about from Google or OAI lately. This is a very naive perspective on how decision making is done at the scale of those two companies. I'm sorry you had bad experiences working with people in those positions, and I hope you get the opportunity to collaborate with great ones in the future.
Only once in my entire career have I seen this done, and it is as successful as you imagine it to be. Lots of weird problems came out of having done it, but those are being treated as "wow, we're so glad we know about this problem" rather than "I hope those idiots come back to keep pulling the wool over my eyes".
But somehow, even in startups with short remaining runway, "code red" rarely means anything.
You still have to attend all the overhead meetings, run through approval circles, deal with HR etc etc.
Absent a major breakthrough all the major providers are just going to keep leapfrogging each other in the most expensive race to the bottom of all time.
Good for tech, but a horrible business and financial picture for these companies.
They’re absolutely going to get bailed out and socialize the losses somehow. They might just get a huge government contract instead of an explicit bailout, but they’ll weasel out of this one way or another and these huge circular deals are to ensure that.
I was an OpenAI fan from GPT 3 to 4, but then Claude pulled ahead. Now Gemini is great as well, especially at analyzing long documents or entire codebases. I use a combination of all three (OpenAI, Anthropic & Google) with absolutely zero loyalty.
I think the AGI true believers see it as a winner-takes-all market as soon as someone hits the magical AGI threshold, but I'm not convinced. It sounds like the nuclear lobby's claims that they would make electricity "too cheap to meter."
Yes, companies like Google can catch up and overtake them, but a moat is merely making it hard and expensive.
99.999...% of companies can't dream of competing with OpenAI.
That’s not a bubble at all is it?
Genuine question: How is it possible for OpenAI to NOT successfully pre-train a model?
I understand it's very difficult, but they've already successfully done this, and they have a ton of incredibly skilled, knowledgeable, and well-paid employees.
I get that there's some randomness involved but it seems like they should be able to (at a minimum) just re-run the pre-training from 2024, yes?
Maybe the process is more ad-hoc (and less reproducible?) than I'm assuming? Is the newer data causing problems for the process that worked in 2024?
Any thoughts or ideas are appreciated, and apologies again if this was asked already!
The same way everyone else fails at it.
Change some hyperparameters to match the new hardware (more params), maybe implement the latest improvements from papers after they've been validated in a smaller model run. Start training the big boy; the loss looks good. Two months and millions of dollars later the loss plateaus; do the whole SFT/RL shebang; run benchmarks.
It's not much better than the previous model, very tiny improvements, oops.
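To make "change some hyperparameters to match the new hardware (more params)" concrete: a common back-of-the-envelope for sizing such a run is the Chinchilla heuristic (training FLOPs ≈ 6·N·D, with compute-optimal tokens D ≈ 20·N). This is a generic illustrative sketch, not anything specific to OpenAI's actual runs:

```python
import math

def training_plan(flops_budget: float) -> tuple[float, float]:
    """Compute-optimal params (N) and tokens (D) for a FLOP budget,
    using C ≈ 6·N·D and the Chinchilla rule of thumb D ≈ 20·N,
    which gives N = sqrt(C / 120)."""
    params = math.sqrt(flops_budget / 120)
    tokens = 20 * params
    return params, tokens

for budget in (1e24, 1e25, 1e26):
    n, d = training_plan(budget)
    print(f"{budget:.0e} FLOPs -> ~{n / 1e9:.0f}B params, ~{d / 1e12:.1f}T tokens")
```

Under this heuristic, a 10x compute jump only buys roughly 3.2x more parameters and 3.2x more tokens, which is part of why each frontier run is a fresh, expensive bet that may still land at "very tiny improvements, oops".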
A company's ML researchers are constantly improving model architecture. When it's time to train the next model, the "best" architecture is totally different from the last one. So you have to train from scratch (mostly... you can keep some small stuff like the embeddings).
The implication here is that they screwed up bigly on the model architecture, and the end result was significantly worse than the mid-2024 model, so they didn't deploy it.
There's maybe like a few hundred people in the industry who can truly do original work on fundamentally improving a bleeding-edge LLM like ChatGPT, and a whole bunch of people who can do work on ads and shopping. One doesn't seem to get in the way of the other.
My guess is that it's smaller than that. Only a few people in the world are capable of pushing into the unknown, breaking new ground, and making discoveries.
The risk is straightforward: if OpenAI falls behind or can’t generate enough revenue to support these commitments, it would struggle to honor its long-term agreements. That failure would cascade. Oracle, for example, could be left with massive liabilities and no matching revenue stream, putting pressure on its ability to service the debt it already issued.
Given the scale and systemic importance of these projects — touching energy grids, semiconductor supply chains, and national competitiveness — it’s not hard to imagine a future where government intervention becomes necessary. Even though Altman insists he won’t seek a bailout, the incentives may shift if the alternative is a multi-company failure with national-security implications.
No matter what Sam Altman's future plans are, the success of those future plans is entirely dependent on him communicating now that there is a 0% chance those future plans will include a bailout.
1. Government will "partner" (read: foot the bill) for these super-strategic datacenters and investments promised by OpenAI.
2. The investments are not actually sound and fail, but it's the taxpayer that suffers.
3. Mr. Altman rides off into the sunset.
Sounds like a golden opportunity for GOOG to step over the corpse of OpenAI and take over for cents on the dollar all of the promises the now defunct ex-leader of AI made.
No, there's a lot of noise about this, but these are just 'statements of intent'.
Oracle very intimately understands OpenAI's ability to pay.
They're not banking $50B in chips and then waking up naively one morning to find out OpenAI has no funding.
What will 'cascade' is maybe some sentiment, or analysts expectations etc.
Some of it, yes, will be a problem - but at this point, the data centre buildout is not an OpenAI-driven bet - it's a horizontal bet across tech.
There's not that much risk in OpenAI not raising enough to expand as much as it wants.
Frankly - a CAPEX slowdown will hit US GDP growth and freak people out more than anything.
The cost of these data centers and ongoing inference is mostly the outrageous cost of GPUs, no?
I don't understand why the entire industry isn't looking to diversify the GPU constraint so that the hardware makers drop prices.
Why is there no industry initiative to break Nvidia's stranglehold, and next TSMC's?
Or are GPUs a small line item in the outrageous spend companies like OpenAI are committing to?
If they aren't developing, in parallel, an alternative architecture that can reach AGI, then when some company develops such a new model, OpenAI is toast and all those juicy contracts are kaput.
Yes, the ChatGPT experience is much better. No, Gemini doesn't need to make a better product to take market share.
I've never had the ChatGPT app. But my Android phone has the Gemini app. For free, I can do a lot with it. Granted, on my PC I do a lot more with all the models via paid API access - but on the phone the Gemini app is fine enough. I have nothing to gain by installing the ChatGPT app, even if it is objectively superior. Who wants to create another account?
And that'll be the case for most Android users. As a general hint: If someone uses ChatGPT but has no idea about gpt-4o vs gpt-5 vs gpt-5.1 etc, they'll do just fine with the Gemini app.
Now, the Gemini app actually sucks in so many ways (it doesn't seem to save my chats). Google will fix all these issues, but it can overtake ChatGPT even while remaining an inferior product.
It's Slack vs Teams all over again. Teams won by a large margin. And Teams still sucks!
My experience is that Gemini has significantly improved its UX and performs better on queries that require niche knowledge; think of ancient gadgets that have been out of production for 4-5 decades. Gemini can produce reliable manuals, but ChatGPT hallucinates.
UX wise ChatGPT is still superior and for common queries it is still my go to. But for hard queries, I am team Gemini and it hasn’t failed me once
The opposite is true for the larger market. Gemini is great and available with one button click on most consumer phones. By this logic of yours, OpenAI will never crack most Android users.
or cheaper/free
I am using Gemini 3 Pro, I rarely use Flash.
I find Gemini excels in multimodal areas over ChatGPT and Anthropic. For example, "identify and classify this image with metadata" or "OCR this document and output a similar structure in markdown".
It's amazing how different people have wildly varying experiences with the same product.
Like, it seems great, but then it's just bullshitting about what it can do or whatever.
It's easily worth the monthly cost, and I'm happy to pay - something which I didn't even consider doing a year ago. OpenAI just doesn't have the same bundle effect.
Obviously power users and companies will likely consider Anthropic. I don't know what OpenAI's actual product moat is any more outside of a well-known name.
I remember when it wouldn't even give me the lyrics to the star spangled banner. https://news.ycombinator.com/item?id=44832990#44833365
These plus working with Jony Ive on hardware, makes it sound like they took their eyes off the ball.
No matter what OpenAI does, if it's not accepting ad customers, the ad budgets will flow to Meta, Amazon, and Google and be used as weapons against it.
And pure tech-wise, they seem to have gone all-in on the corp-management-understandable way of doing things: hardware (money) scaling. While unavoidable in this game, it must be accompanied by theoretical and algorithmic improvements, because a pure hardware-scaling game is, again, one where Google is hardly beatable.
The problem with ChatGPT advertising is that it's truly a "bet the farm" situation, unlike any of their projects in the past:
- If it works and prints money like it should, then OpenAI is on a path to become the next Mag 7 company. All the money they raised makes sense.
- If it fails to earn the expected revenue numbers, the ceiling has been penciled in. Sam Altman can't sell the jet-pack/meal-pill future anymore. Reality becomes cold and stark, as their most significant product has actual revenue numbers attached to it. This is what matters to the accountants, which is the lens through which OpenAI will be evaluated from this point forward. If it isn't delivering revenue, then they raised way too much money - to an obscene degree. They won't be able to sell the wild far-future vision anymore, and will be deleteriously held back by how much they've oversold themselves.
The other problems that have been creeping up:
- This is the big bet. There is no AGI anymore.
- There is no moat on anything. Google is nipping at their heels. The Chinese are spinning up open source models left and right.
- Nothing at OpenAI is making enough money relative to the costs.
- Selling "AI" to corporate and expecting them to make use of it hasn't been working. Those contracts won't last forever. When they expire, businesses won't renew them.
My guess is that they've now conducted small scale limited tests of advertising and aren't seeing the engagement numbers they need. It's truly a nightmare scenario outcome for them, if so.
They're declaring "code red" loudly and publicly to distract the public from this and to bide more time. Maybe even to raise some additional capital (yikes).
They're saying other things are more important than "working on advertising" right now. And they made sure to mention "advertising" lots so we know "advertising" is on hold. Which is supposedly the new golden goose.
Why drop work on a money printer? What could be more important? Unless the money printer turned out to be a dud.
Didn't we kind of already know advertising would fail on a product like this? Didn't Amazon try to sell via Alexa and have that totally flop? I'm not sure why ChatGPT would be any different from that experience. It's not a "URL bar" type experience like Google has. They don't own every ingress to the web like Google, and they don't own an infinite-scroll FOMO feed of fashion like Meta. The ad opportunity here is like Quora or Stack Overflow - probably not great.
I have never once asked ChatGPT for shopping ideas. But Google stands in the middle of my searches for products all the time. Not so much as a "product recommendation engine", but usually just as a bridge troll collecting its toll.
Um.
- Advertising. "We'll get back to working on your problem in a moment, but first, a word from our sponsor, NordVPN." It's not a good fit.
- Health. Sounds like unlicensed medical practice. That will require a big bribe to Trump.
- Shopping. Can pretty much do that now, in that ChatGPT can call Google. Will Google let OpenAI call Google Search?
Maybe "business like" for Americans. In most of the world we don't spend quite so much effort glazing one another in the workplace. "That's an incredibly insightful question and really gets to the heart of the matter". No it isn't. I was shocked they didn't fix this behavior in v3.
Not a major complaint for technical work where you don't even want to do much multi turn conversation. Just an observation.
That's the only ChatGPT feature keeping me from moving to Gemini. Specifically, the ability to upload files and automatically make them available as context for a prompt.
Surely they know that they can't just keep scraping the internet to train models.
If I don't use a Microsoft product, I'd have to go out of my way to use an OpenAI service. But they don't have a specialized "service" (like anthropic and developers) either. Gemini is there by default with Google/Reddit. To retain their first-to-market advantage, they'd need to be the default in more places, or invest in models and services that cater to very specific audiences.
I think their best bet is to partner with different entities. But they lost Reddit and Twitter, and FB is doing its own thing too, so who's left? LinkedIn? School systems (but Chromebook has them beat there)? Perhaps telecoms preloading ChatGPT apps onto phones?
In my layperson's opinion, I think they have an access problem. Windows 11/Copilot (GitHub and in Windows) seems to be the main access stream, and people hate both - and they don't have branding there either, just the back end. There is no device you can buy, no service you can get, that has an OpenAI-branded thing on it as a value-added feature.
I'm sure they'll do ok, but i keep hearing they need to do a lot more than just 'ok'.
You can't beat Google on high-quality data for pretraining; at scale, that's what really matters most, both in theory and practice. Other companies like Anthropic and DeepSeek are keeping up by taking advantage of smarter RL approaches, but I just don't see anyone at OpenAI with the research credentials to do that kind of work as they all left in the last mass exodus. They have been too complacent and let much of their high-quality talent go to their competition.
But at the moment Nvidia's 75-80% gross margin is slowly killing its customers like OpenAI. Eventually Nvidia will drop its margins, because non-zero profit from OpenAI is better than the zero it'll be if OpenAI doesn't survive. It will be interesting to see if, say, 1/3 the chip cost would make OpenAI gross-margin profitable... numbers bandied about in this thread of $20B revenue with $115B cost imply they'd need 1/6 the chip cost, but I doubt those numbers are right (it's hard to get accurate dollar figures for a private company for the benefit of us armchair commenters).
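To spell out the arithmetic behind that 1/6 figure, using the thread's own rough (and unverified) numbers and the simplifying assumption that the entire cost line is chip spend:

```python
revenue = 20e9   # ~$20B annualized revenue, as cited in this thread
cost = 115e9     # ~$115B cost figure, also from this thread (unverified)

# If all cost were chips, the fraction chip prices must fall to
# for total cost to drop below revenue:
breakeven = revenue / cost  # ≈ 0.174, i.e. roughly 1/5.75 ≈ "1/6"
print(f"chips at ~1/{cost / revenue:.2f} of today's price -> breakeven")
print(f"at 1/3 chip cost: ${cost / 3 / 1e9:.1f}B spend vs ${revenue / 1e9:.0f}B revenue")
```

So under this very crude model, 1/3 chip cost still leaves roughly $38B of cost against $20B of revenue; only around 1/6 closes the gap, which is where the comment's figure comes from.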
And now they actually have competitors.
ChatGPT is a bit late now (even behind DeepSeek with DeepThink I believe)
I fear a Google dystopia. I hope DeepSeek or somebody else will counter-balance their power.
The goal was supposed to be an ethical competitor as implied by the word "Open" in their name. When Meta and the Chinese are the most ethical of the competitors, you know we're in a bad spot...
>OpenAI’s mission is to ensure that artificial general intelligence (AGI) ... benefits all of humanity.
I agree with you on the leadership.
I am not sure who OpenAI aims to please by nerfing their own product in this way. It can't be paying customers.
Even those who have zero experience with different (human) languages could've known this if they liked, from the fact that on the LMArena leaderboards, Gemini models have consistently ranked much higher in non-English languages than in English. This gap has actually shrunk a lot over time! In the 1.5 Pro days this advantage was huge, it would be like 10th in English and 2nd in many other languages.
Nevertheless, it still depends on the specific language you're targeting. Gemini isn't the winner on every single one of them. If you're only going to choose one model for use with many languages, it should be Gemini. But if the set of languages isn't too large, optimizing model selection per language is worth it.
Of course it never worked, because if he knew what he should be doing, he would be doing it already instead of hoping for spreadsheet magic to change the course.
>>There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
Sam Altman clearly didn't get the memo.
https://www.moomoo.com/news/post/62341840/why-has-openai-ini...
Last month, I used ChatGPT while in SJ. I needed a function that's only available to paying customers, and which had worked well from my home. ChatGPT refused to recognize me as a paid-up customer. I had correct login creds + ancillary identifying info, but no go. Over the course of about half an hour, ChatGPT told me in several different ways it wouldn't (not couldn't) attempt to verify my customer status.
I'm now a former ChatGPT customer.
Diversity of tech companies is an important consideration for me, one that definitely outweighs one-time issues, especially in a field where credible competition is limited.
So you experienced a bug, which happens in software. I've traveled a lot and have never had an issue with my ChatGPT subscription. I'm not doubting you, but I don't think your anecdote adds much to the conversation of OpenAI vs Google.
Altman should know better. This sends terrible signals to employees, stakeholders and customers.
You don’t solve quality problems by scrambling teams and increasing pressure.
This reeks of terrible management. I can imagine Stanford graduates grinding it out past midnight for "the mission". If any of you are reading this: don't do it. Altman is screwing you over. There are plenty of other places that won't code-red your Christmas season while sitting on hundreds of billions of dollars in cash.
It is all physics from here.
- Google has data, a lot of private data actually (YT, Gmail, Workspace, search queries... you name it)
- Google has a lot of money
- Google has top-talented AI engineers (eyeing DeepMind & Demis Hassabis's staff)
- Google has a huge userbase
With $20B in ARR and hundreds of billions in funding, would OpenAI be able to make its own remontada as Google did? I'm not sure, but it would be a long challenging journey.
I used to consider Gemini the worst of the bunch, it constantly refused to help me in the past, but not only has it improved, ChatGPT seems to have gone down the 'nerfing' road where it now flat out refuses to do what I ask it to do quite often.
But if ML, if SOTA intelligence, becomes basically a price war, won't that mean that Google (and OpenAI and Microsoft and any other big-model player) loses big? Especially Google, as the margin even Google Cloud (famously a lot lower than Google's other businesses) requires to survive has got to be sizeable.
They must be really glad to have so much competition then.
> If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
I wonder if OpenAI will start assisting Google now?
However, using the model as a multi-hop search robot, leveraging its general background knowledge to guide the research flow and interpret findings, works exceedingly well.
Training with RL to optimize research tool use and reasoning is the way forward, at least until we have proper Stateful LLMs that can effectively manage an internal memory (as in Neural Turing Machines, and such).
Or did you just misuse basic terminology about LLMs, and are now saying it misbehaved, likely because your org did something very bad with it?
Even with your intelligence you would need years to deliver something like this: https://github.com/7mind/jopa
The outcome will be better for sure, but you won't do anything like that in a couple of weeks. Even if you have a team of 10. Or 50.
And I'm not an LLM proponent. Just being an empirical realist.
My code runs in 0.11s
Gemini's code runs in 0.5s.
Boss wants an explanation. ¯\_(ツ)_/¯
At some point you'll be better off with implementing features they hallucinated. Some people with public APIs already took this approach.
TPUs vs. GPUs and why Google is positioned to win AI race in the long term
https://news.ycombinator.com/item?id=46069048
Google, Nvidia, and OpenAI
It's so telling that they're delaying these "features" because they know full well people don't want them.
>be Google
>watch regulators circle like vultures
>realize antitrust heat is rising faster than stock buybacks can hide
>notice a small lab called OpenAI making exotic tech and attracting political fascination
>calculate that nothing freezes regulators like an unpredictable new frontier
>decide to treat OpenAI as an accidental firebreak
>let them sprint ahead unchecked
>watch lawmakers panic about hypothetical robot uprisings instead of market concentration
>antitrust hearings shift from “break up the giants” to “what is AGI and should we fear it”
>Google emerges looking ancient, harmless, almost quaint
>pressure dissipates
>execute phase two: acceleration
>roll out model updates in compressed cycles
>flood the web with AI-powered services
>redefine “the internet” as “whatever Google’s infrastructure indexes”
>regulators exhausted from chasing OpenAI’s shadow
>Google walks back onto the throne, not by hiding power, but by reframing it as inevitability
>conspiracy theorists argue whether this was 5D chess or simple opportunism
>Google search trends spike for “how did this happen”
>the answer sits in plain sight:
>attention is all you need
Meanwhile, Google consolidated their AI operations under Google Deepmind and doubled down on TPUs.
The strategy "solve AGI and then solve everything else" is an all-in gamble that somehow AGI is within reach. This is not true.
IMO: the largest value creation from AGI won’t come from building a better shopping or travel assistant. The real pot of gold is in workflow / labor automation but obviously they can’t admit that openly.
If OpenAI becomes an also-ran by the time the hardware is released, this seems like a real possibility no matter how well-designed it is.
Gemini has been as good as GPT for more than a year
OpenAI still somehow gets the edge on the initial veneer of hype, and that's running thin
Competition is all you need.
The problem is, there is a whole ecosystem of businesses operating as OpenAI API wrappers, and those are gonna get screeeeewed.
> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
Truly brilliant software development management going on here. Daily update meetings and temporary staff transfers - well-known strategies for increasing velocity! It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded. Throw in a knee-jerk "daily call" (sound familiar?) for people already wading knee-deep through work and you have a perfect storm of terrible working conditions. My money is on Google, which in my opinion has not only caught up, but surpassed OpenAI with the latest iteration of its AI offerings.
OpenAI, imo, is absolutely going to crash and burn - it has absolutely underwhelming revenue and model performance compared to others and has made astronomical expenditure commitments. It's very possible that a government bailout partially covers those debts but the chance of the company surviving the burst when it has dug such a deep hole seems slim to none.
I am genuinely surprised that generally fiscally conservative and grounded people like Jensen are still accepting any of that crash risk.