I don't know what's wrong with GPT-5 (on ChatGPT.com). On the surface, everything looks hunky-dory until you get into the details.
Sometimes it is too smart, and sometimes it is plain stupid - all within the same chat session. Sometimes it makes mistakes, other times it does not. Sometimes it remembers the context, sometimes it forgets and has to be reminded multiple times. With code, it writes simple code or complex code, buggy code or working code - and it is hard to predict which it will do when.
Problems often have multiple solutions, and GPT-5 seems to take that philosophy to heart. Give it the same problem twice, and it will come up with different solutions - though not reliably every time.
It has become quite difficult to use it (or 'vibe' with it) - compared to Claude models or GPT-4o.
The GPT-4o model was not better - but at least it was predictable. There was a subtle pattern to its behavior: I knew when it would work, when it would not, what mistakes it would make, and how to coax it into working. And when writing prose, GPT-4o would usually sound like a philosophical poet.
First, GPT-5 disappointed after the hyped launch. And now this.
This experience is mostly based on ChatGPT, and a bit on GitHub Copilot. With Copilot, I stick to Claude Sonnet 4.
The Canvas also feels buggy. When you ask it to edit one part of the text, it sometimes deletes other sections as well. That said, I do not use Canvas that much.
I thought it might be due to auto-routing, and tried to fix it by explicitly selecting the GPT-5 variant - no luck.
Has anybody else faced the same problem? And how did you resolve it?
But then I thought, let me erase everything and start with a clean slate - because stored context, memories, and the resulting bias can spill over into new chats.
That prompted me to do what I do with Google and YouTube - clear all history.
I took my backups. Then I cleared my chats, the library, and the memories, removed all shared links, and deleted my projects. I logged out of all devices, too. (I did this 2-3 times - you'll see why.)
And then I got back in and asked, "Do you remember me?"
And that thing still remembers every major point about me, including some of the project code names I discussed with it. But there were a couple it did not remember at all - as if it had selectively chosen to forget them. Its memory seems faded on particular questions, and it mixes what it recalls with generic knowledge.
I've tried the process multiple times, but I still can't get it to forget me. The only thing I did not try was to delete my account.
I even asked it to forget me - I know it's not capable of erasing its own memories, but it was worth a try.
Not that I must make it forget, but this is pretty odd and may not be as simple as clearing cookies and deleting the browser's history.
Could it be related to some server-side memory state (I mean physical memory, like RAM or a cache) that gets cleared once I stop using the model for a few hours?
Any ideas?