I would like to see a memory provider/system that allows us to own this data and put OpenAI et al on the customer end. They should be paying US for that.
Oof, I wish I had the optimism to even consider this a realistic option. If we thought social media power was a threat to democracy, wait til we see what AI companies do.
My email and work documents are obviously important if I'm querying for information about them, but that is self-evident and also not a moat (I could grant another tool access to these things).
Computational efficiency is a moat. If Google can provide an AI response for $0.05 of infrastructure and electricity, but it takes OpenAI $0.57, that's bad news for OpenAI.
If that’s the case, it mostly seems like you’re not working on sufficiently complex problems to find the AI useful. Or you’re keeping that complexity in your head and only bringing the AI in “as a consultant,” as it were.
If that’s the case, I recommend trying to organize your project with the AI from the start. I’ve had a lot of productive benefits from treating ChatGPT folders as ongoing conversations about a particular project, questions I have on it, random ideas, etc. Memory is absolutely crucial for my use case.
No. Iterative interrogation is the main way these tools are used, hence "Chat" GPT. It is rare that I'm revising queries from a week ago.
More useful AI context comes from permanent (and portable) artifacts like a code repo. Having a 2 million token context window is much more useful than being able to continue a chat session from a week or more ago.
The interaction data is the actually interesting bit, but there's no guarantee that it's the refinement that's most needed.
Or it keeps telling me one thing or another:
"X didn't work, here's the output"
OK try "X"
ok buddy.
The moat for AI products will be, as is so often the case, user data. In this case, your personal history of interactions with a given AI.
The author predicts a land grab where AI companies try to scoop up as much personal data on you as they can as fast as they can, which renders them significantly more personalized to you than other AIs. That's the moat.
Analogous to Facebook managing to scoop up your entire social graph. Other social networks popped up, but there was no incentive to use them because you didn't have your social graph set up there, and it was really hard to rebuild.
When the author mentions "memory," what does that mean? Is this about RAG-style memory? I'm not sure that's a "moat."
You can see this in the reddit memes that say things like “open chatgpt and ask it for your 5 biggest blind spots right now. Mind. Blown.”
Those who know it’s a tool call - plus some clever algorithms governing what the tool returns - could not be rolling their eyes harder. People who know what’s up will keep pasting things into new chats, and keep using delete and “forget memories” buttons. Maybe even multiple accounts.
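For anyone who hasn't peeked behind the curtain, "memory" as a tool call can be sketched in a few lines. This is a toy illustration, not any vendor's actual implementation: the model remembers nothing itself; an opaque retrieval function decides which stored snippets get injected into the prompt. Real systems use embeddings plus recency/importance weighting rather than the naive word-overlap scoring shown here, but the shape is the same.

```python
def save_memory(store: list[str], snippet: str) -> None:
    """Append a snippet to the memory store (in reality, a vector DB)."""
    store.append(snippet)

def recall(store: list[str], query: str, k: int = 2) -> list[str]:
    """Rank stored snippets by naive word overlap with the query and
    return the top k. This scoring is the 'clever algorithm' slot:
    whatever it returns IS what the model appears to remember."""
    q = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda s: len(q & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

memories: list[str] = []
save_memory(memories, "User prefers Rust and hates Electron apps")
save_memory(memories, "User is building a CLI tool for log parsing")
save_memory(memories, "User's cat is named Pixel")

# The 'memory' the model sees is just whatever this returns:
context = recall(memories, "help me with my Rust CLI project")
```

The point of the sketch: the user never sees the ranking, the cutoff `k`, or what got silently dropped, which is exactly the "unknown recall" problem.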
But increasingly that’ll be “the old slow way”. You can see it in the comments here - people are grateful not to have to explain the stack again. They don’t want a blank unprimed conversation - and rather than copy-pasting a priming prompt (or having the model write a Cursor rule) they’d rather abdicate control over the AI’s behavior to an opaque priming process and a tool with unknown recall.
But everyone else is doing it, so a great many eye-rollers will give up and be swept up too.
AI memory has already captured the type of person who obeys instructions in reddit memes. Next is normies (your parents) who will find it pleasant the AI seems to know them well. They won’t understand how creepy it is, nor how much power is in the hands of someone who can train an AI on their chats. And experts will do their best to make the AI forget with delete buttons and the like; but even they will need to let the tools remember their patterns just to keep up with society.
Ergo, lock-in & network effects.
So yes, it’s a pretty reasonable prediction.
But I am using ChatGPT mostly as a way to flesh out ideas, question my assumptions, and do similar things, so YMMV.