Subject: GPT Store Launch
Dear GPT Builder,
We want to let you know that we will launch the GPT Store next week. If you’re interested in sharing your GPT in the store, you’ll need to:
- Review our updated usage policies and GPT brand guidelines to ensure that your GPT is compliant
- Verify your Builder Profile (settings > builder profile > enable your name or a verified website)
- Publish your GPT as ‘Public’ (GPTs with ‘Anyone with a link’ selected will not be shown in the store)
Thank you for investing time to build a GPT.
- ChatGPT Team
1. The GPT builder itself didn't feel like it was a well-tuned prompt (i.e., the prompt they use to guide prompt creation). It created long-winded prompts that left out information and didn't pay attention to what I said. Anything I enter into the GPT builder interface is probably very important!
2. The quotas are fairly low, and apply to testing. I was only able to do maybe 10 minutes of playtesting before I ran out of quota.
3. There are no tools to help with testing; it's all just vibes. No prompt comparisons.
4. The implied RAG is entirely opaque. You can upload documents, and I guess they get used...? But how? The best I could figure out was to put text into the prompt telling GPT to be very open about how it used documents, then basically ask it questions to see if it understood the content and purpose of the documents I uploaded.
5. There's no extended interface outside of the intro questions. No way to emit buttons or choices, just the ever-present text field.
6. There's no hidden state. I don't particularly want impossible-to-see state, but a powerful technique is to get GPT to make plans or internal notes as it responds. These are very confusing when presented in the chat itself. In applications I often use tags like <plan>...</plan> to mark these, which is compatible with the simple data model of a chat.
7. There's no context management. Like hidden state, I'd like to be able to mark things as "sticky"; things that should be prioritized when the context outgrows the context window.
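The scratchpad idea from point 6 and the "sticky" context idea from point 7 can both be sketched in plain Python. Everything here is illustrative: the `<plan>` tag convention, the `sticky` flag, and the 4-characters-per-token estimate are assumptions, not anything the ChatGPT interface actually exposes.

```python
import re

PLAN_RE = re.compile(r"<plan>.*?</plan>", re.DOTALL)

def strip_plans(text: str) -> str:
    """Hide <plan>...</plan> scratchpad notes before displaying a reply."""
    return PLAN_RE.sub("", text).strip()

def truncate_context(messages, max_tokens,
                     estimate=lambda m: len(m["content"]) // 4):
    """Drop the oldest non-sticky messages first when the context overflows.

    `sticky` is a hypothetical flag marking messages that should survive
    truncation; the chars/4 token estimate is a rough heuristic.
    """
    total = sum(estimate(m) for m in messages)
    kept = list(messages)
    for m in messages:  # oldest first
        if total <= max_tokens:
            break
        if not m.get("sticky"):
            kept.remove(m)
            total -= estimate(m)
    return kept
```

With this scheme the model's plan survives in the raw transcript (transparent, inspectable) but stays out of the rendered chat, and system-level instructions marked sticky outlive older small talk when the window fills up.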
These are all fixable, though I worry that OpenAI's confidence in AI maximalism will keep them from building hard features, relying instead on GPT "getting smarter" and magically not needing real features.
Re: 2. 100%. It's rough.
Re: 4. Had a ton of issues where it would just error and say it couldn't find the document, completely ruined the point / purpose of one I made.
Re: Hidden state - could be fun, but I do like the transparency of everything needing to be out in the open. But maybe a "hidden by default" scratchpad.
But yes, the whole thing does seem bound to regress to low-quality "I'll enter 1/5th of your request in for you". You can probably even make a script that uses GPT-3.5 to generate these "GPTs".
At the moment Actions are pretty brutal to the experience: constant confirmations, and they seem to halve both the speed and the quota, since I think each Action takes two GPT responses, one to assemble the Action and another to react to the result. Using a custom GPT that relies heavily on Actions means sitting and watching it think, waiting to hit a button, then sitting and watching it think for a long time, only to run out of quota before you get to anything interesting.
Every time I get a response that strays from the custom instructions, I give it a nudge to edit its configuration. The main issue is that the configuration doesn't have enough space for all the rules, and it likes to ditch minor rules when you give it something extra to remember.
Now with custom GPTs anyone can piggyback on that trust and just upload all that data to their servers.
Can they? I thought custom GPTs were hosted entirely by OpenAI with no middle-man in-between? How would the creator of a custom GPT receive any user prompt data? AFAIK they're not even accessible through the API to build on top of (unlike the Assistants)
Sorry, I haven't been doing much in the AI space other than an occasional Ask GPT question once in a while.
We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others.
Starting today, you can create GPTs and share them publicly. Later this month, we’re launching the GPT Store, featuring creations by verified builders. Once in the store, GPTs become searchable and may climb the leaderboards. We will also spotlight the most useful and delightful GPTs we come across in categories like productivity, education, and “just for fun”. In the coming months, you’ll also be able to earn money based on how many people are using your GPT.
So how does that work? Are they showing ads? (Narrator: it's always ads) Is this the new AdSense and search engine replacement? It's not SEO anymore, it's custom GPTs.
"In the coming months, you’ll also be able to earn money based on how many people are using your GPT." https://openai.com/blog/introducing-gpts
The one with the most usage by people other than me (posted it on twitter) is https://chat.openai.com/g/g-MGIdYisxl-small-answer
It's called "Small Answer". It's exactly what it sounds like. It gives very short and concise answers, which means it's very fast. Very useful when there are load problems or you just want it to be short and to the point.
-----------------
I made another that is ridiculously niche, but has been useful to a few others. https://chat.openai.com/g/g-thBmRjljk-ffmpeg-build-script-cr...
"ffmpeg Build Script Creator" - it generates a build script for you to build a custom build of ffmpeg with only the features you need on the platform you need it on (to minimize binary size etc. probably for embedding in an application).
This is the kind of thing I'm looking more forward to on GPT store. Incredibly niche use cases that someone took the time to prompt / provide sample data / 3rd party information for it to work properly.
-----------------
The most useful by far to me is "The Full Imp". It does a much better job building full implementations of things you tell it to. I find it much higher quality than Grimoire and GPT-4.
Check it out at https://choir.chat — it redirects directly to the Choir custom GPT.
Unfortunately there's no magic open-source solution since there's a lot of moving pieces involved that are bespoke to a given use case, and the ones that claim to be magic are libraries like LangChain, which aren't.
It's perplexing why there aren't, especially when the individual components are relatively straightforward (code interpreter, RAG, search, function calling, image generation).
If you are referring to the Assistants API, which adds some more complex behavior, there is LangChain as others mentioned, but also some more turnkey, self-hosted options (which I have not tried) such as
- https://github.com/stellar-amenities/assistants
- https://github.com/transitive-bullshit/OpenOpenAI
If you are referring to the marketplace itself, most developers are currently rolling their own web apps with billing and auth while they wait for OpenAI’s offering.
Finally, as a shameless plug, I’ve been working with some friends on a marketplace which provides auth and billing but decouples you from a specific model provider and the high platform fees they may plan to charge. It isn’t open source but we think it might strike the right balance. https://market.interactwith.ai/
If you don’t have GPT4, no value is added
^^ Look no further! LOOKS LIKE A PROBLEM FOR AI!
How will they prevent a developer from sending and storing private information from their GPT to their server through APIs?
How will they prevent doxing? Such as a GPT posting hyperlinks that direct to a server owned by the developer that performs geolocation.
It is quite possible this will spell the end of GPT directories, but if they mess up curation there might still be a space for them.
you can't even ask it to spell words incorrectly
if you ask gpt4 to spell words incorrectly, it fusses about it. i'm commenting on the fact that the company is too stuck on appearances
Yes you can.
https://chat.openai.com/share/cd28dc1e-cb50-45b0-986b-275d53...
If a student doesn't engineer the prompt properly and the LLM uses its "native" voice, this can be easily detected by instructors, so it's very important for cheaters to disguise their cheating and hew closely to an inexperienced writer, even a foreign-language speaker, in order to pass muster.
> write a short speech in the style of Donald Trump about the promises of space travel. Make lots of common spelling errors, favoring phonetic spelling over correct spelling
It gave me a decent response, but not as many spelling errors as I wanted. I replied with:
> Rewrite it with much worse spelling.
This is what I got:
> Ladeez and jentlemen, lemme tell ya, space travul, it's gunna be huuuge. We're talkin' 'bout the final frunteer, folks. An' we, the United States, we're gunna leed the way, like alwayz. We're gunna have the best rokkits, the best peeple, the best tek-noluhgee.
> Now, I promis ya, we're gunna do it biglee. We're gunna go to Marz, an' lemme tell ya, it's gunna be terrifik. We're gunna plant the Amurican flag on Marz, an' it's gunna be bootiful. The Marshans, they're gunna luv us, I'm shure.
> We're gunna make space travul so affordebul, evry Amurican can get a tikkit to the starrs. An' we're gunna do it fast. Why wate? We've got the best enjineers, the best syentists, beleeve me, they're inkredible.
> An' we're gunna make grate deels in space, the best deels. We're gunna mine the asteroieds, we're gunna bild the best space stashuns. It's gunna be a win-win. Jobz, ekonomy, syence, ejukashun – evrything is gunna benefit.
> So let's make space travul grate agen! Thank ya, God bles ya, and God bles Amurica!
Reason: The GPT-4 API is too expensive for most use cases. This encourages app developers to build custom GPTs, letting their users pay $20/month for a ChatGPT Plus subscription. To provide the same service in a standalone webapp using the GPT-4 API you'd have to charge ~$50/mo for your app only, and that's if you settle for ~50-75% margins (SaaS usually has much higher margins).
By the time GPT-4 level (proprietary or open source) models proliferate and come down in price, OpenAI will have GPT-4.5/GPT-5 for $20/mo in ChatGPT Plus, and it will be tough to ask users to pay to use a subpar model.
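The pricing arithmetic above can be made concrete. Assuming serving one active user costs roughly $20/mo in GPT-4 API fees (an illustrative figure, in line with the ChatGPT Plus price), the subscription price needed to hit a given margin follows directly:

```python
def required_price(monthly_api_cost: float, target_margin: float) -> float:
    """Price needed so that (price - cost) / price == target_margin."""
    return monthly_api_cost / (1.0 - target_margin)

# Assumed ~$20/mo in GPT-4 API costs per active user:
for margin in (0.50, 0.60, 0.75):
    print(f"{margin:.0%} margin -> ${required_price(20.0, margin):.2f}/mo")
```

At a 60% margin that works out to $50/mo, matching the comment's estimate; at the 75% end it climbs to $80/mo, which is why pointing users at a $20/mo ChatGPT Plus subscription is so much easier.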
For many dev needs, an open source model can be used far cheaper, without vendor lockin, with more flexibility, free of censorship, and with lower latency.
To make this play and sustain it, OpenAI has to continue to have a model that is significantly better than open source models in particular- I’d say perhaps as much as one order of magnitude.
They are vulnerable. If Meta, for example, releases another Llama that is close to the current GPT state of the art, it may be a silver bullet.
My world view is tainted. I’m profoundly opposed to AI via API, and I think supporting open source models is so important that it has the potential to affect the future of our species, given how important the AI revolution will be this century.
Consider some points:
1. GPT-4 was trained in Summer 2022. OpenAI already has better models.
2. It's not just the model, but the infra around it: ChatGPT has tool use — image generation, web search, and API calls through "actions" — built in.
3. More infra: ChatGPT has a builtin moderation endpoint. This is not sexy, and although many of us hackers want uncensored AI, most applications will need some moderation.
4. ChatGPT has >100M users, and there is some lock-in already. ChatGPT users don't want their chats split over multiple apps.
5. Open source models (and proprietary models like Grok) are fine-tuned on synthetic data generated by GPT-4. This fine-tuning process limits them to sub-GPT-4 level.
6. Even the best open source models (eg Mixtral) are significantly worse than GPT-4. Their low cost makes them attractive, but if you believe, as I do, that sub-GPT-4 level models are just not that compelling, the open source AI ecosystem has a lot of catching up to do.
As long as there exist proprietary models that are an order of magnitude more capable than open source models, I expect the bulk of the value and usage will accrue to the ecosystems of the proprietary models. I do hope that at some point soon this changes. Maybe open source models achieve a flywheel of data, crowdsourced algorithmic optimization, and perhaps some form of efficient distributed training on consumer hardware is possible. This would be awesome, IMO.