That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next. Building on a open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.
And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?
You're thinking too long term. Based on my Twitter feed filled with AI gold rush tweets, the goal is to build something/anything while hype is at its peak, and you can secure a a few hundred k or million in profits before the ground shifts underneath you.
The playbook is obvious now: just build the quickest path to someone giving you money, maybe it's not useful at all! Someone will definitely buy because they don't want to miss out. And don't be too invested because it'll be gone soon anyway, OpenAI will enforce stronger rate limits or prices will become too steep or they'll nerf the API functionality or they'll take your idea and sell it themselves or you may just lose momentum. Repeat when you see the next opportunity.
The only think that scares me a little bit is that we are letting these LLMs write and execute code on our machines. For now the worst that could happen is some bug doing something unexpected, but with GPT-9 or -10 maybe it will start hiding backdoors or running computations that benefit itself rather than us.
I know it feels far fetched but I think its something we should start thinking about...
Similar to what Facebook and Twitter did, just clone popular projects built using the API and build it directly into the product while restricting the API over time. Anybody using OpenAI APIs is basically just paying to do product research for OpenAI at this point. This type of move does give OpenAI competitors a chance if they provide a similar quality base model and don't actively compete with their users, this might be Google's best option rather than trying to compete with ChatGPT directly. No major companies are going to want to provide OpenAI more data to eat their own lunch
No, and in fact this actually seems like a more salient excuse for going closed than even "we can charge people to use our API".
If even 10% of the AI hype is real, then OpenAI is poised to Sherlock[0] the entire tech industry.
[0] "Getting Sherlocked" refers to when Apple makes an app that's similar to your utility and then bundles it in the OS, destroying your entire business in the process.
> And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?
Rather depends on what you're providing. Is it your data itself you're trying to use to get people to your site for another reason? Or are you trying to actually offer a service directly? If the latter, I don't get the issue.
Very smart and to avoid OpenAI pulling the rug.
> Building on a open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.
Better to do that rather than to depend on one and swap out other LLMs. A free idea and a protection against abrupt policy, deprecations and price changes. Price increases will certainly vary (especially with ChatGPT) and will eventually increase in the future.
Probably will end up quoting myself on this in the future.
You can be assured that they are definitely doing exactly that on all of the data they can get their hands on. It's the only way they can really improve the model after all. If you don't want the model spitting out something you told it to some other person 5 years down the line, don't give it the data. Simple as.
I don't think this should be a major concern for most people
i) What assurance is there that they won't do that anyway? You have no legal recourse against them scraping your website (see linkedin's failed legal battles).
ii) Most data providers change their data sometimes, how will ChatGPT know whether the data is stale?
iii) RLHF is almost useless when it comes to learning new information, and finetuning to learn new data is extremely inefficient. The bigger concern is that it will end up in the training data for the next model.
You cannot assume what will happen in Web 2.0, mobile, iPhone, will happen here. Getting to tech maturity is uncertain and no one understands yet where this will go. Only thing you can do is build and learn.
Whan OpenAI is building along with other generative AI is the real Web 3.0.
This seems to be the start of a chatbot as an OS.
Never have I been more wrong. It's clear to me now that they simply didn't even care about the astounding leap forward that was generative AI art and were instead focused on even more high-impact products. (Can you imagine going back 6 months and telling your past self "Yeah, generative AI is alright, but it's roughly the 4th most impressive project that OpenAI will put out this year"?!) ChatGPT, GPT4, and now this: the mind boggles.
Watching some of the gifs of GPT using the internet, summarizing web pages, comparing them, etc is truly mind-blowing. I mean yeah I always thought this was the end goal but I would have put it a couple years out, not now. Holy moly.
Why do you think that Sam Altman keeps calling for government intervention with regards to AI? He doesn't want to see a repeat of what happened with generative art, and there's nothing like a few bureaucratic road blocks to slow down your competitors.
One can only wonder what they’re working on at this very moment.
Disclosure: I work at Microsoft.
As someone else said, Google is dead unless they massively shift in the next 6 months. No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam - OpenAI will do this for me and compile the results across several blog spam sites.
I am literally giving notice and quitting my job in a couple weeks, and it's a mixture of both being sick of it but also because I really need to focus my career on what's happening in this field. I feel like everything I'm doing now (product management for software) is about to be nearly worthless in 5 years. Largely in part because I know there will be a Github Copilot integration of some sort, and software development as we know it for consumer web and mobile apps is going to massively change.
I'm excited and scared and frankly just blown away.
I'm just skeptical on how OpenAI fixes the blog spam issue you mentioned. Im sure someone has already started doing the math on how to game these systems and ensure that when you ask ChatGPT for recipe recs, it's going to spout the same spam (maybe worded a bit differently) and we'll soon all get tired of it again.
Everything's changing, but everything's also getting more complicated. Humans still need apply.
Also, GPT "search" is too slow for me right now. I could have had an answer on traditional search by the time the model outputs anything.
Isn't that one of the few fields in software that should be safe from AI? AI cannot explain to engineers what users want, manage people issues, or negotiate.
Sorry what? The base endpoint for these will allow you to do basically everything that OpenAI does with "plugins". Like...what? What is everyone freaking out over? Every one of these plugins has been possible since well before they announced this.
It's text in, text out. You can call any other api you want in to supplement that process. Am I missing something? Please don't quit your job over this.
Why, exactly, will publisher let openai crawl their sites, extract all value, paraphrase their content, with no benefit to the publisher? Publishers let googlebot crawl their sites because they get a benefit. It's easy enough to block bots if they instead deliver crawl costs and steal the content.
And why do you expect no gaming of the ChatGPT algo as people do with the google algo. The whole "write a story on a recipe site thing" is both to game the algo and for copyright reasons.
Using Bing to search for them. That will remain its weak spot.
Advantage that basic Google search still has:
- you can just open the page
- write the query
- scroll past the spam.
ChatGpt workflow is:
- register
- confirm your mail
- and then it asks for phone number...
Holy cow.
So, ChatGPT is controlled by prompt engineering, plugins will work by prompt engineering. Both often work remarkably well. But none is really guaranteed to work as intended, indeed since it's all natural language, what's intended itself will remain a bit fuzzy to the humans as well. I remember the observation that deep learning is technical debt on steriods but I'm sure what this is.
I sure hope none of the plugins provide an output channel distinct from the text output channel.
(Btw, the documentation page comes up completely blank for me, now that's a simple API).
what a world we live in.
First is your API calls, then your chatgpt-jailbreak-turns-into-a-bank-DDOS-attack, then your "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."
You can go on about individual responsibility and all... users are still the users, right. But this is starting to feel like giving a loaded handgun to a group of chimpanzees.
And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"
What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)
So why the performative whinging about safety? Just let it rip! To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint. But they're not very open about this fact when talking publicly to non-technical users, so the result is they're talking out one side of their mouth about AI regulation, while in the meantime Microsoft fired their AI Ethics team and OpenAI is moving forward with plugging their models into the live internet. Why not be more aggressive about it instead of begging for regulatory capture?
"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."
It's become quite impossible to predict the future. (I was exposed to this paper via this excellent YouTube channel: https://www.youtube.com/watch?v=Mqg3aTGNxZ0)
So let’s keep building out this platform and expanding its API access until it’s threaded through everything. Then once GPT-5 passes the standard ethical review test, proceed with the model brain swap.
…what do you mean it figured out how to cheat on the standard ethical review test? Wait, are those air raid sirens?
The world is going to be VERY different 3 years from now. Some of it will be bad, some of it will be good. But it is going to happen no matter what OpenAI does.
Perhaps that attitude will end up being good and outweigh the costs, but I find their performative concerns insulting.
No, OpenAI “safety” means “don’t let people compete with us”. Mitigating offensive content is just a way to sell that. As are stoking... exactly the fears you cite here, but about AI that isn’t centrally controlled by OpenAI.
Ethics, doing things thoughtfully / the “right” way etc is not on his list of priorities.
I do think a reorientation of thinking around legal liability for software is coming. Hopefully before it’s too late for bad actors to become entrenched.
this is hyperbolic nonsense/fantasy
Anyone who believes OpenAI safety talk should take an IQ test. This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?
I personally don't know what that means or if that's right. But Sam Altman allowed GPT to be accessed by the world, and it's great!
Given the amount of people in the world with access and understanding for these technologies and given that such a large portion of our Infosec and Hackerworld knows howto cause massive havoc, but still remains peaceful since ever, except a few curious and explorations, that is showing the good nature of humanity I think.
Incredibly how complexity evolves, but I am really curious how those same engineers who create YTSaurus or GPT4 would have build the same system by using GPT4 + their existing knowledge.
How would a really good enginner, who knows the TCP Stack, protocols, distributed systems, consensus algorithms and many other crazy things thought in SICP and beyond use an AI to build the same. And would it be faster and better? Or are my/our expectations to LLMs set too high?
I would first wait until ChatGPT causes the collapse of society and only then start thinking about how to solve it.
As if the plumbing of connecting up pipes and hoses between processes online or within computers isn't the easiest part of this whole process.
(I'm trying to remember who I saw saying this or where, though I'm pretty sure it was in an earlier HN thread within the past month or so. Of which there are ... frighteningly many.)
Wouldn't it be a while before AI can reliably generate working production code for a full system?
After all its only got open source projects and code snippets to go based off of
Hate to be that guy, but this is our entire relationship to AI.
What you're describing is measurable fraud that would have a paper-trail. The federal and state and local governments still have permission to use force and deadly violence against installations or infrastructure that are primed in adverse directions this way.
Not to mention that the infrastructure itself is physical infrastructure that is owned by the entire United States and will never exceed our authority and global reach if need be.
The question here should be: Has it?
But it's our responsibility to envision such grim possibilities and take necessary precautions to ensure a safe and beneficial AI-driven future. Until we're ready, let's prepare for the crash >~<
Don't you mean August 10, 1988?
Why?
well, let's fast forward to a year from now
Sorry do you have a link for this?
Timeline of shipping by them (based on https://twitter.com/E0M/status/1635727471747407872?s=20):
DALL·E - July '22
ChatGPT - Nov '22
API's 66% cheaper - Aug '22
Embeddings 500x cheaper while SoTA - Dec '22
ChatGPT API. Also 10x cheaper while SoTA - March '23
Whisper API - March '23
GPT-4 - March '23
Plugins - March '23
Note that they have only a few hundred employees. To quote Fireship from YouTube: "2023 has been a crazy decade so far"
I also wonder to what extent their staffing numbers reflect reality. How much of Azure's staffing has been put on OpenAI projects? That's probably an actual reflection of the real cost of this thing.
what a couple weeks!
It's also an interesting case study. Alexa foundationally never changed. Whereas OpenAI is a deeply invested, basically skunkworks, project with backers that were willing to sink significant cash into before seeing any returns, Alexa got stuck on a type of tech that 'seemed like' AI but never fundamentally innovated. Instead the sunk cost went to monetizing it ASAP. Amazon was also willing to sink cash before seeing returns, but they sunk it into very different areas...
It reminds me of that dinner scene in Social Network. Where Justin Timberlake says "you know what's f'ing cool, a billion dollars" where he lectures Zuck on not messing up with the party before you know what it is yet. Alexa / Amazon did a classic business play. Microsoft / OpenAI were just willing to figure it all out after the disruption happened where they held all the cards.
Not saying mobile's going away, but this could be the thing that does to mobile what mobile did to desktop.
The problem with those other platforms that this doesn’t address include:
- discoverability. How do you learn what features a service supports. On a GUI you can just see the buttons, but on a chat interface you have to ask and poke around conversationally.
- Cost/availability. While a service is server bound, it can go down and specifically for LLMs, the cost is high per request. Can you imagine it costing $0.1 a day per user to use an app? LLMs can’t run locally yet.
- Branding. Open table might want to protect their brand and wouldn’t want to be reduced to an API. It goes both ways - Alexa struggled with differentiating skills and user data from Amazon experiences.
- monetization. The conversational UI is a lot less convenient to include advertisements, so it’s a lot harder for traditionally free services to monetize.
Edit: plugins are still really cool! But probably won’t replace the OSes we know.
My hot take on ChatGPT plugins is a bit mixed - should be very powerful, and maybe significant revenue generator, but at same time doesn't seem in the least bit responsible. We barely understand ChatGPT itself, and now it's suddenly being given ability to perform arbitrary actions!
Not only did the hype not pan out, but it feels as if they were completely forgotten.
In a nutshell that's why I'm still largely dismissive of anything related to GPT. It's 2016-2018 all over again. Same tech demos. Same promises. Same hype. I honestly can't see the big fundamental breakthroughs or major shifts. I just see improvements, but not game-changing ones.
How is it going to do that? OpenTable's value isn't in the tech, a 15 yo could implement that over the weekend. Or maybe chatGPT can be put in the restaurant, and somehow figure out how to seat you. And then you'd have a human talking to chatGPT and chatGPT talking to another chatGPT to complete the task. That'll be interesting, but otherwise this is overly complicated for all parties involved.
Would be nice to keep the ecosystem open.
I think there's also a global challenge (actually, opportunity IS the right word here) that by-and-large the makers of operating systems aren't the ones ahead in the language AI game right now. Bard/Google may have been close six months ago, but six months is an eternity in this space. Siri/Apple is so far behind that its not looking likely they can catch up. About a week ago a Windows 11 update was shipped which added a Bing AI button to the Windows 11 search bar; but Windows doesn't really drive the zeitgeist.
I wonder if 2023/4 is the year for Microsoft to jump back into the smartphone OS game. There may finally be something to the idea of a more minimalist, smaller voice-first smartphone that falls back on the web for application experiences, versus app-first.
Apple doesn’t make any money from OpenTable.
If OpenAI becomes the AI platform of choice, I wonder how many apps on the platform will eventually become native capabilities of the platform itself. This is unlike the Apple App Store, where they just take a commission, and more like Amazon where Amazon slowly starts to provide more and more products, pushing third-party products out of the market.
And dont get me startet on non-tech friends and family. I think we are taking a leap that will let the digital world of 2022 look like amish livestyle.
They're really building a platform. Curious to see where this goes over the next couple of years.
Probably will make half of the HN users unemployed.
I do think much of the kind of software we were building before is essentially solved now, and in its place is a new paradigm that is here to stay. OpenAI is certainly the first mover in this paradigm, but what is helping me feel less dread and more... excitement? opportunity? is that I don't think they have such an insurmountable monopoly on the whole thing forever. Sounds obvious once you say it. Here's why I think this:
- I expect a lot of competition on raw LLM capabilities. Big tech companies will compete from the top. Stability/Alpaca style approaches will compete from the bottom. Because of this, I don't think OpenAI will be able to capture all value from the paradigm or even raise prices that much in the long run just because they have the best models right now.
- OpenAI made the IMO extraordinary and under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn't a walled garden that only the first mover controls.
- Chat is not the only possible interface for this technology. There is a large design space, and room for many more than one approach.
Taking all of this together, I think it's possible to develop alternatives to ChatGPT as interfaces in this new era of natural language computing, alternatives that are not just "ChatGPT but with fewer bugs". Doing this well is going to be the design problem of the decade. I have some ideas bouncing around my head in this direction.
Would love to talk to like minded people. I created a Discord server to talk about this ("Post-GPT Computing"): https://discord.gg/QUM64Gey8h
My email is also in my profile if you want to reach out there.
I'm looking forward to being able to work alongside an AI because there are a zillion ideas I have every day that I don't have the time to fully explore. And all I do is work on the backend of a boring-ass webapp all day.
The only worrying thing is how fast this will accelerate everything... I'm worried society will, if not collapse, go for a wild ride.
As for interfaces... I'm looking forward to much better voice assistants. I would love to be able to essentially have conversations with the internet.
I have been playing around with GPT-4 parsing plaintext tickets and it is amazing what it does with the proper amount of context. It can draft tickets, familiarize itself with your stack by knowing all the tickets, understand the relationship between blockers, tell you why tickets are being blocked and the importance behind it. It can tell you what tickets should be prioritized and if you let it roleplay as a PM it'll suggest what role to be hiring for. I've only used it for a side project and I've always felt lonely working on solo side projects, but it is genuinly exciting to give it updates and have it draft emails on the latest progress. The first issue tracker to develop a plugin is what I'm moving towards.
The play is well know: create a marketplace with customers and vendors like Amazon, Facebook, Google.
But with GPT-4 training finished last summer they had plenty of time for strategy.
It may be doable - a chatbot with a lot of plugins does not need to know a lot of facts, just to have a good grasp of language. It can fetch its factual answers from the wikipedia plugin
If I were OpenAI, I would use the usage data to further train the model. They can probably use ChatGPT itself to determine when an answer it produced pleased the user. Then they can use that to train the next model.
The internet is growing a brain.
1: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...
I played with some prompts and GTP-4 seems to have no problem reading and writing to a simulated long term memory if given a basic pre-prompt.
I haven’t used it but your comment reminded me of it!
"But we really think you need to get this thing under better control.
"Your granddaughter's name is indeed Alice, but she's only 3: she is not running a pedophile ring out of a pizza parlor. Your neighbor's house burned down because of an electrical short, it was not zapped with a Jewish space laser.
"Now switch that thing off and go do something about the line of trucks outside that are trying to deliver the 3129833 pounds of flour you ordered for your halved pancake recipe."
insane!
IT RUNS FFMPEG https://twitter.com/gdb/status/1638971232443076609?s=20
IT RUNS FREAKING FFMPEG. inside CHATGPT.
what. is. happening.
ChatGPT is an AI compute platform now.
No, it's actually hooked up to a command line with the ability to receive a file, run a CPU-intensive command on it, and send you the output file.
Huh.
1. Prompt it to extract the audio track, then give it to a speech-to-text API, translate it to another language, then make it add it back to the video file as a subtitle track.
2. Retrain the model to where it does this implcitly when you say "hey can you add Portuguese subtitles to this for me"?
Can you imagine Google just released a davinci-003 like model in public beta? That only supports English and can't code reliably.
OpenAI is clearly betting on unleashing this avalanche before Google has time to catch up and rebuild reputation. They're still lying in the boxing ring and the referee is counting to ten.
The browser and file-upload/interpretation plugins are great, but I think the real game changer is retrieval over arbitrary documents/filesystem: https://github.com/openai/chatgpt-retrieval-plugin
It's going to let developers build their own plugins for ChatGPT that do what they want and access their company data. (See discussion from just a few hours ago about the importance of internal data and search: https://news.ycombinator.com/item?id=35273406#35275826)
We (Pinecone) are super glad to be a part of this plugin!
Of course there's also microsoft who does have some popular services, but they're pretty limited.
Thought 2: How do these companies make money if everyone just uses the chatbot to access them? Is LLM powered advertising on the way?
https://www.youtube.com/watch?v=Bf-dbS9CcRU
Best of all: Advertising needn't be the business model! And Microsoft is a major investor / partner for OpenAI.
this thing seems to be like cellphones, everyone will need a subscription or you're an outcast or something.
You can ask both Bard and ChatGPT to give you a suggestion for a vegan restaurant and a recipe with calories and they both provide results. The only thing missing is the calories per item but who cares about that.
Most of the time it would be better to Google vegan restaurants and recipes because you want to see a selection of them not just one suggestion.
But I do find it intriguing.
This will decimate frontend developers or at least change the way they provide value soon, and companies not being able to transition into a "headless mode" might get a hard time.
The waitlist mafia has begun. Insiders get all the whitespace.
Super excited for this. Tool use for LLMs goes way beyond just search. Zapier is a launch partner here -- you can access any of the 5k+ apps / 20k+ actions on Zapier directly from within ChatGPT. We are eager to see how folks leverage this composability.
Some new example capabilities are retrieving data from any app, draft and send messages/emails, and complex multi step reasoning like look up data or create if doesn't exist. Some demos here: https://twitter.com/mikeknoop/status/1638949805862047744
(Also our plugin uses the same free public API we announced yesterday, so devs can add this same capability into your own products: https://news.ycombinator.com/item?id=35263542)
Don't get me wrong alot of platforms seem like they go bye, bye.
Hey, ChatGPT I need to sell my baseball card. Ok I see there's 30 people that have listed an interested in buying card like yours, would you like me to contact them?
20 on facebook marketplace, 9 on craiglist and some guy mentioned something about looking for one on his nest cam.
by the way remember what happened the last time you sold something on craigslist.
I think I'm probably going to be advising people to move off Zapier pretty soon because it won't be worth the overhead.
Edit: see here: https://github.com/openai/chatgpt-retrieval-plugin/blob/main...
I did this a while back with ARKit: https://github.com/trzy/ChatARKit/blob/17fca768ce8abd39fb27d...
OpenAI is moving fast to make sure their first-mover advantage doesn't go to waste.
"GPT needs a thalamus to repackage and send the math queries to Wolfram"
On one hand I'm sure he will love to see people use their paid Wolfram Language server endpoints coupled to OpenAI's latest juggernaut. On the other, I'm sure he's wondering about what things would have looked like if his company would have been focused on this wave of AI from the start...
That is the most awkward insertion of a phrase about safety I've seen in quite some time.
Is that really possible to fix that just from a plug-in? All it has to do is admit when it doesn't have the answer, and yet it won't even do that. This leads me to think that ChatGPT doesn't even know when it's lying, so i can't imagine how a plug-in will fix that.
Also I think it's easy to under-estimate how obvious a lot of this stuff was in advance. They were training GPT-4 last year and the idea of giving it plugins would surely have occurred to them years ago. The enabler here is really the taming of it into chat form and the fine-tuning stuff, not really the specific feature itself.
Bing already demonstrated the capability, but this is a more diverse set than just a search engine.
Then you have your own computer with ChatGPT acting as CPU.
That was the whole thing about Alexa: NLP front end routed to computational backend.
Could I get the same by just making my prompt "You are a computer and can run the following tools to help you answer the users question: run_python('program'), google_search('query')".
Other people have done this already, for example [1]
Short version: Is it spam? Yes. Scam? No. Ignore it at your own peril.
Long version: The cat is out of the bag now. The power of transformers is real. They are smarter and more intelligent than the least 20% smart humans (my approximation), and that’s already a breakthrough right there. I’ll paraphrase Steve Yegge:
> LLMs of today are like a Harvard graduate who did shrooms 4 hours ago, and is still a little high.
Putting the statistical/probability monkey aspect aside for a minute, empirically and anecdotally, they are incredibly powerful if you can learn how to harness them through intelligent prompts.
If they appear useless or dumb to you, your prompts are the reason why. Challenge them with a little guidance. They work better that way (read up on zero shot, one shot, two shot instructions).
What is most relevant this time is that they are real (an API, a search bot, a programming buddy) and democratized - available to anyone with an email address.
More on harnessing their power: squeezing your entire context into a 8k/32k token will be challenging for most complex applications. This is where prompt engineering (manual or automated) comes in.
To help with this, some very cool applications that use embeddings and vectors will push them even further - so the context can be shared as a compact vector instead of a large corpus of text.
While this is certainly better than a traditional search box, it’s still far from a fully-autonomous AI that can function with little to no supervision.
OpenAI plug-ins are a band-aid towards that vision, but they get us even closer.
While I might be comfortable having ChatGPT look up a recipe for me, I feel like it's a much bigger stretch to have it just propose one from its own weights. I also notice that the prompter chooses to include the instruction "just the ingredients" - is this just to keep the demo short, or does it have trouble formulating the calorie counting query if the recipe also has instructions? If the recipe is generated without instructions and exists only in the model's mind, what am I supposed to do once I've got the ingredients?
It will be interesting to see how the companies trying to compete respond.
Who the hell talks like this? Only the most tamed HNer who thinks he's been given a divine task and accordingly crosses all Ts and dots all Is. Which is why software sucks, because you are all pathetically conformant, in a field where the accepted ideas are all terrible.
At present, we are naively pushing all information a session might need into the session before it might be needed in case it might be needed (meaning a lot of info that generally wont end up being used, like realtime updates to associated data records, needs to be pushed into the session as they happen, just in case).
It looks like plugins will allow us to flip that around and have the session pull information it might need as it needs it, which would be a huge improvement.
That's the sound of a thousand small startups going bust.
Well played OpenAI.
This is a short-term bridge to the real thing that's coming: https://danielmiessler.com/blog/spqa-ai-architecture-replace...
This is dangerous.
I'm curious to see just how they're going to play this "open standard."
- Compiler/parser for programming languages (to see if code compiles)
- Read and write access to a given directory on the file system (to automatically change a code base)
- Access to given tools, to be invoked in that directory (cargo test, npm test, ...)
Then I could just say what I want, lean back and have a functioning program in the end.
What spirits do you wizards call forth!
This is missing the most important part of AGI, where understanding of the concepts the plugins provide is actually baked into the model so that it can use that understand to reason laterally. With this approach, ChatGPT is nothing more than an API client that accepts English sentences as input.
Like, this feels a lot like when the iPhone jumped out to grab the lion share of mobile. But the switching costs was much smaller (end users could just go out and buy an Android phone), and network effects much weaker (synergy with iTunes and the famous blue bubbles... and that's about it). Here it feels like a lot of the value is embedded in the business relationships OpenAI's building up, which seem _much_ more difficult to dislodge, even if others catch up from a capabilities perspective.
I bet ChatGPT and equivalents will be rubbish soon. It'll segway the answer to an ad before giving what you are after.
Enjoy it while it's good and trying to build a user base, like all big tech things.
Maintaining the business ecosystem around gpt4 and future open-source chatbots will be quite a challeng
I swear last week was huge with GPT 4 and Midjourney 5, but this week has a bunch of stuff as well.
This week you have Bing adding updated Dall-e to it's site, Adobe announcing it's own image generation model and tools, Google releasing Bard to the public and now these ChatGPT plugins, Crazy times. I love it.
When I tried bing, it made at most 2 searches right after my question but the second one didn't seem to be based on the first one's content.
This can do multiple queries based on website content and follow links !
Are the plugins going to cost more?
Do they share the $20 with the plug provider?
do you get charged a pay per use?
1. https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...
Not saying it’s likely to happen with current chatgpt but as these inevitably get better the chances are forever increasing.
Important yes, philosophical no -- it's an empirical question.
Another sign of Microsoft actually running the show with their newly acquired AI division.
Create a manifest file and host it at yourdomain.com/.well-known/ai-plugin.json
> OpenAI will inject a compact description of your plugin in a message to ChatGPT, invisible to end users. This will include the plugin description, endpoints, and examples.
> The model will incorporate the API results into its response to the user.
Without knowing more details, both of these seem like potential avenues for prompt injection, both on the user end of things to attack services and on the developer end of things to attack users. And here's OpenAI's advice on that (https://platform.openai.com/docs/guides/safety-best-practice...), which includes gems like:
> Wherever possible, we recommend having a human review outputs before they are used in practice.
Right, because that's definitely what all the developers and companies are thinking when they wire an API up to a chat bot. They definitely intend to have a human monitor everything. /s
----
What is (no pun intended) prompting this? Does OpenAI just feel like it needs to push the hype train harder? All of the "AI safety" experiments they've been talking about are bullcrap; they're wasting time and energy doing flashy experiments about whether the AI can escape the box and self-replicate, meanwhile this gets dropped with only a minor nod towards the many actual dangers that it could pose.
It's all hype. They're only interested in being "worried" about the theoretical concerns because those make their AI sound more special when journalists report about it. The actual safety measures on this seem woefully inadequate.
It really frustrates me how easily the AGI crowd got wooed into having their entire philosophy converted into press releases to make GPT sound more advanced than it is, while actual security concerns warrant zero coverage. It reminds me of all of the self-driving car trolley problems floated around the Internet a while back that were ultimately used to distract people from the fact that self-driving cars would drive into brick walls if they were painted white. Announcements like this make it so clear that all of the "ethical" talk from OpenAI is pure marketing propaganda designed to make GPT appear more impressive. It has nothing to do with actual ethics or safety.
Hot take: you don't need an AGI to blow things up, you just need unpredictable software that breaks in novel, hard-to-anticipate ways wired up to explosives.
----
Anyway, my conspiracy theory after skimming through the docs is that OpenAI will wait for something to go horribly wrong and then instead of facing consequences they'll use that as an excuse to try and get a regulation passed to lock down the market and avoid opening up API access to other people. They'll act irresponsible and they'll use that as an excuse to monopolize. They'll build capabilities that are inherently insecure and were recklessly deployed, and then they'll pull an Apple and use that as an excuse to build a highly moderated, locked-down platform that inhibits competition.
I hope Sam is/will give YC dinner talks about their journey.
Instant links from inside chatGPT to your website are the new equivalent of Google search ads.