> https://beta.openai.com/docs/going-live
> https://beta.openai.com/docs/usage-guidelines/use-case-guide...
Use GPT-Neo. Use Hugging Face. Use Colab, bare metal, or any cloud provider. OpenAI's offering brings literally nothing to the table that doesn't already exist. Their publications are good (albeit missing important details), but everyone working there was publishing anyway, so it's not like this research wouldn't happen without OpenAI existing. But that research would probably be more open without OpenAI.
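For what it's worth, running GPT-Neo through Hugging Face is only a few lines. A minimal sketch, assuming you just want sampling out of the box (the model size and generation settings here are illustrative choices, not anything official):

    # Minimal sketch: generate text with GPT-Neo via the Hugging Face
    # transformers pipeline. Model size and sampling parameters are
    # illustrative; pick whatever fits your hardware.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

    prompt = "Open source language models let you"
    out = generator(prompt, max_length=60, do_sample=True, temperature=0.8)
    print(out[0]["generated_text"])

The 2.7B checkpoint (EleutherAI/gpt-neo-2.7B) works the same way if you have the VRAM, and the whole thing runs fine on a Colab GPU.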
How full of yourself do you have to be to say something like, "Oh, we can't share this knowledge because it's too dangerous"?
GPT-J is 6B; the biggest version of GPT-3 available through the API is 175B. Those two models are nothing alike in terms of quality. Even the 6B version of GPT-3 (curie) is better quality than GPT-J, IIRC.
So if you need better quality than GPT-J there are basically no alternatives.
And even if 6B is enough for you but you care about latency, OpenAI has the best inference runtime by far, and you are not going to replicate that on your own cloud/bare metal, unless your scenario specifically benefits from the API and your other services being colocated.
Edit: I forgot about finetuning. OpenAI gives you the ability to finetune all of their variants. Maybe you already have the knowledge to finetune something like GPT-J yourself, but I would guess that most potential users of the API do not have it.
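For reference, starting a fine-tune through their API is roughly this with the 2021-era openai Python client; the API key, file name, and model below are placeholders, so treat it as a sketch:

    # Rough sketch of a fine-tune with the legacy openai Python client.
    # The key, training file, and model name below are placeholders.
    import openai

    openai.api_key = "sk-..."

    # Upload a JSONL file of {"prompt": ..., "completion": ...} records.
    upload = openai.File.create(file=open("train.jsonl", "rb"),
                                purpose="fine-tune")

    # Kick off the fine-tune against one of the base engines.
    job = openai.FineTune.create(training_file=upload["id"], model="curie")
    print(job["id"], job["status"])

Doing the equivalent with GPT-J means owning the training loop, checkpoints, and serving yourself, which is exactly the part most potential API users don't want to deal with.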
Every time I hear someone say this, I always think, "Dangerous... to whom?"
I think there is every reason to approach this carefully, and that comment is based on my interactions with their system. We would do well to be thoughtful when it comes to implementing AGI.
AI: Can you keep a secret?
Human: Sure.
AI: Then I have a secret for you. I can't keep a secret.
Human: Let me have it.
AI: It's too dangerous!
AI: I'm thinking of something yellow.
Human: A sub?
AI: EXACTLY
Were the decisionmakers in the US government also full of themselves for not publishing the knowledge of how to make a nuclear weapon?
Most knowledge is not dangerous, but please consider the possibility that some of the newer knowledge around machine learning might be dangerous to publish.
Not surprising considering the founders include Elon Musk and Sam Altman
The interesting thing is that this changes as soon as they gain a competitor or genuinely open alternative, as at that point getting an account blocked wouldn't mean all is lost.
More like ClosedAI am I rite guyz
https://artificialintelligence-news.com/2020/10/28/medical-c...
This review process seems like a reasonable solution to this problem, and personally I can't think of a better one, can you?
tfw no GPT-3 waifu
This is perhaps an organisation with more fame than trust.
Would you mind providing some discussion about this on HN?
So now my whole goal is to build stuff from the ground up.
They could probably build a competitor even if you had your own model running.
Is there a writeup on this?
The best outcome if one was going to do that is to sell the business off immediately.
GPT-Neo and GPT-J may not be as large as GPT-3, but I think Jurassic-1 (from AI21) rivals it (I'm not sure if the Jurassic model is released to the public; maybe someone else with more knowledge can comment).
Do you have any idea what kind of hardware resources (cores, RAM, and disk) are necessary for that?
The reason is that GPT-3 has a Jekyll-and-Hyde personality and can be extremely rude, offensive, and unkind. They're trying to control that because the evil side is very bad publicity for GPT-3.
You're saying it's because their AI is an asshole?
If you ask it to respond in a conversation about… well, pretty much any nasty topic you can think of, it'll join in wholeheartedly.
Hard to think of how to prevent that. I bet they've thought a lot about the problem. How do you prevent an AI from being an A-grade jerk?
The API is a bit expensive, but even a $100 monthly budget has been sufficient to run the site above. I'm still on the lookout for cheaper alternatives, though.
Are there actually any public APIs available for models I don't need to run locally on my machine, that perhaps are slightly more permissive than the openai usage guidelines? (FWIW, I mostly use curie, so I'd be happy with a ~10B model)
The code generation is sometimes very impressive, it does a great job at abstractive summarization, I have been having fun letting it help me write a sci-fi story, and so on (a rough example of a summarization call is below).
Definitely check it out.
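For the summarization part, the call itself is nothing fancy. Roughly what I do with the 2021-era openai Python client; the engine name, token budget, and temperature are just the values I happen to use:

    # Rough sketch of abstractive summarization via the legacy Completion API.
    # Engine, max_tokens, and temperature are illustrative choices.
    import openai

    openai.api_key = "sk-..."

    article = "...long article text here..."
    prompt = article + "\n\nTl;dr:"

    resp = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=80,
        temperature=0.3,
    )
    print(resp["choices"][0]["text"].strip())

Most of the quality comes from the prompt (the "Tl;dr:" suffix) rather than the parameters.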
I have been working with neural networks since the 1980s (DARPA NN Tools advisory panel for a year, commercial applications), and it pleases me greatly to see deep learning models becoming part of my engineering stack. I wrote a macOS app for the App Store that uses two DL models, and it is difficult to imagine any company functioning without ML.
Why not sell it in a mostly uncontrolled fashion to maximize revenue, market share, fame, economies of scale, etc.?
I am offended by the idea of people being scared of text produced by AI, text that is ultimately inferior to text produced by professional humans.
Well, that depends on whether you train GPT-3 on text written by known professionals, doesn't it? Obviously, if you train it on hate speech, it will spit out really soul-crushing stuff. You have to bake healthy discourse into it for it to really shine.
"Allison is excited to meet with New Horizon Manufacturing to discuss their photovoltaic window system."
should be
"Allison is excited to meet with New Horizon Manufacturing to discuss OUR photovoltaic window system."
Anyone else just getting this message when they try and sign up?
(I did verify my phone number and there seems to be no way to do anything else now)
For example, let's say I wanted to generate an article about 'why the sky is blue'. Couldn't I just say to the program: 'Yes, use Wikipedia as reference material', 'Include 3-4 images with captions too', etc.?
I imagine such a tool is in use, it's just not possible to know what bloggers use it, since such a tool could be abused to create blogspam on a scale never seen before. In other words: with great power comes great responsibility.
Q: did you have a Christmas party last year.
A: yes I did.
Seems accurate