The free version gets a lot of use around here but the most powerful feature is the ability to search the web, which is only available to paid users. I pay $20/month for myself and I’d happily pay a bit more for the whole family, but not $20/month per person - it adds up. Family members end up asking to borrow my phone a lot to use it.
Give me a 3-4 person plan that costs $30-$40/month. You’re leaving money on the table!
For software development I find that Phind is pretty good at combining search results with GPT-4 in a way that increases the quality of the result.
Maybe OpenAI can convince the Bing team to index everything using their embeddings. If ChatGPT could also read the text directly from Bing instead of having to "surf the web", it would be able to consume several search results at the same time. In the future I could even see Bing et al. running an LLM over all text when indexing a page to extract info and stats like a summary, keywords, truthfulness, usefulness, etc.
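To make the "consume several results at once" idea concrete, here's a toy sketch of the retrieval step. The document names and vectors are made up for illustration; a real index would get its vectors from an embedding model at crawl time.

```python
# Toy sketch: rank pre-indexed documents against a query by cosine similarity,
# the basic move an embeddings-backed index would let a chatbot do in bulk.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend these vectors came from an embedding model when the pages were indexed.
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.1],
    "doc_c": [0.7, 0.2, 0.1],
}

query_vec = [1.0, 0.0, 0.0]  # the embedded user query
ranked = sorted(index, key=lambda d: cosine(index[d], query_vec), reverse=True)
print(ranked)  # most similar documents first
```

An LLM could then be fed the top few documents' text in one shot, rather than "surfing" to each result sequentially.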
This is where I suspect Bard is going to be an absolute beast of a product. The ability to quickly and thoroughly consume a bunch of hits, find the best ones, and summarize them is something Google (and increasingly, Kagi) is uniquely positioned to do.
It would be pretty rad if she could just have the app on her tablet with a family plan. She doesn't use it quite enough to justify her own subscription, but it would be especially nice if we could share GPTs across devices, so she gets the ones I make for her without getting flooded with my work- or research-related GPTs.
BTW, I once read that someone built automated generation of bedtime stories (with his children as the main characters) using the OpenAI API and speakers - I was quite amazed (not a thing I would do, but a nice use of GPT).
(Although I haven't yet tried any alternative myself that is clearly on par with ChatGPT 4)
I don’t even like that when my family picks up the remote, Apple TV assumes it’s me using the TV. They watch something and mess up my Up Next history and recommendations. I wish it supported using a PIN. I’ve thought about getting rid of the remote to force everyone to use their phone as a remote, because then it detects who is using it and automatically switches accounts. But that means everyone has to have an iPhone and have their phone charged, etc. Getting rid of the remote just for my convenience seems too inconsiderate.
They want you hooked on apps, the API, etc. before the real costs are brought in. They should likely be charging anywhere from $50-$100 depending on hours.
If you want an entirely free and open LLM experience, you can also run one of the ever-improving open source models on your own hardware. But for many if not most companies, paying $25/mo per user for something as amazing as ChatGPT-4 is a bargain.
Yea, another way to word it would be to imagine that they _only_ had a more expensive "no train" option. Now ask if it would be okay to offer a lower priced but yes-train version.
It would make more sense for them to just train on it anyway.
Alternatively, you could also use your own UI/API token (API calls aren't trained on). Chatbot UI just got a major update released and has nice things like folders, and chat search: https://github.com/mckaywrigley/chatbot-ui
But "we won't train from your data" is a powerful marketing line, and differentiator between classes of customer, even if they have no intention to train from the data of anyone.
On All In, they discussed the leverage from AI tools and they probably also meant open source, but one of the companies just rolled their own instance of a big monthly SaaS product because it was such a big expense for the startup.
- This is an essential, best-in-class tool. You wouldn't deny your employees a laptop or a free lunch, would you?
- $5/user/mo is a bargain compared to the hassle of building/hosting this yourself, punching holes in your firewall every time you need to receive a webhook, dealing with security and auth issues.
- $60k is half the cost of someone you don't need to hire on your in-house IT team. Does it make sense yet?
I'll take that bet ;) Not really sure about OpenAI, but you can absolutely negotiate with almost any company.
The thing is those same people need to be paid, and that’s a much (100x) larger bill, so the extra amount doesn’t really signify.
The higher bandwidth is clearly to entice new customers, but the question remains: what happens to the old ChatGPT Plus users? Do their quotas get eaten up by these new teams?
Chat History is off for this browser. When history is turned off, new chats on this browser won't appear in your history on any of your devices, be used to train our models, or stored for longer than 30 days. This setting does not sync across browsers or devices.
Aside: If you can see other colleagues' interactions with the custom/private GPTs, it could be quite an efficient way to share knowledge, especially for people in disparate time zones.
This is probably run on Microsoft servers (Azure, basically), not OpenAI servers, so it shouldn't directly compete for capacity. This is more of a "the pie got bigger" situation.
Hoping to see something good come out of this
The use cases I see are common ones - basic usage of ChatGPT, but with admin control over access. It gives companies the ability to bill directly instead of handling reimbursements, and to have more control overall. HR docs and policies can be a separate GPT. Though nothing that requires multi-level access control.
UI components can be generated as per your UI guidelines, same for tests. Hoping for good things
You have one non-actionable marketing answer, a growth graph created without axes (what are people going to do with that?), and a Python file that would be easier to just run to get the error.
That kind of reinforces my belief that these AI tools aren't without their learning curves, despite being in plain English.
Does this mean they will still use your data for other non-training purposes?
Does anyone know if this applies to voice conversations? This is me while I'm driving: upload big PDF -> talk to GPT: "Ok, read to me the study/book/article word for word."
Good job OpenAI.
Sorry where do you see that? I only see "higher usage limits"?
That article doesn't say 100.
100 is what I read in the openai forums earlier today.
It seems odd we have enterprise but cannot access GPT-4 through the main ChatGPT interface.
The former should have GPT-4 access; if not, that’s a bug, and I can look into it if you email me at atty@openai.com.
The API and ChatGPT are separate products, and usage or credits purchased for the API do not provide paid ChatGPT access.
My wife uses chatgpt only a few times a day.
I guess I need to 2x my browsers. I don't think this would work on the phone because I believe I need my browser open for chatgpt to continue its computations.
The GPT store
I was working on something at the end of November that was proposing competent PRs based upon a request for work in a GH issue. I was about halfway through the first iteration of a prompt role that can review, approve, and merge these PRs. The end goal is a fully autonomous software factory wherein the humans simply communicate via GH issue threads. Will probably be back on this project by mid-February or so. Really looking forward to it.
Bigger, more useful context is all I think I really want at this point. The other primitives can be built pretty quickly on top of next token prediction once you know the recipes.
$25 per user/month billed annually
$30 per user/month billed monthly
https://openai.com/chatgpt/pricing - it's stated very clearly in the highlight.
* Higher message cap. * Create and Share GPTs within workspace. * Admin console. * No training.
Last one I remember was OpenAI GPT-4 API
Normal software is priced to squeeze as much money as you can: enterprises can afford more, so they're charged more. Individuals are highly price sensitive, so it has to be very cheap.
GenAI is quite different in that it's not zero marginal cost; the marginal costs are probably at least 50% of the price. So the price difference between enterprise and individual plans will be far smaller than usual, due to the common cost base.
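A back-of-envelope illustration of that point (the numbers are made up for the sketch, not anyone's actual costs):

```python
# Gross margin per seat: classic SaaS vs. GenAI with heavy compute costs.
def margin(price: float, marginal_cost: float) -> float:
    """Fraction of the price left after serving costs."""
    return (price - marginal_cost) / price

# Classic software: serving one more user costs almost nothing.
print(f"classic SaaS : {margin(20, 0.50):.0%}")
# GenAI: inference compute might eat half the sticker price.
print(f"GenAI        : {margin(20, 10.00):.0%}")
```

With a near-zero marginal cost you can discount enterprise seats steeply and still profit; when half the price is compute, there's far less room to spread prices between tiers.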
WinRAR is 30 EUR per user when buying a single license, 9 EUR when buying 100 licenses.
Until I realized Perplexity will give you a decent amount of Mistral Medium for free through their partnership.
Who is sama kidding they’re still leading here? Mistral Medium destroys the 4.5 preview. And Perplexity wouldn’t be giving it away in any quantity if it had a cost structure like 4.5, Mistral hasn’t raised enough.
Speculation is risky but fuck it: Mistral is the new “RenTech of AI”, DPO and Alibi and sliding window and modern mixtures are well-understood so the money is in the lag between some new edge and TheBloke having it quantized for a Mac Mini or 4070 Super, and the enterprise didn’t love the weird structure, remembers how much fun it was to be over a barrel to MSFT, and can afford to dabble until it’s affordable and operable on-premise.
“Hate to see you go, love to watch you leave”.
- mixtral-8x7b or 8x7: Mixtral 8x7B, an open source model by Mistral AI.
- Dolphin: An uncensored version of the mistral model
- 3.5-turbo: GPT-3.5 Turbo, the cheapest API from OpenAI
- 4-series preview OR "4.5 preview": GPT-4 Turbo, the most capable API from OpenAI
- mistral-medium: A new model by Mistral AI that they are only serving through their API. It's in private beta and there's a waiting list for access.
- Perplexity: A new search engine that is challenging Google by applying LLMs to search
- Sama: Sam Altman, CEO of OpenAI
- RenTech: Renaissance Technologies, a secretive hedge fund known for delivering impressive returns improving on the work of others
- DPO: Direct Preference Optimization. It is a technique that leverages AI feedback to optimize the performance of smaller, open-source models like Zephyr-7B.
- Alibi: a Python library that provides tools for machine learning model inspection and interpretation. It can be used to explain the predictions of any black-box model, including LLMs.
- Sliding window: a type of attention mechanism introduced by Mistral-7B. It is used to support longer sequences in LLMs.
- Modern mixtures: The process of using multiple models together, like "mixtral" is a mixture of several mistral models.
- TheBloke: Open source developer that is very quick at quantizing all new models that come out
- Quantize: Decreasing memory requirements of a new model by decreasing the precision of weights, typically with just minor performance degradation.
- 4070 Super: NVIDIA 4070 Super, new graphics card announced just a week ago
- MSFT: Microsoft
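To put rough numbers on the "quantize" entry above - why shrinking weight precision is what lets a new model fit on a Mac Mini or a 4070 Super - here's a sketch of the weight-memory arithmetic. It ignores the KV cache and activations, so real requirements are somewhat higher.

```python
# Rough memory footprint of model weights at different precisions,
# illustrating why quantization matters for 12 GB-class consumer GPUs.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """GB needed for the weights alone (no KV cache, no activations)."""
    return n_params * bits_per_weight / 8 / 1e9

seven_b = 7e9  # a 7B-parameter model like Mistral-7B
print(f"fp16 : {weight_memory_gb(seven_b, 16):.1f} GB")  # 14.0 GB - too big for 12 GB of VRAM
print(f"q4   : {weight_memory_gb(seven_b, 4):.1f} GB")   # 3.5 GB - fits comfortably
```

That 4x reduction, usually at a minor quality cost, is the whole game for local inference.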
I've set up my system to use several AI models: the open-source Mixtral-8x7, Dolphin (an uncensored version of Mixtral), GPT-3.5 Turbo (a cost-effective option from OpenAI), and the latest GPT-4 Turbo from OpenAI. I can easily compare their performances in Emacs. Lately, I've noticed that GPT-4 Turbo is starting to outperform Mixtral-8x7, which wasn't the case until recently. However, I'm still waiting for access to Mistral-Medium, a new, more exclusive AI model by Mistral AI.
I just found out that Perplexity, a new search engine competing with Google, is offering free access to Mistral Medium through their partnership. This makes me question Sam Altman, the CEO of OpenAI, and his claims about their technology. Mistral Medium seems superior to GPT-4 Turbo, and if it were expensive to run, Perplexity wouldn't be giving it away.
I'm guessing that Mistral AI could become the next Renaissance Technologies (a hedge fund known for its innovative strategies) of the AI world. Techniques like Direct Preference Optimization, which improves smaller models, along with other advancements like the Alibi Python library for understanding AI models, sliding windows for longer text sequences, and combining multiple models, are now well understood. The real opportunity lies in quickly adapting these new technologies before they become mainstream and affordable.
Big companies are cautious about adopting these new structures, remembering their dependence on Microsoft in the past. They're willing to experiment with AI until it becomes both affordable and easy to use in-house.
It's sad to see the old technology go, but exciting to see the new advancements take its place.
Love how deep the rabbithole has gone in just a year. I am unfortunately in the camp of understanding the post without needing a glossary. I should go outside more :|
[1]: https://arxiv.org/abs/2108.12409
[2]: n.b. Ofir Press is co-creator of ALiBi https://twitter.com/OfirPress/status/1654538361447522305
(but seriously: Thanks !)
Also mixtral medium - no idea of what he means by that.
Not to mention a claim that mixtral is as good as gpt-4. It's at the quality of gpt-3.5 at best, which is still amazing for an open source model, but a year behind openai.
For a broad introduction to the field Karpathy's YouTube series is about as good as it gets.
If you've got a pretty solid grasp of attention architectures and want a lively overview of stuff that's gone from secret to a huge deal recently I like this treatment as a light but pretty detailed podcast-type format: https://arize.com/blog/mistral-ai
Hilariously, neither knows who sama is (Sam Altman, the Drama King of OpenAI), nor do they recognize when they themselves are being discussed.
Reading the responses in full also gives you a glimpse of the specific merits and weaknesses of these systems: how up to date their knowledge and lingo are, their explaining capabilities, and their ability to see through multiple layers of referencing. It also showcases whether the AIs are willing to venture guesses to piece together some possible interpretation for hoomans to think about.
Basically, he said he is happy with Mistral 8x7B and thinks it is on par with or better than OpenAI's closed source model.
On what metrics? LMSys shows it does well but 4-Turbo is still leading the field by a wide margin.
I am using 8x-7b internally for a lot of things and Mistral-7b fine-tunes for other specific applications. They're both excellent. But neither can touch GPT-4-turbo (preview) for wide-ranging needs or the strongest reasoning requirements.
https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...
EDIT: Neither does mistral-medium, which I didn't discuss, but is in the leaderboard link.
There's also very little if any credible literature on what constitutes statistically significant on MMLU or whatever. There's such a massive vested interest from so many parties (the YC ecosystem is invested in Sam, MSFT is invested in OpenAI, the US is invested in not-France, a bunch of academics are invested in GPT-is-borderline-AGI, Yud is either a Time Magazine cover author or a Harry Potter fanfic guy, etc.) in seeing GPT-4.5 at the top of those rankings and taking the bold one at < 10% lift as state of the art that I think everyone should just use a bunch of them and optimize per use case.
I have my own biases as well and freely admit that I love to see OpenAI stumble (no I didn't apply to work there, yes I know knuckleheads who go on about the fact they do).
And once you factor in "mixtral is aligned to the demands of the user and GPT balks at using profanity while happily taking sides on things Ilya has double-spoken on", even e.g. MMLU is nowhere near the whole picture.
It's easy and cheap to just try both these days, don't take my word for which one is better.
Goliath is too big for my system but Mixtral_34Bx2_MoE_60B[1] is giving me some really good results.
PSA to anyone that does not understand what we're talking about: I was new to all of this until two weeks ago as well. If you want to get up to speed with the incredible innovation and home-tinkering happening with LLMs, you have to check out https://www.reddit.com/r/LocalLLaMA/
I believe we should be at GPT4 levels of intelligence locally sometime later this year (Possibly with the release of Llama3 or Mistral Medium open-model).
[1] - https://huggingface.co/TheBloke/Mixtral_34Bx2_MoE_60B-GGUF
"My hips don't lie."
The trouble with the jargon is that it obfuscates to a high degree even by the standards of the software space, and in a field where the impact on people's daily lives is at the high end of the range, even by the standards of the software space.
HN routinely front-pages stuff where the math and CS involved is much less accessible, but for understandable reasons a somewhat tone-deaf comment like mine is disproportionately disruptive: people know this stuff matters to them either now or soon, and it's moving as quickly as anything does, and it's graduate-level material.
If you have concrete questions about what probably looks like word salad I'll do my best to clarify (without the aid of an LLM).
I might not know half of the references like "sama" or "TheBloke", but I could understand the context of them all. Like:
"the lag between some new edge and TheBloke having it quantized for a Mac Mini or 4070 Super,"
Not sure who TheBloke is, but he obviously means "between some new (cutting) edge AI model, and some person scaling it to run on smaller computers with less memory".
Similarly, not sure who Perplexity is, but "Until I realized Perplexity will give you a decent amount of Mistral Medium for free through their partnership" basically spells out that they're a service provider of some kind, that they have partnered with Mistral AI, and you get to use the Mistral Medium model through opening a free account on Perplexity.
I mean, duh!
Basically let an AI hallucinate on some technical subject. It would make a great script for a new encabulator video.
I'm curious because I'm gathering use cases to share internally at the company, to provide better education on what LLMs do and how they work.
It's a great tool they make available.
While I heavily rely on `emacs` as my primary interface to all this stuff, I'm slowly-but-surely working on a curated and opinionated collection of bindings and tools and themes and shit for all the major hacker tools (VSCode, `nvim`, even to a degree the JetBrains ecosystem). This is all broadly part of a project I'm calling `hyper-modern` which will be MIT if I get to a release candidate at all.
I have a `gRPC` service that wraps the outstanding work by the "`ggerganov` crew", loosely patterned on the sharded model-server architectures we used at FB/IG, and mercilessly exploiting the really generous free plan offered by the `buf.build` people (seriously, check out the `buf.build` people) in an effort to give hackers the best tools in a truly modern workflow.
It's also an opportunity to surface some of the outstanding models that seem to have sunk without a trace (top of mind would be Segment Anything out of Meta and StyleTTS which obsoletes a bunch of well-funded companies) in a curated collection of hacker-oriented capabilities that aren't clumsy bullshit like co-pilot.
Right now it's a name and a few thousand lines of code too rough to publish, but if I get it to a credible state the domain is `https://hyper-modern.ai` and the code will be MIT at `https://github.com/hyper-modern-ai/`.
Also, is anyone aware of a service that supplies API endpoints for dolphin? I'd love to experiment with it, but running locally exceeds my budget.
To my knowledge, and I searched to confirm, GPT-4.5 is not yet released. There were some rumors and a link to ChatGPT's answer about GPT-4.5 (could also be a hallucination) but Sam tweeted it was not true.
In all seriousness, are self hosted GPT alternatives really viable?
Do you have a source on Mistral/Mixtral using that?
It was just an example of a modern positional encoding. I regret that I implied inside knowledge about that level of detail. They're doing something clever on scalar pointwise positional encoding but as for what who knows.
We shouldn't assume that a capitalised word in an ordinary sentence refers to a product.
If a reference to a product is intended, we should clarify that association some other way, e.g. "MS Teams".
What's happening is that lowercase "sentence case" titles have become more popular and normalized so repeated exposure to that style can cause a subconscious heuristic of "Capitalized letter signifies a Brand Name or Proper Noun". You can try to advise people not to assume that but it doesn't change the type of "sentence case" titles people are now repeatedly exposed to.
The New York Times still uses "Title Case" but a lot of other newspapers switched to lowercase sentence case. Washington Post switched in 2009. And Los Angeles Times, The Boston Globe, the Chicago Tribune, the San Francisco Chronicle, Philadelphia Inquirer, etc all followed.
Other popular websites with lowercase titles include Vox, ArsTechnica, TechCrunch, etc.
Normally it would be OK to capitalize words to match what many other US publications use, but this capitalization introduces confusion. I can only speak for myself, but I assumed this was an integration with MS Teams. This would have been avoided if the original title had been kept.
I do agree that editors should read their topics critically and add disambiguating text (if possible and permitted).
We shouldn't, but many of us do. As a title word, there's ambiguity about whether it's a proper noun, given title styling. Given the context in this case (HN, OpenAI, ChatGPT), it was pretty difficult for my brain not to assume it was referring to Microsoft Teams, so it baited me in, perhaps unintentionally. I'm not too upset about it because I knew that going in, but nonetheless, a quick read of the title suggests it should be called "ChatGPT for Collaboration" or something of that nature.
Claiming it is a title wouldn't win the argument either, as it is not a rule that titles must use title case. Both (title case vs. capitalizing only the first letter) are valid typography for a title in English.
Are you also going to complain if someone releases a platform called "The"?
j/k but finding it pretty funny these days that more and more people are switching to lowercase, assuming it started from this @sama tweet: https://twitter.com/sama/status/1735123080564167048
I guess you can argue this is just a marginal add-on to their existing ChatGPT product but I can imagine seeing them go full Salesforce/Oracle/enterprise behemoth here.
I would say I'm very pro AI development and pro Sam reinstating but I've been starting to shake my head a bit. Their mission and their ambition are wildly different.
The mission changed when research ran into product market fit.
Sell AI products to fund making AI
The sooner we build a tool to filter out garbage the better.
FTFY
People could work around it but it might help