"We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback"
They are richer than Croesus, that is no reason for them to hold back and get stomped on because they field an inferior product. Like, being backed by an immensely profitable corporation should be an advantage for them, but now it looks like they are so afraid of eroding those profits that they are hampered instead.
At this exact moment, ChatGPT is giving me about a word every 2-3 seconds. It's basically useless. But even when the system is not overloaded, there's a noticeable delay.
How much quality would you be willing to trade off to get results instantly instead? Or the inverse, how much better would the results need to be to justify a 5x longer processing time? It seems hard to believe that ChatGPT happened to be released at exactly the optimal tradeoff. (And obviously it's also unlikely that Google launched with exactly the right tradeoff.)
[0]: https://generativereview.substack.com/p/the-generative-revie...
It's been weird. They started out a head-neck-and-torso above everyone else in competence. Interviews were nigh impossible, and it was a hyper-elite team and a dream work environment.
Then they screwed up, and hired a fifth of a million people.
> I am still under development, and I am not as chatty as ChatGPT or Bing Chat because I have not been trained on as much data. I am also not as good at understanding and responding to complex questions. However, I am learning and improving every day, and I will become more chatty over time.
My understanding is that producing shorter responses proportionally reduces the amount of computing, and thus the cost, required to produce them.
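A back-of-the-envelope sketch of that proportionality (the per-token rate below is purely hypothetical, just to make the linear scaling concrete):

```python
# Back-of-the-envelope: with per-output-token pricing, generation cost
# scales linearly with response length. The rate here is hypothetical.
PRICE_PER_1K_TOKENS = 0.002  # hypothetical $ per 1,000 output tokens

def response_cost(num_tokens: int) -> float:
    """Cost of generating `num_tokens` of output at the assumed rate."""
    return num_tokens / 1000 * PRICE_PER_1K_TOKENS

# A response half as long costs half as much.
print(response_cost(500))   # 0.001
print(response_cost(1000))  # 0.002
```

So halving the typical response length roughly halves the serving bill, which is one plausible reason to keep answers terse while scaling to more users.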
Then they’ll use that feedback to train the final model.
https://www.reuters.com/technology/chatgpt-sets-record-faste...
When I asked Bard to list the months in alphabetical order, it failed to do so. I saw this on a tweet but confirmed that it can't sort the months. GPT passes. When it fails such a simple comprehension test, I lose faith in anything more complex, where the incorrect results wouldn't be as obvious. I'm surprised they released it in such a state. My guess is they're panicking to respond. I can imagine simple queries like this weren't tested.
> Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User: Can you list all the months in alphabetical order?
Bob: Yes. January February March April May June July August September October November December
edit: Sorry for this useless post... the months are in calendar order, not alphabetical order.
> User: That's not in alphabetical order. Redo.
> Bob: Sorry about that. Here are the months again, this time in alphabetical order: April, August, December, February, July, June, March, May, November, October, September.
(Also LLaMA, 30B, llama.cpp on x86 Processor)
At least LLaMA sees its mistake and doesn't gaslight you (in this case). "I've been a good LLaMA." ;P
months in alphabetical order
Here are the months in alphabetical order:
April August December February January July June March May November October September
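For reference, the correct ordering takes only a couple of lines of Python to verify (and it matches LLaMA's answer above):

```python
# Verify the expected answer: sort the English month names alphabetically.
MONTHS = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]

print(" ".join(sorted(MONTHS)))
# April August December February January July June March May November October September
```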
And "largest city" is an ambiguous question: by area, Moscow is the largest; by population, it's Istanbul.
This is what I don't believe anymore. It's strictly inferior, and people should stop looking for defensive arguments and all that.
If it is simply a smaller/cheaper/less parametrized model than the competitors', that's a fine explanation, but it also admits being inferior in exchange for being cheaper to run.
From my angle, LLMs are the next big step in search. If the company whose core business revolves around search can't update their product to meaningfully compete with competitors who view search as a side business, it should terrify anyone with time or money invested in Google.
This is an incredibly strong signal that the company is about to face some very difficult times ahead. I fully expect their next public showing of poor execution to be followed by investor uproar.
That's one way to look at it. The other is the immense advertising opportunity these systems can provide. You no longer need to perform multiple queries and sift through various results to get your answer. You're now essentially in a walled garden. The search/LLM platform has your complete and undivided attention.
After providing an answer, the follow-up can be a relevant ad. It is important for their credibility to not make it seem like the ad is impacting the integrity of the result.
They already have a moat, being an established player with existing agreements. Google was able to pull this off with traditional search. They are probably best positioned to do it again.
Edit: Something else to consider, does Google have the same DNA that they did in the early 2000s?
Yup. We are entering the age of Advertising God Mode.
Google and Facebook are both going to aim these LLMs directly at individual users. They're going to feed it all your email, texts, posts, comments, and page views, and these systems are going to fill your feeds with bespoke advertising content that is lovingly crafted by the AI to appeal to you. No longer just product->user matching, but actual ad copy and images will be AI-tailored to wedge its way into your psyche. We have no chance against this kind of commercial psy-ops.
If Meta's core business of advertising on social media becomes irrelevant, his company is in trouble. But if the company is already the leader of the technology that replaces social media—e.g. connections over VR—the company can remain a leader and keep growing.
Zuckerberg might be wrong that the "metaverse" is the next step (it absolutely feels more manufactured than a natural next step like with ChatGPT), but I see why he might want to get ahead of his core business losing its value to a new innovation—such as by releasing VR headsets and encouraging people like journalists to consider VR meetings.
In contrast, the Bard release is more reactive, rather than a release that takes the initiative to introduce a new technology.
I spent a few hours last night trying to work on a story outline with Bard's help, and while individual responses were sometimes okay, Bard regularly forgot about plot points and character traits we had decided on earlier in the conversation, which was frustrating. ChatGPT is much better at conversational memory, but provided much less interesting results to individual queries.
Unlike other commenters, I think these artificial limitations aren't about spending or competency, but rather out of fear. Google is afraid of their bot turning racist, or saying things that could embarrass them.
Then they should ask it how Google can effectively build a technology whose success will inevitably kill their company’s main source of profits. (Search)
The first major noticeable improvement is that Bard is way more human-like in its responses. ChatGPT often spends time writing things it doesn't need to write.
For example, if you ask them: "If you had to pick one movie to watch today, what would you pick?"
Bard gives you one single movie, "The Shawshank Redemption", and explains in depth why it liked the movie.
ChatGPT gives you a boilerplate response saying it can't have an opinion because it's an AI, and then lists 5 popular movies with a basic explanation of each. Not very useful.
A lot of the time using ChatGPT, I'm waiting for it to get to the answer I'm looking for, because it spends a lot of time writing useless text instead of being more human-like and understanding what the user really wants in an answer.
I know a lot of people shit on Google because they hate Google, but this is a good product if they can fix some of the bad knowledge. It's more impressive than ChatGPT, IMHO, simply because they are willing to let it learn new data all the time, while ChatGPT is still using the same dataset from 2021.
This also makes me wonder whether Apple will come out with its own LLM chatbot which could be its opportunity to wean itself off Google.
And if Meta manages to successfully integrate its LLM into its products FB, IG, etc., then it keeps users there for web searches.
Either way, Google's future looks precarious right now.
Just like they do now! Show ads alongside the "results". It's website owners who don't advertise who will feel the pain if Google stops sending people to them and instead keeps them on Google.com/Bard or whatever.
Google Bard waitlist - https://news.ycombinator.com/item?id=35246260 - March 2023 (386 comments)
Also related:
Bard uses a Hacker News comment as source to say that Bard has shut down - https://news.ycombinator.com/item?id=35255864 - March 2023 (194 comments)
Bard is much worse at puzzle solving than ChatGPT - https://news.ycombinator.com/item?id=35256867 - March 2023 (78 comments)
It lacks the spirit, the sparkle, the pizzazz, and the surprising "intelligence" of ChatGPT.
The service, if released before ChatGPT, would not have made the splash ChatGPT did.
I asked it for information about a show and it summarized it correctly but then hallucinated the hosts as two entirely made up people with fake careers. It even linked websites to the show with the correct information but made up this false info.
It's even worse than Google search which is already middling.
That's not what Google says. Google describes it like ChatGPT, plus some meaningless fluff. The only thing it says about "answers" is that they may be wrong.
""" Meet Bard: your creative and helpful collaborator, here to supercharge your imagination, boost your productivity, and bring your ideas to life.
Bard is an experiment and may give inaccurate or inappropriate responses. """
Try to entirely avoid Google's products, including the search bar, Chrome, and Android, for 5 years, and I bet you will still touch a Google product at least once.
The death of Google has been greatly exaggerated by OpenAI's marketing snake oil.
Google should be more proactive in getting public feedback from its research and productizing as soon as possible.
---
Going back to Bard: I think Google is better off making Bard fast and properly setting the expectation that it can't do everything, then slowly scaling up its abilities as it becomes better understood how to do more with fewer parameters.
Some things bother me about this article and the facts that warranted it in the first place. I have very similar reservations with regard to the Bing chatbot launch (detailed below - just substitute Bing for Bard).
> Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google’s value fell by $100 billion overnight.
1. LaMDA is not a new project. They are clearly releasing a chatbot based on it to the public to compete with MS/OpenAI. But if the tech existed some time ago (and as a developer I realize that iteration, time, and attention tend to improve quality), why didn't they release it before? I am guessing - baselessly - that they saw that the quality of the output was quite poor (often factually wrong, for instance). But now that a competitor is threatening their market share, quality metrics go out the window - revenue is once again king.
2. Funny how the valuation of the company dropped so quickly. It is because as shareholders we rely so much on short-term gains. There is no focus on follow-through or long-term consequences. I certainly don't believe that capitalism is inherently bad, and I am a huge fan of competition. But I think the way we practice it leaves much to be desired.
> “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”
>
> But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.”
3. This quote, and the mention in the article of the "Google It" button below the Bard chat, the three versions of Bard's response ("drafts", FTA), the quote by the Google product director that says "There’s the sense of authoritativeness when you only see one example”... I could not agree more with what Margaret Mitchell has to say (I have never heard of her before, to my knowledge). Isn't it very clear by now that users don't have the time or attention to acknowledge implications? We are busy and we don't have the energy. If we see it on a screen, we take it as fact, copy and paste it as needed, and proceed with the knowledge that we have gleaned from said "facts". I suspect that if anybody really knows how misinformation works and the effects it has on society, it's data analysts at Google search. But the almighty revenue stream dictates that they push this not-necessarily-factual-information-producing-tool anyway.
If I can find the time, I'm quite curious to read that article that just pre-dates Bing chat and Bard about the dangers of using LLMs in search engines.