Monetization of AI hasn't even really started yet, so I could be tragically wrong. It'll be an interesting ride no matter what.
For the same query ("assuming they lose search and advertising google cloud revenue likely to make up the difference in revenue?"), Google responded with the following top hit:
> Simply Wall Street - Jul 16, 2023 — I assume that Google's Cloud revenue will grow at 18.5% for the next 5 years, in line with expectations of the industry's growth rate.
Long GOOG (3 years) might make sense though. Word in the valley is they're trying to increase margins by driving costs down. That should improve their multiple even as they retreat from core markets and their revenue drops.
[edit: my earlier summary was wrong. Ad revenue is 80%. Everything else (including Android licensing and hardware sales) is 20%. It didn't give a percentage for cloud revenue.]
Remember when ChatGPT was integrated into Bing? Turns out that did nothing for search market share.
It is unclear whether there is room for advertising in ChatGPT the way there is room for ads in search. But I suspect there will be a way somehow!
Besides, GPT was only available on Edge. Again, reputation.
Alternatively, if OpenAI thinks search queries are an important signal, the search engine could be free with no ads for everyone. For example, I was surprised that Amazon pays Prime members $20 for using Amazon Photos. I think they might be using Photos to get more data for AI training.
How about just a business model that isn't awful/predatory?
Whenever a stock dips, a whole bunch of people want an explanation why, so they google it and read the meaningless article sandwiched between layers of ads.
I have been using Kagi paid for the last 2 months and have found it an acceptable drop-in replacement for Google. Before that I used DuckDuckGo and found myself using the !g operator about 50% of the time.
Please avoid "personalized" ads.
DuckDuckGo has proven that personalized ads are not necessary for success.
As for the MS thing, they did fix it:
https://techcrunch.com/2022/08/05/duckduckgo-microsoft-track...
and they were never part of the apps. They previously whitelisted Microsoft ad and tracking domains in their browser.
Duck Duck Go proved search works fine without user profiling, and Google proved personalized search is a clusterfsck.
Try turning down that money for 20 bucks a month. Subscriptions will never scale due to one simple fact: when a subscriber pays, only one party pays, and there's a ceiling. When advertisers pay, any number of parties can pay per user, with no ceiling other than performance.
Ad tech is just getting started.
(Even correcting for market share isn't enough, as Google Search has over 90% penetration in the US [1].)
[0] https://www.statista.com/statistics/266206/googles-annual-gl...
Instead of Google search:
I’ve switched almost exclusively to Kagi for vanilla search. It feels like Google in 2015, and the forums/small web filters are great. I do rarely use !g, and usually I'm disappointed.
If I just need a quick question answered I’ll usually use Perplexity.
For coding questions, I mostly use copilot and documentation these days. I barely use search for coding questions, unless I run into a weird edge case, or need to find GitHub issues for a project. There’s too much link spam to wade through, and copilot is built right into VSCode, so I don’t have to context switch.
I was a big user and early adopter of ChatGPT, but I don’t use it much anymore. Special use models are better for specific use cases (e.g. copilot for coding), and if I want to learn general information about a topic or gather some opinions, I much prefer to search forums and blog posts or watch some YouTube videos (this is the one major bright spot for Alphabet imo). If I do need AI to do something for me, I prefer to use an API or local model. ChatGPT is inferior to the API/local products, especially if you know some Python. This is even more true when you take into consideration copilot (and starts to hint at the compounding opportunity of these tools).
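To make the "API beats the web app if you know some Python" point concrete, here's a minimal sketch. The endpoint URL and JSON shape below match OpenAI's public Chat Completions API, but the helper functions themselves (`build_chat_request`, `ask`) are just an illustration of the kind of control the web UI doesn't expose as directly, not any official client.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_prompt: str, model: str = "gpt-4",
                       temperature: float = 0.2) -> dict:
    # Assemble the JSON body the API expects; temperature, system prompt,
    # etc. are knobs you can script against, unlike the web app.
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

def ask(prompt: str, api_key: str) -> str:
    # The actual network call; requires a real API key, so it's not run here.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

body = build_chat_request("Summarize attention in one sentence.")
```

Once you have this, batching questions, logging answers, or piping results into other scripts is a few more lines — the compounding effect mentioned above.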
If I want to do a deep dive on something I don’t even bother with most web content these days unless I'm researching a cutting edge topic. The general web is too noisy and inundated with inaccurate and low quality content. Most of the time I either buy a book or directly visit an authoritative source (e.g. the SEC, the courts, Wikipedia, etc.). Less frequently I'll find a niche forum or blog via Kagi's filters, or hit up Google Scholar/Arxiv (another bright spot for Alphabet, but very niche).
The most interesting thing to note here is that I’m paying for almost all of these things, rather than using free Google search, because the Google search user experience has degraded so badly. Kagi, copilot, LLM APIs, books, they all cost money. That doesn’t bode well for the Google search product long term.
Why is this? GPT-4 outperforms smaller models I can run locally. Is GPT-4 via API better than GPT-4 via webapp?
Google sucks but it still seems better than the other bigger players at least. Not a fanboy. I'd like to see something actually useful replace them all but that was my experience.
Glad Google is getting stormed by the competition.
Most managers (at all levels) could be replaced with a Magic 8 Ball, and you would get similar results. Although, in some cases, consistency would improve.
I always wonder why they keep him around when it's clear as day to anyone paying attention that he's... not the best... but... that graph somewhat speaks for itself.
I can count on one hand how many legacy Brave searches I've done this week, because you have to wait 5 seconds for the new summary to load. When I do use traditional search instead of a chatbot, it usually comes out of respect for the free perplexity.ai chatbot I'm using; I feel bad asking it a legacy question like the acronym for something when I can just highlight > right-click > search in new tab.
It uses GPT (I believe they're running their own fine-tuned version), and allows you to '@' items like your files, folders, and (most impressively) any documentation.
So, if I'm working on something that uses React Hook Form, I can paste the URL to the RHF documentation. Cursor will index it (presumably creating embeddings for each page), and then I just '@' the RHF documentation in the chat and it'll find the relevant pages and include them in the context for GPT to answer.
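A toy sketch of that indexing step, to make the "presumably" concrete. This is a guess at the mechanism, not Cursor's actual implementation, and a bag-of-words vector stands in for a real learned embedding so the example runs offline; the page names and contents are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words term counts. A real system
    # would use a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(pages: dict[str, str], query: str, k: int = 2) -> list[str]:
    # Rank the indexed pages by similarity to the query; the top-k pages
    # are what would get stuffed into the model's context.
    q = embed(query)
    ranked = sorted(pages, key=lambda url: cosine(embed(pages[url]), q),
                    reverse=True)
    return ranked[:k]

pages = {  # hypothetical indexed documentation pages
    "rhf/useform": "useForm hook registers inputs and handles form validation",
    "rhf/errors": "formState errors object holds validation error messages",
    "react/hooks": "useState and useEffect are the basic React hooks",
}
top = retrieve(pages, "how do I handle form validation errors", k=1)
```

The point is the pipeline shape — embed pages once at index time, embed the query at chat time, take the nearest pages — not the scoring function itself.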
It's very useful. I already subscribe to ChatGPT Plus, but Cursor adds something special, so I'm more than happy to subscribe to it too.
I'm happy to use a !bang to indicate what kind of response I want. I'm not the average user but people understand the difference between asking your roommate who's right there and asking a professional about something, so this doesn't seem like much of a stretch.
GPT-4 takes maybe 15-30 seconds to spit out a precise, detailed answer to exactly what I asked for, and it's usually the correct answer, or at least it's close enough. It would have to get a _lot_ slower to make me go back to Google.
Even if they provide a better service, if it's only feasible to run when subsidised by the AI hype-wave VC money firehose then it's not going to last.
I don't believe this will last.