It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
Having all these popups announcing new integrations with AI chatbots showing up while you are just trying to do your work is pretty annoying. It feels like this time we are fighting an army of Clippies.
I don't want shitty bolt-ons; I want to be able to give frontier models (ChatGPT/Claude/Gemini) the ability to access my application data and make API calls for me, to remotely drive tools.
The weirdest place I've found the most useful LLM-based feature so far is Edge, with its automatic tab grouping. It doesn't always pick the best groups and probably uses some really small model, but it's significantly faster and easier than anything I've had before.
I hope they do bookmarks next and that someone copies the feature and makes it use a local model (like Safari or Firefox, I don't even care).
If you use it for writing, what is the point of writing in the first place? If you're writing to anyone you even slightly care about they should wipe their arse with it and send it back to you. And if it's writing at work or for work then you're just proving you are an employee they don't need.
I'm curious, do you find it easier to climb stairs or inclines now that you've tossed your brain in the trash?
Jesus F christ, please tell me you are trolling
Raise subscription prices, don’t deliver more value, bundle everything together so you can’t say no. I canceled a small Workspace org I use for my consulting business after the price hike last year; I'm also migrating everything we had on GCP away. Google would have to pay me to do business with them again.
It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.
And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being too frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.
And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.
And either way, all the people responsible for making all your technology worse every day will continue to get richer.
I think this is the key idea. Right now it doesn't work that well, but if it did work as advertised, that would also be bad.
Everyone nodding along, yup yup this all makes sense
This is the next great upset. Everyone's hair is on fire and it's anybody's ball game.
I wouldn't even count the hyperscalers as certain to emerge victorious. The unit economics of everything and how things are bought and sold might change.
We might have agents that scrub ads from everything and keep our inboxes clean. We might find content of all forms valued at zero, and have no need for social networking and search as they exist today.
And for better or worse, there might be zero moat around any of it.
This is called an ad blocker.
> keep our inboxes clean
This is called a spam filter.
The entire parent comment is just buzzword salad. In fact I am inclined to think it was written by an LLM itself.
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
Open Source endeavors will have a hard time mustering the resources to train models that are competitive. Maybe we will see larger cooperatives, like an Apache Software Foundation for ML?
For the consumer side, you'll be the product, not the one paying in money just like before.
For the creator side, it will depend on how competition in the market sustains. Expect major regulatory capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get really expensive.
The scale issue isn't the LLM provider; it's the power grid. Worldwide, electricity averages about 250 W per capita. Your body runs on 100 W, and you have a duty cycle of 25% thanks to the 8-hour work day and having weekends, so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy efficient than the human body.
Even with the extraordinarily rapid roll-out of PV, I don't expect this to be able to be one-for-one replacement for all human workers before 2032, even if the best SOTA model was good enough to do so (and they're not, they've still got too many weak spots for that).
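The back-of-envelope arithmetic above can be written out explicitly (a sketch using the commenter's round numbers, which are rough approximations rather than measured data):

```python
# All figures are the rough estimates from the comment above.
world_power_per_capita_w = 250    # worldwide electricity, watts per person
human_metabolic_power_w = 100     # human body's continuous draw, watts
work_duty_cycle = 0.25            # ~8 h/day, 5 days/week => roughly 25%

# Average power a human "spends" on work, spread over the whole week:
human_work_power_w = human_metabolic_power_w * work_duty_cycle
print(human_work_power_w)  # 25.0

# Share of the world's per-capita electricity an AI replacement could
# draw while still matching the body's energy efficiency:
print(human_work_power_w / world_power_per_capita_w)  # 0.1
```

In other words, a workplace-replacing AI gets roughly 25 W per replaced worker before it is less efficient than the human it replaced, about a tenth of today's per-capita electricity supply.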
This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.
But I agree that the aggregation of power and centralisation of data is a pertinent risk.
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
We sat yesterday and watched a table of 4 lads drinking beer each just watch their phones. At the slightest gap in conversation, out they came.
They’re ruining human interaction. (The phone, not the beer-drinking lad.)
So people didn't want to be walking around with a tether that allowed the whole world to call them wherever they were? Le shock!
Now, if they'd asked people whether they'd like a small portable computer they could keep in touch with friends on, and read books, play games, and play music and movies on wherever they went, which also made phone calls, I suspect the answer might have been different.
It's bullshit.
I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.
But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)
By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.
>By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public.
It is absolutely wild how people can just ignore something staring right at them, plain as day.
ChatGPT.com is the 5th most visited site on the planet and growing. It's the fastest-growing software product ever, with over 500M weekly active users and over a billion messages per day. Just ChatGPT. This is not information that requires corporate espionage. The barest minimum of effort would have shown you how blatantly false you are.
What exactly is the difference between this and a LLM hallucination ?
Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.
In 20 years AI will be pervasive and nobody will remember being one of the luddites.
There are open source or affordable, paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure, lock-in with a service provider (health insurance co, perhaps), and yes unfortunately I see some of these things as soon or now unavoidable.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and that therefore users clearly do want this. I can easily think of two points that refute that: 1. The internet has shown us time and time again that popularity doesn’t indicate willingness to pay (which paid social networks have had strong popularity?). 2. There are many extremely popular websites that users wouldn’t want woven throughout the rest of their personal and professional digital lives.
LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
If you run an open source model from the same seed on the same hardware, it is completely deterministic. It will spit out the same answer every time. So it’s not an issue with the technology, and there’s nothing stopping you from writing repeatable prompts and prompting techniques.
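The seeded-determinism point is the same as with any PRNG: fix the seed and the sampler, and the token sequence repeats. A toy illustration in plain Python (a stand-in sampler, not a real model API):

```python
import random

def sample_tokens(seed: int, vocab: list[str], n: int = 5) -> list[str]:
    """Stand-in for an LLM's sampling loop: a fixed seed
    yields the same 'token' sequence on every run."""
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]
run_a = sample_tokens(42, vocab)
run_b = sample_tokens(42, vocab)
assert run_a == run_b  # identical output from an identical seed
```

Real inference stacks add caveats (GPU kernel nondeterminism, batching effects), so in practice this holds most cleanly for CPU inference or fixed single-request setups.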
If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software.
conceding that this may be the case, there are entire categories of problems that i am now able to approach that i have felt discouraged from in the past. even if the code is wrong (which, for the most part, it isn't), there is a value for me to have a team of over-eager puppies fearlessly leading me into the most uninviting problems, and somehow the mess they may or may not create makes solving the problem more accessible to me. even if i have to clean up almost every aspect of their work (i usually don't), the "get your feet wet" part is often the hardest part for me, even with a design and some prototyping. i don't have this problem at work really, but for personal projects it's been much more fun to work with the robots than always bouncing around my own head.
Please don't. I am going to read this email. Adding more text just makes me read more.
I am sure there's a common use case of people who get a ton of faintly important email from colleagues. But this is my personal account and the only people contacting me are friends. (Everyone else should not be summarized; they should be trashed. And to be fair I am very grateful for Gmail's excellent spam filtering.)
If you answer no, does that make you an unwilling user of social media? It’s the most visited sites in the world after all, how could randomly injecting it into your GPS navigation system be a poor fit?
All the anti-AI people I know are in their 30s. I think there are many in this age group that got used to nothing changing and are wishing it to stay that way.
We won’t solve climate change but we will have elaborate essays why we failed.
Or are they the only ones who understand that the ratio of real information to spam, disinformation, misinformation, and lies is worse than ever? And that in the past 2 years this was thanks to AI, and to people who never check what garbage AI spews out? And that they are the only ones who care not to consume the shit? Because clearly, above 50, most of them have been completely fine with it for decades now. Are you saying that below 30 most people are fine consuming garbage? I mean, seeing how many young people have started to deny the Holocaust, I can imagine it, but I would like some hard data, not just some AI-level guesswork.
I was searching for something on Omnissa Horizon here: https://docs.omnissa.com/
It has some kind of ChatGPT integration, and I tried it and it found the answer I was looking for straight away, after 10 minutes of googling and manual searching had failed.
Seems to be not working at the moment though :-/
I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.
I just don't participate in discussions about Facebook marketplace links friends share, or Instagram reels my D&D groups post.
So in a sense I agree with you, forcing AI into products is similar to forcing advertising into products.
Which is to say, there's already a history of AI features failing at a number of these larger companies. The public truly is frequently rejecting them.
I wonder how many uses of Chatgpt and such are malicious.
It is like Clippy, which no one wanted. Hopefully, like Clippy, "AI" will be scrapped at some point.
Of course it’s a bubble! Most new tech like this is until it gets to a point where the market is too saturated or has been monopolised.
I bet if you go back to the printing press, telegraph, telephone, etc. you will find people saying "it's only a bubble!".
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is sending all of your information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer[2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
[1] https://www.youtube.com/watch?v=JPFIkty4Zvk
[2] https://en.wikipedia.org/wiki/Joe%27s_Garage#Lyrical_and_sto...
It seems here on the ground in non-tech bubble land, people use ChatGPT a ton and lean hard on AI features.
When Google judges the success of bolted on AI, they are looking at how Jane and John General Public use it, not how xleet007 uses it(or doesn't).
There is also the fact that AI is still just being bolted onto things now. The next iteration of this software will be AI native, and the revisions after that will iron out big wrinkles.
When settings menus and ribbon panels are optional because you can just tell the program what to do in plain English, that will be AI integration.
Marsha Blackburn's amendment to remove the "AI legislation moratorium" from the "Big Beautiful Bill" passed the Senate 99-1.
People are getting really fed up with "AI", "crypto" and other scams.
If you look at the survey results, a few things jump out.
Firstly, there's a strong age skew. The people most likely to benefit from AI features in their software are those who are judged directly on their computing productivity, i.e. the young. Around half of 18-35 year olds say they would pay extra. It's only amongst the old that this drops to 20%.
Secondly, when asked directly if they value a range of AI-driven features, they say yes.
The skew opens up because companies like OpenAI give AI services away for free. There's just a really strong expectation established by the tech industry that software is either free or paid for by a low and very price-stable monthly subscription. This is also true in AI: you only pay for ChatGPT if you want more features and smarter models. For the majority of things that people are doing with AI right now, the free version of ChatGPT is good enough. What remains is mostly low value stuff like better autocomplete, where indeed people are probably not that interested in paying more for it.
Unfortunately Ted Gioia tries to use this stat to imply people don't want AI at all, which is not only untrue but trivially untrue; ChatGPT is the fastest growing product in history.
In fact I also tried the communication part, outside of Outlook, but people don't like superficial AI polish.
Only when I went to cancel[1] did they suddenly make me aware that there was a "classic" subscription at the normal price, without Copilot. So they basically just upsized everyone to try to force uptake.
[1] - I'm in the AI business and am a user and abuser of AI daily, but I don't need it built directly into every app. I already have AI subscriptions, local models, and solutions.
Recently I tried to cancel the Notion accounts of some people in our org, and it wouldn't let me do it easily, so I just cancelled the whole Notion subscription. I really wish they would go out of business for pulling this kind of thing.
Highways.
In my European country you have to pay a toll to use a highway. Most people opt to use them, instead of taking the old 2-lane road that existed before the highway and is still free.
This stuff costs so much, they need mass adoption. ASAP. I didn't think about it before, but I wonder how quickly they need the adoption.
Once upon a time, not too long ago, there was someone who would bag your groceries, and someone who would clean your window at the gas station. Now you do self-checkout. Has anyone asked for this? Your quality of life is worse, the companies are automating away humanity into something they think is more profitable for them.
In a society where you don't have government protection for such companies, there would be other companies who provide a better service whose competition would win. But when you have a fat corrupt government, lobbying makes sense, and crony-capitalism births monopolies which cannot have any competition. Then they do whatever they want to you and society at large, and they don't owe you, you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
While government sponsored monopolies certainly exist, monopolies themselves are a natural outcome of competition.
Deregulation would break some monopolies while encouraging others to grow. The new monopolies may be far worse than the ones we had before.
https://www.sciotoanalysis.com/news/2024/7/12/how-much-do-yo...
maybe i'm doing something wrong here, but even ddg is annoying me with this.
It’s like IPV6, if it really was a huge benefit to the end user, we’d have adopted it already.
Just from current ARR announcements: $3B+ Anthropic, $10B+ OpenAI, plus whatever Google makes and whatever MS makes. Yeah, people are already paying for it.
The top of the list has got to be that one of their testimonials presented to investors is from "DrDeflowerMe". It's also interesting to me because they list financials which position them as unbelievably tiny: 6,215 subscribing accounts, 400 average new accounts per month, which to me sounds like they have a lot of churn.
I'm in my third year of subscribing and I'm actively looking for a replacement. This "Start Engine" investment makes me even more confident that's the right decision. Over the years I've paid nearly $200/year for this and watched them fail to deliver basic functionality. They just don't have the team to deliver AI tooling. For example: 2 years ago I spoke with support about the screen that shows you your credit card numbers being nearly unreadable (very light grey numbers on a white background), which still isn't fixed. Around a year ago a bunch of my auto transfers disappeared, causing me hundreds of dollars in late fees. I contacted support and they eventually "recovered" all the missing auto-transfers, but it ended up with some of them doubled up, and support stopped responding when I asked them to fix that.
I question if they'll be able to implement the changes they want, let alone be able to support those features if they do.
I was hoping that, after going through a number of other "advanced money management" fintech banks over the years and them selling out, that going with a place that I directly paid to use would allow it to sustain independently and add features, but it seems like the other scenario I worried about became the issue: The subscription fee severely limited their membership pool.
I also feel an urge to build spaces in the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces because AI is already everywhere in content production.
I say I imagine it's annoying because I've yet to actually be annoyed much, but I get the idea. I actually quite like the Google AI bit - you can always not read it if you don't want to. AI-generated content on YouTube is a bit of a mixed bag - it tends to be kinda bad, but you can click stop and play another video. My Office 2019 is gloriously out of date and does the stuff I want without the recent nonsense.
And of course there's no way to disable it without also losing calculator, unit conversions, and other useful functionality.
Also:
> As per SimilarWeb data 61.05% of ChatGPT's traffic comes from YouTube, which means from all the social media platforms YouTube viewers are the largest referral source of its user base,
That's deeply suspect.
If we talk about popular packages:
- people want it
- people enjoy it
- people do not pay for it

But force-feeding with strict licenses, like Ultralytics does, works. Yes, it is force-feeding, but no one wants to pay the price unless there is no other choice.
I use Kagi, which returns excellent results, including when I need non-AI verbatim queries.
Displaying what you searched for immediately is cannibalizing that market.
I'm guessing ads in AI results is the logical next step.
Badly summarise articles.
Outright invent local attractions that don’t exist.
Give subtly wrong, misleading advice about employment rights.
All while coming across as confidently authoritative.
People don't know how to search, that's it. Even the HN population.
Every time this gets posted, I ask for one example of thing you tried to find and what keywords you used. So I'm giving you the same offer, give me for one thing you couldn't find easily on Google and the keywords you used, and I'll show you Google search is just fine.
How do you set up an encrypted file on Linux that can be mounted and accessed the same as a hard drive?
(note: luks, a few commands)
You will see a nonsensical AI summarization, lots of videos, and junk websites being promoted; then you'll likely find a few blogs with the actual commands needed. Nowhere is there a link to a manual for LUKS or similar.
In the past, these same searches turned up the straightforward, ad-free blogs as the first links, then some man pages, then other unrelated things; now I run them and get garbage.
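For reference, the "few commands" in question look roughly like this (a sketch using a file-backed container; the file name, size, and mount point are placeholders, and the commands need root):

```shell
# Create a 1 GiB file to hold the encrypted container
fallocate -l 1G ~/vault.img

# Format it as a LUKS container (prompts to set a passphrase)
sudo cryptsetup luksFormat ~/vault.img

# Unlock it, creating the mapped device /dev/mapper/vault
sudo cryptsetup open ~/vault.img vault

# Put a filesystem on it (first use only)
sudo mkfs.ext4 /dev/mapper/vault

# Mount it like any other drive
sudo mount /dev/mapper/vault /mnt

# When finished, unmount and lock it again
sudo umount /mnt
sudo cryptsetup close vault
```

Modern cryptsetup attaches the loop device for a file-backed container automatically; on older versions you may need to set up the loop device yourself with losetup first.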
I cannot take OP seriously when the post starts like that. If you are using Microsoft services and products in 2025, well, it serves you right.
Big companies can force Microsoft, Google, and the like not to use their data for AI training; small companies have no chance.
Everything nowadays is cloud based; all you need is internet and a browser. But nope, people and companies are still using Windows, spending millions on AV software that they wouldn't need if a decent Linux distro were being used instead.
By decent I mean user-friendly, such as Linux Mint, or at worst Ubuntu (Ubuntu lost its way years ago; it's still a solid option for basic users, not for advanced users).
Software is loyal to its owner. If you don't own your software, it won't be loyal to you. It can be convenient for you, but as time passes and interests change, software you don't own can turn against you. And you shouldn't blame Microsoft or its utilities. It doesn't owe you anything just because you put effort into it and invested time in it. It'll work according to who it's loyal to: who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
Also, the requests aren't answered locally. Your data is forwarded to the AI vendor's datacenter, processed, and the answer returned. You can be absolutely certain that they keep a copy of your data.
- me, a few years ago.
I find the whole situation with regard to AI utterly ridiculous and boring. While those algos might have some interesting applications, they're not as earth-shattering as we are made to believe, and their utility is, to me at least, questionable.
Love this quote!
The whole sales-pitch for AI is predicated on FOMO - from developers being replaced by AI-enabled engineers to countries being left-behind by AI-slop. Like crypto, the idea is to get-big-fast, and become too big to fail. This worked for social-media but I find it hard to believe it can work for AI.
My hope is that: while some of the people can be fooled all the time, all the people cannot be fooled all the time.
As a data point, the "Stop Killing Games" one has passed the needed 1M signatures so is in good shape:
People would be less upset if AI were shown to support the person. That would also let the person curate the output, and ignore it if needed, before sharing it, so it's a win/win.
But is the big money in revolution?
Some are excited about it. Some are actually making something cool with AI. Very few are both.
"Most people won't pay for AI voluntarily-just 8% according to a recent survey. So they need to bundle it with some other essential product."
"You never get to decide."
Silicon Valley and Redmond have been operating this way for quite some time.
They have been effectively removing choice long before this "AI" push. Often accomplished through "defaults".
This "AI" nonsense may be the most bold example.
"But if AI is bundled into existing businesses, Silicon Valley CEOs can pretend that AI is a moneymaker, even if the public is lukewarm or hostile."
"The AI business model would collapse overnight if they needed consumer opt-in. Just pass that law, and see how quickly the bots disappear. "
"You don't get to choose. You're never asked. It just shows up. Now you have to deal with it."
"If they gave people a choice, they would reject this tyranny masquerading as innovation."
"The AI business model would collapse overnight if they needed consumer opt-in."
We never get to find out what would happen.
One comment I would like to add here.
By removing meaningful choice and creating fabricated "demand", these so-called "tech" companies (unnecessary intermediaries), when faced with antitrust allegations, then try to argue something like, "Everyone is using it, therefore everyone wants it." And, "This shows everyone prefers us over the alternatives."
"Frank Zappa offers a possible mission statement for Microsoft back in 1976, a few months after the company is founded."
RIP.
But using it heavily has a corollary effect: engineers learn less as a result of their dependence on it.
Less learning all around equals enshittification. Really not looking forward to this.
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
That is what the natural state of capitalism _would_ be in a world of honest businesspeople and politicians.
I don’t see the utility; all I see is slop and constant notifications in Google.
You can say skill issue, but that’s kind of the point: this was all dropped on me by people who don’t understand it themselves. I didn’t ask for, or want to build, the skills to understand AI. Nor did my bosses: they are just following the latest wave. We are the blind leading the blind.
Like crypto, AI will prove to be a dead-end mistake that only enabled grifters.
The reason your bosses are being obnoxious about making people use the internal AI tool is to push them into thinking about things like this. Perhaps at your company it’s genuinely not useful, but I’ve seen a lot of people say that who I’m pretty confident are wrong.
What about the impact on your audience? A lot of people are going to view your presentations more negatively based on their views about AI.
Dear administrator,
We recently added the best of Google AI to Workspace plans to help your teams accomplish more, faster. In addition, we added new, simple to use security insights and controls to help you keep your business data safe.
We also announced updated subscription pricing. Your subscription will be subject to this updated pricing starting July 7, 2025.
We’ve provided additional information below to guide you through this change.

What this means for your organization
New Workspace features
Your updated pricing reflects the many new features now included in your Google Workspace edition. With these changes, you can:
Summarize long email threads, draft replies, and compose professional emails faster and easier with Help me write in Gmail
Write and refine documents with Gemini in Docs
Generate charts and insights with Gemini in Sheets
Automatically capture meeting notes so you can focus on the conversation with Take notes for me in Meet
Get AI assistance with brainstorming, researching, coding, data analysis, and more with Gemini Advanced
Accelerate learning by uploading your docs, PDFs, videos, websites, and more to get instant insights and podcast-style Audio Overviews with NotebookLM Plus
Enhance your organization’s security with security advisor, a new set of insights and tools. Use security advisor for threat defense with app access protection, account security with Gmail Enhanced Safe Browsing, and data protection capabilities
Customize email campaigns in Gmail. Add color schemes, logos, images, and other design elements
Starting as early as July 7, 2025, your Google Workspace Business Plus subscription price will be automatically updated to $22.00* per user, per month with an Annual/Fixed-Term Plan (or $26.40 if you have a monthly Flexible Plan). The specific date that your subscription price will increase depends on your plan type, number of user licenses, and other factors.
*Prices will be updated in all local payment currencies.
If you have an Annual/Fixed-Term Plan, your subscription will be subject to updated pricing on your next plan renewal starting July 7, 2025. We will provide you with more specific information at least 30 days before updates to your Google Workspace plan pricing are made.

What you need to do
No action is required from you. Features have already rolled out to Google Workspace Business Plus subscriptions, including AI features in many additional languages, and subscription prices will be updated automatically starting July 7, 2025.
We know that data security and compliance are top priorities for business leaders when adopting AI, and we are committed to helping you keep your data safe. You can understand how to effectively utilize generative AI in your organization, and learn how to keep your data confidential and protected.

We’re here to help
If you wish to make changes to your subscription or payment plan, please visit the Admin console. Find which edition and payment plan you have on Google Workspace Admin Help.
Refer to the Help Center for details regarding the AI features and price updates, including updated local currency pricing.
That's such a horrific new-speak way of saying your subscription price has been raised. Just say it! This soft bullshitty choice of words is infuriating.
The AI community treats potential customers as invaders. If you report a problem, the whole community turns on you, trying to convince you that you're wrong, or that you only reported the problem because you hate the technology.
It's pathetic. It looks like a viper's nest. Who would want to do business with such people?
Actually promising AI tech doesn't even get center stage; it never gets the chance.
Better still, you could do that even with a hot air balloon and late-Middle-Ages technology! There is even an SF book series about that:
Any minor comment or constructive criticism is FUD and met with "oh better go destroy a loom there, Ned Ludd".
It's pathetic and I grow tired of it.
Thanks for pointing that out.
"I don’t want AI customer service—but I don’t get a choice.
I don’t want AI responses to my Google searches—but I don’t get a choice.
I don’t want AI integrated into my software—but I don’t get a choice.
I don’t want AI sending me emails—but I don’t get a choice.
I don’t want AI music on Spotify—but I don’t get a choice.
I don’t want AI books on Amazon—but I don’t get a choice."
The last is especially egregious. I don’t want poorly-written (by my standards) books cluttering up bookstores, but all my life I’ve walked into bookstores and found my favorite genres have lots of books I’m not interested in. Do I have some kind of right to have stores only stock products that I want?
The whole thing is just so damn entitled. If you don’t like something, don’t buy it. If you find the presence of some products offensive in a marketplace, don’t shop there. Spotify is not a human right.
Of course you can opt out. People live in the backwoods of Alaska. But if you want to live a semi-normal life, there is no option. And absolutely, people should feel entitled to a normal life.
Six-plus months ago they put a chatbot in the bottom right corner of their website that literally covers up buttons I use all the time for ordering, so that I have to scroll now in order to access those controls (Chrome, MacOS). After testing it with various queries it only seems to provide answers to questions in their pre-existing support documentation.
This is not about choice (see above, they are the only game in town), and it is not about entitlement (we're a tiny shop trying to serve our customers' often obscure book requests). They seemed to literally place the chatbot buttons onto their website with no polling of their users. This is an anecdotal report about Ingram specifically.
I don't think it's entitlement to make a well-mannered complaint about how little choice we actually have when it comes to the whims of the tech giants.
The OP's point is that increasingly, we don't have that choice, for example, because AI slop masquerades as if it were authored by human beings (that's, in fact, its purpose!), or because the software applications you rely on suddenly start pushing "AI companions" on you, whether you want them or not, or because you have no viable alternatives to the software applications you use, so you must put up with those "AI companions," whether you want them in your life or not.
It's ridiculous to compare bad human books with bad AI books, because there are many human books which are life-changing, but there isn't a single AI book which isn't trash.
Probably no one enjoys AI books, though. I did my best at playing devil's advocate on that above.
The whole point is that "just don't buy it" as a strategy doesn't work anymore for consumers to guide the market when the companies have employed the rock-for-dessert gambit to avoid having to try to sell their products on their merits.
I said no. Respect my preferences.
It's not false statistics. "Nobody wanted or asked for this" is literally true.
People are going to lord it over others in pursuit of what they think is proper.
Society is over-rated, once it gets beyond a certain size.
Along the same lines, I am currently starting my morning by blocking ranges of IP addresses to get Internet service back, thanks to someone's current desire to SYN-flood my webserver, which, being hosted in my office, takes down my office Internet with it.
It may soon come to a point where I choose to block all IP addresses except a few to get work done.
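For what it's worth, the tedious part of that kind of blocking is turning a long list of individual attacker addresses into a few firewall rules. A minimal sketch of how one might collapse logged IPs into CIDR ranges (the addresses here are hypothetical documentation addresses; the actual firewall commands depend on your setup):

```python
import ipaddress

def summarize_blocklist(ips):
    """Collapse individual attacker IPs into the smallest set of CIDR ranges."""
    nets = [ipaddress.ip_network(ip) for ip in ips]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# 256 single hosts from one subnet collapse into a single /24 rule.
attackers = ["203.0.113.%d" % i for i in range(256)]
print(summarize_blocklist(attackers))
```

Each resulting range can then be fed to whatever your firewall uses for drop rules, which beats adding hundreds of per-address entries by hand.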
People gonna be people.
sigh.
The same issue plagues many private companies. I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from staff. They're probably thinking that their jobs will be safe if they refuse to adopt AI tools.
What I have seen is employees spending days asking the model again and again to actually generate the document they need, and then submit it without reviewing it, only for a problem to explode a month later because no one noticed a glaring absurdity in the middle of the AI-polished garbage.
AI is the worst kind of liar: a bullshitter.