But you will still need to sustain ex-workers if they can't get normal jobs, and those same people at the top will not tolerate the taxes required to sustain a basic standard of living for a much wider population. They already can't tolerate the idea of a much smaller population using food assistance or healthcare from the government.
That leads me to think this is not really a visionary statement, but just a signal that Mark isn't intentionally trying to bring about a new dystopia, and here's his proof. And if a dystopia happens to come about, you can't blame him because he had pure intentions; clearly it was everyone else who just didn't agree with him and it's their fault.
Maybe make Meta a not-for-profit and there might be some credibility here.
If so, the logical choice would be to change the name from "Meta" to "AGI".
From Zuckerberg's behavior since the beginning, it's clear that what he wants is power. And if you have the kind of mental health disorder where you believe you know better than everyone and deserve power over others, then that's not dystopian at all.
Everything he says is PR virtue signaling. Judge the man on his actions.
Kind of an unrelated topic but I'm reminded of a video essay in which the creator talks about this. They put it very kindly, IMO:
> Rich and powerful people have quite a different attitude and approach to truth and lies and games compared to ordinary people.
Which sounds like a really nice way of saying that rich and powerful people are dishonest by ordinary standards.
Apart from you, of course. So I'm sure you'd be OK if the government taxed your higher-than-average tech wage until your take-home pay matched that of a train conductor or bus driver, like in Western Europe, and thereby fixed the wage gap you hate so much. Would you like that solution?
Caption this: it's only a problem when the people who earn more than me are greedy, but my greed is fine. It's OK for me to out-earn others because "I've earned it", unlike Zuckerberg, who didn't earn it.
I live in Europe and earn ca. 6 times more than my friend who is a bus driver in the same city. We both have access to free education and, if we wish, also free healthcare, for which I am paying slightly more, but I really don't mind.
Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.
Also,
> As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.
Yeah, about that... Sure, Mark can choose to fly to his private Hawaiian island or his Tahoe bunker and mess around with the metaverse and AI and whatever he chooses. 99.9% of the population has a regular job that they go to for subsistence. Michael from North Dakota has not been doing bookkeeping for SMEs because it was always the pursuit of his dreams. I also see no reason at all to believe we spend more time on creativity, culture, relationships, or enjoying life than before. That last point especially has been in free fall over the last 50 years, by the look of every single mental well-being metric around.
[1]: https://www.nytimes.com/2025/07/14/technology/meta-superinte...
I wish this were true for the average person, but I'm not sure that it is.
That's not pulling a trick; that's doing precisely what Zuck said he would do. On Dwarkesh's podcast in April 2024, Zuck said that models are a commodity right now, but that if models became the biggest differentiator, Meta would stop open-sourcing them.
At the time he also said that the model itself was probably not the most valuable part of an ultimate future product, but that he was open to changing his mind on that too.
You can whine about that anyway, but he's not tricking anyone. He has always been frank about this!
> Open Source AI is the Path Forward.
> Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.
> We need to control our own destiny and not get locked into a closed vendor.
> We need to protect our data.
> We want to invest in the ecosystem that’s going to be the standard for the long term.
> There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives.
> I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors [...] As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.
> The bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.
> I hope you’ll join us on this journey to bring the benefits of AI to everyone in the world.
> Mark Zuckerberg
Pulling the "closed source for safety" card once it makes economic sense for you, after having clearly outlined why you think open source is safer and how you are "committed" to it "for the long term" and for the "good for the world", is mainly where my criticism is coming from. If he were upfront in the new blog post about closing the source for competitive reasons, I would still find it a distasteful bait and switch, but much less so than trying to slap a safety sticker on it after having (correctly) trashed others for doing the same.
https://about.fb.com/news/2024/07/open-source-ai-is-the-path...
Oh, is it now? So you know for a fact that intelligence comes from token prediction, do you, Mark?
Look, multi-bit screwdrivers have been improving steadily as well. I've got one that stores all its bits in the handle, and one with over three dozen bits in a handy carrying case! But they're never going to suddenly, magically become an ur-tool, capable of handling any task. They're just going to get better and better as screwdrivers.
(Well, they make a handy hammer in a pinch, but that's using them off-spec. The analogy probably fits here, too, though.)
My POINT, to be crystal clear, is that Mark is saying that A is getting better, so eventually it will turn into B. It's ludicrous on its face, and he deserves the ridicule he's getting in the comments here.
But I also want to go one step further and maybe turn the mirror around a bit. There's an odd tendency here to do a very similar thing: to observe critical limitations that LLM tools have, that they have always had, and that are very likely baked into the technology and science powering these tools, and then to do the same thing as Mark: just wave our hands and say, "But I'm sure they'll figure it out/fix it/perfect it soon."
I dunno, I don't see it. I think we're all holding incredible screwdrivers here, which are very impressive. Some people are using them to drive nails, which, okay, sure. But acting like a screwdriver will suddenly turn into precision calipers (and a saw, and a level, and...) if we just keep adding on more bits, I think that's just silly.
Facebook's mission of "connecting the world" turned out to be the absolute worst thing anyone should ever try to do. Humans are social creatures, yes, but every connection we make costs energy to maintain, and at a certain point (Dunbar's Number) we apply the minimal amount of energy and effort. With Internet anonymity, that means we are actually incapable of treating each other as people on the Internet, leading to the rise of toxicity and much, much worse.
Mark has never understood this, and as his fortune is built around not understanding this, he never will.
There is nothing good that will come from Meta's "superintelligence" and this vision is proof.
The core problem is the gamification of social interaction. The 'Like' button, and everything like it for the things people say or show, is hands down the worst thing to happen on the internet. Everywhere they can, people whore for karma (unless they spend a lot of mental effort fighting that urge). How primitive the related moderation systems are directly affects how much primitive shit gets rewarded, and alas, most moderation systems are ridiculously primitive.
So, dopamine hits for saying primitive shit.
Well, that's because there aren't people on the internet! I mean, yes, we technologists understand that there are often people pulling knobs and levers behind the scenes as an implementation detail, so technically they are there. But they are only implementation details, not what makes it what it is. If you replaced the implementation with another algorithm that functioned just as well, nobody would notice. In that sense, it is just software.
> leading to the rise of toxicity and much, much worse.
It is not so much that it has led to anything different, but that those who used to be in the forest yelling at animals as if they were human moved into civilized areas once they started yelling at computers as if they were human. That has brought their mental disorders to where they are much more visible.
>We believe the benefits of superintelligence should be shared with the world as broadly as possible.
So... ads.
I think it would go back to income-based tiers, though. You want more assistance, pay $200 per month. Even more, maybe $2,000 (for companies). Then, if you don't want to pay, you get contextual ads (which would work here, because LLMs can contextualize far better) and a lower quality of service.
Any time a CEO publishes such an empty, wordy essay, it's probably earnings-reporting time. I can't shake the feeling it's a public sub-reply to one doubting investor, or a cluster of them, who have started to doubt the CEO's vision for the company, or find the lack of one on a certain topic concerning.
[1]https://www.anthropic.com/research/project-vend-1?ref=blog.m...
Models since then have been able to run it profitably. Incredible how fast things are progressing.
Meanwhile I can't properly find items that are listed on FB marketplace.
It is always abundance for the super rich, scarcity for those in jobs.
How can I be free to do my gardening whenever I want when the landlord is asking for $11K rent in my SF flat?
So eventually they will do the opposite of this 'vision' and put this super intelligence to replace jobs.
Also, what happened to the metaverse, the company's namesake, into which Meta invested tens of billions?
They bought a shit ton of GPUs before the LLM boom, which gave them a running start on training their own model. Zuck talks about it in an interview with Lex Fridman.
This is the fatal flaw. It's been recognized explicitly for at least 140 years that the price of land rent rises in lockstep with productivity increases, guaranteeing there is no "escape velocity" for the labor class regardless of how good technology gets.
But it was ultimately lost in translation. The layman heard: Go to college/university to become a more appealing laborer to employers. And thus nothing improved for the people; the promises of things like higher income never occurred — incomes have held stagnant.
Technology increases aren't there so you work fewer hours for the same pay; they're there so your business's owner gets more money from you working the same hours.
If a machine gets invented that can do your job it's not like you can now go home and relax for the rest of your life and still keep receiving your pay cheques. This utopia doesn't exist.
You can work his fields in exchange for most of the harvest of course!
With the metaverse it won't matter that you live in a 3x3m cubicle because you will use your VR headset to pretend you live in a spacious and comfortable place.
That's how it was in snow crash anyway, where the term comes from.
Just running on Meta's servers, with Meta's software and Meta's tracking and algorithms.
> "This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole"
says the guy who spent most of the last three years laying people off.
There's just too much sliminess to dissect. I'll leave it at that.
One thing's for sure: these evil megacorps will use this tech in a dystopian and extractive way. Nothing ever changes.
I don't think the author of that book is unbiased, and after some healthy debate with friends, I imagine there are a number of different perspectives on the facts. But it seems clear that, well before it was public knowledge outside the company, there was clear visibility inside it into the harms being caused by the platform, and a willingness to ignore them.
Facebook (now Meta) turned human attention into a product. They optimized for engagement over wellbeing and knew that their platforms were amplifying division and did it anyway because the metrics looked good.
It's funny, because I aspire to many of the same things cited in this vision -- helping realize the best in each individual, giving them more freedom, and critically, helping them be wise in a world that very clearly would prefer them not to be.
But the vision is being pitched by the company that already knows too much about us and has consistently used that knowledge for extraction rather than empowerment.
Does the average American worker today spend a ton of time in productivity software?
I know and Zuckerberg surely knows the impact on labor will be much more pervasive than that, so it seems like an odd way to frame the future.
"Average?" No. But many millions of people, yes.
The majority of people in my company spend their day tied to Microsoft Office.
Which bring its own problems when managers don't understand that building a computer program isn't the same speed, complexity, and skill level as making a PowerPoint presentation.
Considering that the most common use for "AI" is to take jobs away from creators like artists, musicians, illustrators, writers, and such, I find this statement hard to believe.
So far, all I've seen is AI taking money away from the least-paid workers (artists, et al.) and giving it to tech billionaires.
Creating what? AI slop?
- Maximum data extraction
- Behavioral modification for profit
- Attention capture and addiction maintenance
"Personal superintelligence" serves all three perfectly while appearing to do the opposite.
"We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible."
So maybe no more open source because of "safety"?
It also makes you wonder what they do with all of that information. But surely this is altruism.
Instead, you insinuate and play into fantasy and wishful thinking.
Just to add some food for thought: is superintelligence simply a very high IQ, higher than that of the top humans? If so, we'd need a way to measure it, since existing IQ tests are designed for human intelligence. Or is superintelligence about scale and order of magnitude: many high-IQ minds working together? That would imply a different kind of threshold. But perhaps the key idea is that superintelligence is inherently uncapped: once we reach a level we consider "superintelligent", we can still imagine something even more advanced that fits the same label.
- their eyesight is too poor to read
- their paws are not designed for fine manipulations so they cannot write or type
- their throats and mouths are not nearly as nimble as ours, so they cannot vocally communicate detailed information
Even if there were a Newton-level dog, it wouldn't be able to access the ideas of an earlier Euclid-level dog. Human knowledge is not just about our big brains; we've developed many physical features that make the transmission of information far easier than in other species.
OTOH dogs do have a good intuitive "common-sense" understanding of arithmetic, geometry, and physics. It is the unique gift of humans that we can formalize and then extend this intuition, but this ability (and intelligence as a whole) relies on nonverbal common sense.
But at least he’s trying to signal benevolence. People getting trapped into their projected image is a thing, so in this day and age I’m going to take this as a win.
Can you put a date on this please?
Thanks, tantalor
Does anyone know what this is referring to?
I don't think anyone knows what he is referring to. Maybe AlphaEvolve? Certainly not Llama.
The company should be broken up, its assets auctioned, its IP destroyed.
Sorry, but the Jevons paradox[1] returns yet again.
If you make workers more efficient, they won't be freed up to spend more time creating and connecting; there will just be more work.
Creating more efficient steam engines didn't reduce coal consumption; it just meant more steam engines. The second-order effects of efficiency don't work the way we think they work.
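The steam-engine reasoning can be sketched with a toy demand model (the elasticity value and all numbers below are made-up assumptions for illustration, not historical data):

```python
# Toy sketch of the Jevons paradox: when engines get more efficient,
# effective power gets cheaper, demand for work rises, and total coal
# burned can go UP. Elasticity and base demand are arbitrary assumptions.

def coal_use(efficiency, demand_elasticity=1.5, base_demand=100.0):
    """Total coal consumed when engines become `efficiency` times better.

    Cheaper effective power boosts the quantity of work demanded by
    efficiency ** demand_elasticity; coal burned is that work divided
    by the efficiency factor.
    """
    work_demanded = base_demand * efficiency ** demand_elasticity
    return work_demanded / efficiency

before = coal_use(1.0)  # baseline: 100.0 units of coal
after = coal_use(2.0)   # engines twice as efficient
# With elasticity > 1, total coal use rises despite better engines.
print(before, after)
```

The direction of the effect hinges entirely on the assumed elasticity: below 1, efficiency gains really would cut total consumption; above 1, they backfire, which is the commenter's point about "more work".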
1. LLMs and "AI" broadly can become a very useful and powerful technology that can have a transformative effect on industry and so on.
2. Talk of "superintelligence" is total horseshit.
What has intelligence (let alone superintelligence), or the lack of it, got to do with the last two? All these discussions about AGI seem to have reduced what it means to be a human being to a token generator.
EDIT:
to clarify, this is sarcasm
More people are dying from starvation today than there were people on earth 200 years ago. Celebrating our achievements in making shareholders rich is one thing, but taking credit for freeing the people? Yikes. Mark is more out of touch than seems possible.