Edit: there it is. `vibe-coded and deployed with Claude Code`
...and then you read something like this, and realize, "Yeah, no, I have room for improvement, sure, but thank f*** I'm not like this."
The entire post reads like someone high on their own supply. Just when I think they're getting to a point, they pull out every fifty-dollar word and concept they possibly can (explaining none of it, nor linking to any Wikipedia articles to help readers understand), ostensibly to sound smarter than you and therefore entitled to authority.
I'm sure there's a law/rule/principle for this concept somewhere, but if you can't explain your point simply, you don't understand the topic you're trying to communicate. This one-off, vibe-coded (RETCH) slop-slinger is a prime example.
Pay no attention to the charlatan cosplaying as tenured academia.
The article “The World Is Ending. Welcome to the Spooner Revolution” from Aethn delves into the transformative impact of advanced AI models on the global socio-economic landscape.
The author critiques the belief in a static “end of history,” suggesting that recent advancements in AI, particularly large language models (LLMs), are catalyzing a profound shift in how work and economic structures function. These AI models have evolved beyond previous limitations, enabling automation of tasks across various knowledge-based professions without extensive fine-tuning.
This technological leap is diminishing the traditional value of wages, as individuals can now leverage AI to perform tasks that previously required entire teams. Consequently, there’s a growing trend toward self-employment and entrepreneurial ventures, echoing the ideals of 19th-century thinker Lysander Spooner, who advocated for a society where individuals operate independently of wage-based systems.
The article posits that this “Spooner Revolution” will lead to a surge in small enterprises, a decline in traditional corporate structures, and a reevaluation of educational and economic institutions to accommodate this new paradigm.
I'm not sure I understand your argument for early-stage venture funding going away. With AI, more is proven before funding, so the round has to be bigger, since the company is inherently later-stage by the time it reaches VC, no?
The self-employed are now able to develop their ideas rapidly while still employed, reducing their risk and cost.
Per VC, that's right; and secondarily, it's more difficult to justify earlier funding when one no longer needs to hire a substantial team. I expect the current major VC firms to get even bigger.
Nothing interesting, they're just trying to vibe code their way into being a thought leader. Which is both hilarious and sad.
Even if we did restructure society around everyone being their own business owner (and man, are the powers that be trying to do so for a whole slew of awful, worker-screwing reasons), the level of competition OP tries to describe (of sole proprietors competing 1:1 against corporate behemoths, as if corporations somehow wouldn't also benefit from such AI-amplification of output) is highly unlikely. The driver of massive AI investment at present is to lessen the impact of workers in an attempt to drive down wages and consolidate power within the hands of the few, prime corporate players; such a future filled with self-starting entrepreneurs would still be limited to those who can afford the tokens or subscription plans to the largest AI models (or have the hardware and expertise at home to roll their own), and so you'd just have a global gig economy for workers with three or four companies hoovering up most of the excess for themselves through AI costs.
Look, I'd love nothing more than to run my own little IT Services firm with a sociable partner to handle the social networking aspect, while I get to tinker, build, and make people smile all day long at how cool technology can be. AI, however, ain't it. If anything, it threatens the survival of corporate knowledge workers and entrepreneurs both, by grossly devaluing their labor at a time of massive wealth inequality and severe employment precarity.
We can do better than that.
In this world the neurodivergent are empowered: they no longer have to persuade a team or corporation of their ideas. They can build their ideas themselves and form a limited partnership with someone who has the talents they lack.
I make it a point to describe the new economic order without regard to its desirability. Other authors on the subject (e.g. Lysander Spooner, Rothbard), however, would welcome the development in terms of its social welfare.
I don't know why so many people are obsessed with Amdahl's law as some universal argument. The quoted section is not only 100% incorrect, it sweeps the blatantly obvious energy problem under the rug.
Imagine going to a local forest and pointing at a crow and shouting "penguin!", while there are squirrels running around.
What Amdahl's law says is that, for a fixed problem size, speedup is capped by the serial fraction: with infinite processors the parallel section ceases to be a bottleneck, and the serial part dominates. This is irrelevant for AI, because people throw more hardware at bigger problems. It's also irrelevant for a whole bunch of other problems. Self-driving cars aren't all connected to a supercomputer. They have local processors that don't even communicate with each other.
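The fixed-problem-size claim above is easy to check numerically. A minimal sketch (the helper name `amdahl_speedup` is mine, not from the thread): with parallel fraction p, speedup on n processors is 1 / ((1 - p) + p / n), which never exceeds 1 / (1 - p) no matter how many processors you add.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n processors when a fraction p of the
    work parallelizes and the remaining (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work, the speedup is modest on a few cores
# and approaches the 1 / (1 - p) = 20x ceiling as n grows without bound.
print(amdahl_speedup(0.95, 8))      # a few cores: well under 20x
print(amdahl_speedup(0.95, 10**9))  # effectively infinite processors
```

Note the cap depends only on the serial fraction, which is why the law bites for a fixed workload but says nothing about workloads that grow with the machine.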
>The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few.
>And yet these perinatal automatons are totally eviscerating all knowledge based work as the relaxation of the original hysterics arrives.
These two sentences contradict each other. You can't eviscerate something and only mostly "replace" it.
This is a very disappointing blog post that focuses on wankery over substance.
> This is irrelevant for AI, because people throw more hardware at bigger problems
AGI is a fixed problem, namely Solomonoff induction. Further, Amdahl's law is a limitation on neither software alone nor a supercomputer alone.
Both inference and training rely on parallelization, and LLM inference has multiple serialization points per layer. Végh (2019) quantifies how Amdahl's law limits performance in neural networks [1], further stating:
"A general misconception (introduced by successors of Amdahl) is to assume that Amdahl's law is valid for software only." It applies to a neural network just as it does to the problem of self-driving cars.
> These two sentences contradict each other
There is no contradiction, only a misunderstanding of what "eviscerate" means; and even under that incorrect definition and the threshold test it implies, the claim still holds.
1. https://pmc.ncbi.nlm.nih.gov/articles/PMC6458202/
Further reading on Amdahl's law with respect to LLMs:
2. https://medium.com/@TitanML/harmonizing-multi-gpus-efficient...
3. https://pages.cs.wisc.edu/~sinclair/papers/spati-iiswc23-tot...
"negentropic" verbiage makes
The world go 'round, no?
#Haiku
That might be why people assume this is AI writing and frankly the same thing occurred to me about eight paragraphs in.
And your stated experience of working at FAANG and starting a multi-billion dollar company is so widespread you might actually be able to fill an entire boutique hotel with people who share it. As stego-tech says, you might wanna investigate the labor experience of people who, I dunno, have country or folk songs written about their jobs.
I mean, you're obviously a bright cat, but unfortunately merely being bright isn't the key to good communications. It's wanting other people to care about what you're saying and learning how to make them care.
"Replaces". Uh-huh.