That sounds like the onset of a certain type of dark age. Eventually the shiny bits will fall off too when the underlying foundation crumbles. It would be massively ironic if the age of the "electronic brains" brought about the demise of technological advancement.
Windows is maintained by morons, and gets shittier every year.
Linux is still written by a couple of people.
Once people like that die, nobody will know how to write operating systems. I certainly couldn’t remake Linux. There’s no way anyone born after 2000 could, their brains are mush.
All software is just shit piled on top of shit. Backends in JavaScript, interfaces which use an entire web browser behind the scenes…
Eventually you’ll have lead engineers at Apple who don’t know what computers really are anymore, but just keep trying to slop more JavaScript in layer 15 of their OS.
Nowadays there are no tastemakers, and thus you need to be a public figure in order to even find your audience / niche in the first place.
That's always been the case depending on what you're trying to do, though. If you want to be Corporation Employee #41,737, or work for the government, you don't need a "personal brand"; just a small social network who knows your skills is good enough. If you're in your early 20s and trying to get 9 figures of investment in your AI startup, yeah you need to project an image as Roy from the article is doing.
It's amplified a bit in the social media world, but remember that only ~0.5% of people actively comment or post on social media. 99.5% of the world is invisible and doing just fine.
That being dismissed as a "nice to have" is like watching people waving flags while strapping C4 to civilizational progress.
He writes COBOL and maintains a banking system that keeps the world running. Literally like a billion people die if the system he maintains fails. I maintain a VC funded webpage that only works half the time. I make more than him, a lot more.
This has to be an exaggeration.
Turing's view, in fact, is similar: "There would be great opposition [to AI] from the intellectuals [read programmers in the context of this thread] who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits."
[0] Thomas Bernhard's The Loser is a fantastic account of the opposite standpoint---of the second best piano student, who cannot stand existing in a world with Glenn Gould.
I find this a great choice for an opener. If linesmen across the nation go on strike, it's a week before the power is off everywhere. A lot of people seem to think the world is simple, and a reading of 'I, Pencil' would go far toward enlightening them as to how complicated things are.
> secure the internet...
Here, again, are we doing a good job? We keep stacking up turtles, layers and layers of abstraction rather than replace things at the root to eliminate the host of problems that we have.
Look at Docker, look at Flatpaks... We have turned these into methods to "install software" (now with added features) because it was easier to stack another turtle than it was to fix the underlying issues...
I am a fan of the LLM-derived tools, use them every day, love them. I don't buy into the AGI hype, and I think it is ultimately harmful to our industry. At some point we're going to need more back-to-basics efforts (like systemd) to replace and refine some of these tools from the bottom up rather than add yet another layer to the stack.
I also think that agents are going to destroy business models: cancel this service I can't use, get this information out of this walled garden, summarize the news so I don't see all the ads.
The AI bubble will "burst", much like the Dotcom one. We're going to see a lot of interesting and great things come out of the other side. It's those with "agency" and "motivation" to make those real foundational changes that are going to find success.
Maybe it will be worse now but I kind of feel like the 90% is just more visible than it used to be.
In addition to the limits of human planning and intellect, I'd also add incentives:
as cynical as it sounds, you won't get rewarded for building a safer, more robust and reliable machine or system until it is agreed upon that the risks or problems you address actually occur, and that the cost of prevention actually pays off.
For example, there would be no insurance without laws and governments, because no person or company would ever pay into a promise that has never been kept.
It's not even limited to modern technology. If you go talk to certain grievance-driven individuals from tribal backgrounds (for lack of a better term) who have produced nothing for the last 10000 years, they will levy similar accusations against the very institutions that are providing them with healthcare their ancestors could only have dreamed of. In some areas, even agriculture is seen as suspect. It's ridiculous.
It's scary to me how both sides of the American political aisle have suddenly turned anti-tech and are buying into the same arguments. Gross.
P.S. But these Chinese robots are really scary.
I'm glad you appreciate the contributions of compiler engineers, but seeing as my current job is writing compilers for AI chips... I am proud every time I see someone use AI, in their business, in their life, etc., because it's my small contribution to the ever-growing American economy and the forward march of human progress.
I'm also so tired of people making fun of techbros. I'm glad techbros exist. They actually make the world a novel place to live in. People who want to go back to living in the dark ages should go move in with the Amish. The sudden turnaround of tech workers (supposedly paragons of human progress) into unquestioning Luddites is disappointing.
Taking a sober look at the state of software, we observe a few things.
The services offered by modern software to users, as a whole, have remained largely the same over the past ~5 years. The state of software quality is in rapid decline, with enshittification and rent-seeking running extraordinarily rampant. Software security remains in the same disaster state it has been in for the past 20 years: software resilience is in stagnation, governments and private institutions stockpile vulnerabilities, and security researchers and auditors can consistently find new vulnerabilities. The rest of American society outside of the tech sector is currently facing a standard-of-living nosedive, and clearly it has not benefited from the tech sector's financial proliferation in the AI space.
Realistically, I cannot help the feeling that we're headed towards a reality where the 4th amendment is dead, and machine learning models process everything about you to ultimately extract more from you. No privacy for you! No agency for you! Only indentured servitude, and constant fear.
I fully recognize my take is ahead of its time, but I concur that the systems-oriented point of view is our way out of this hell. Specifically, software should be conceived under the following ideals: (1) software should be as simple as possible, and provide its intended services with as little bloat as possible; (2) specifications of software should be as concise and simple as possible; (3) specifications should be expressive enough to capture security-relevant guarantees, e.g. cryptographic security properties; (4) proofs verifying that software satisfies its specifications should live intrinsically to the implementation, and should be as simple as possible; (5) proof-checkers should be verified. I feel the academic Formal Methods, Programming Languages, Systems, Security, and Cryptography communities, as well as the internet standardization community, are slowly converging to this ideological consensus, but I also think in other ways we are farther off than ever. With respect to these ideals, the "building" mindset that twitter has adopted is deeply toxic. And obviously Silicon Valley has their heads in the sand when it comes to this.
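To make ideal (4) concrete, here is a toy sketch in Lean (my choice of proof assistant for illustration, not anything mandated by the comment above): the implementation and the machine-checked proof of its specification sit side by side in the same source file, so the spec can never silently drift from the code.

```lean
-- Ideal (4) in miniature: implementation and proof live together.
-- A deliberately trivial "implementation":
def double (n : Nat) : Nat := n + n

-- Its specification, proved right next to it and checked by the compiler;
-- if `double` changes, this proof obligation fails at build time.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

Real verified software (seL4, HACL*, CompCert) follows this same pattern at vastly larger scale.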
I do have faith the state of software (and society) will improve, but whether that future is compatible with the rent-seeking hyper-capitalist reality Silicon Valley and Wall Street have synthesized is yet to be seen.
> Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.
Believing this feels incredibly unwise to me. I think it's going to do more damage than the AI itself will.
To any impressionable students reading this: the most valuable and important thing you can learn will be to think critically and communicate well. No AI can take it away from you, and the more powerful AI gets, the more you will be able to harness its potential. Don't let these people saying this shit discourage you from building a good life.
I have heard some form of this advice for over 30 years. Not one single penny I have earned in my career came from my critical thinking. It came from someone taking a big financial risk with the hope that they would come out ahead. In fact, I've had jobs that actively discouraged critical thinking. I have also been told that the advice to think critically wasn't meant for me.
I can't help but wonder whether the person who told you the advice "to think critically wasn't meant for [you]" didn't have YOUR best interests at heart, and/or wasn't a wise person.
I also worked jobs where I was actively discouraged from thinking critically. Those jobs made me itchy and I moved on. Every time I did, it was one step back, three steps forward. My career has been a weird zigzag like that but has trended up exponentially over 25 years.
We all have our anecdotes we can share. But ask yourself this: if you get better at making decisions and communicating with other people, who is that most likely to benefit?
/s if not obvious
This. Just thinking that those with power would even allow that leveling seems on the verge of impossible. In a sense, you can already see it in practice. Online models are carefully 'made safe' (neutered is my preferred term), while online inference is increasingly more expensive.
And that does not even account for whether the 'bozo' will be able to use the tool right, because an expert with a tool will still beat a non-expert.
It is a brain race. It may differ in details, but the shape remains very much the same.
The author is describing it, not necessarily endorsing it.
But whether they really believe this or not, the point is that most wouldn't be given any opportunity to "harness its potential", whether they're "obscenely talented giga-nerds" or not, because they'd be economically redundant.
Imagination knows no negation.
The first time an LLM solves a truly significant, longstanding problem without help is when we will know we are at AGI.
Maybe if you read past these paragraphs it would have been clearer?
I genuinely like the author's style (not in the quote above; it's here for a different reason). It paints a picture in a way that I still am unable to. I suck at stories.
Anyway, back to the quote. If that is true, then we are in a pickle. Claw and its security issues are just a symptom of that 'break things' spirit. And yes, this has been true for a while, but we keep increasing both in terms of speed and scale. I am not sure what the breaking point is, but at a certain point the real world may balk.
Yes, sometimes people who barrel forward can create a mess, and there are places where careful deliberation and planning really pay off, but in most cases, my observation has been that the "do-ers" produce a lot of good work, letting the structure of the problem space reveal itself as they go along and adapting as needed, without getting hung up on academic purity or aesthetically perfect code; in contrast, some others can fall into pathological over-thinking and over-planning, slowing down the team with nitpicks that don't ultimately matter, demanding to know what your contingencies are for x y z and w without accepting "I'll figure it out when or if any of those actually happen" - meanwhile their own output is much slower, and while it may be more likely to work according to their own plan the first time without bugs, it wasn't worth the extra time compared to the first approach. It's premature optimization but applied to the whole development process instead of just a piece of code.
I think the over-thinkers are more prone to shun AI because they can't be sure that every line of code was done exactly how they would do it, and they see (perhaps an unwarranted) value in everything being structured according to a perfect human-approved plan and within their full understanding; I do plan out the important parts of my architecture to a degree before starting, and that's a large part of my job as a lead/architect, but overall I find the most value in the do-er approach I described, which AI is fantastic at helping iterate on. I don't feel like I'm committing some philosophical sin when it makes some module as a blackbox and it works without me carefully combing through it - the important part is that it works without blowing up resource usage and I can move on to the next thing.
The way the interviewed person described fast iteration with feedback has always been how I learned best - I had a lot of fun and foundational learning playing with the (then-brand-new) HTML5 stuff like making games on canvas elements and using 3D rendering libraries. And this results in a lot of learning by osmosis, and I can confirm that's also the case using AI to iterate on something you're unfamiliar with - shaders in my example very recently. Starting off with a fully working shader that did most of the cool things I wanted it to do, generated by a prompt, was super cool and motivating to me - and then as I iterated on it and incorporated different things into it, with or without the AI, I learned a lot about shaders.
Overall, I don't think the author's appraisal is entirely wrong, but the result isn't necessarily a bad thing - motivation to accomplish things has always been the most important factor, and now other factors are somewhat diminished while the motivation factor is amplified. Intelligence and expertise can't be discounted, but the importance of front-loading them can easily be overstated.
I recently traveled to San Francisco and as an outsider this was pretty much the reaction I had.
(On the other hand, in DC there are ads on the metro for new engine upgrades for fighter jets, and I've gotten used to that.)
I do get that it is not nice to be constantly reminded of work. Trees would make a nicer view.
Witnessed this first hand on the train the other day. A woman on her laptop. On the left half of the screen, Microsoft Word. On the right, ChatGPT. Text being dragged directly from one to the other.
I'm not sure how to feel about the fact that people with useless bullshit jobs have found a way to become even more useless than they already were before. It's impressive, in a way.
Why wouldn't investors give these people money? It's not like being an investor implies having morals; all they care about is making money, whether it's legal or not, and luckily for them, crime not only pays but is legal now too.
And of course, there's no downside for the investors. If you backed a con artist, you're not culpable - you're a victim.
Most VCs have no idea how to accurately judge startups on their core merits, or how to make good decisions about startups (though they may think they do), so instead they focus on things like "will this founder be able to hype up this startup and sell the next round so I can mark it up on my books".
I can believe that. But just a couple of years ago it was clearly happening because the VCs wanted those people to sell the companies to some mark and return real money to them. I wonder when the investors became the marks.
Linux gets some fame and recognition; meanwhile OpenBSD and FreeBSD are the ones that power routers, CDNs and so much other cool shit, while also being legit good systems that even deserve attention for the desktop.
What these dialects of the Unix operating system lacked was a licence that would have ensured their success.
Linux won in the end as much from its copyleft licence as from its development methodology or personalities involved.
In SF though, it’s as if the previous culture of the place has just been overwritten entirely. Hard to believe that it’s the same city which Kerouac, the Beats or Hippies ran around in. Or even the historically wealthy but cultural old money class, like Lewis Lapham’s family, or Michael Douglas’s character in The Game. Nope, all gone, and certainly no one there has ever read On the Road.
I suppose you could probably just blame this on how the people at the top behave: totally uninterested in funding culture, unlike the billionaires of yesteryear that built concert halls and libraries. And so a city which is hyper focused on one economic activity has no space for anything else.
https://monoskop.org/images/d/dc/Barbrook_Richard_Cameron_An...
Today's Bay Area has a direct lineage to all of that. Blank Space by W. David Marx does a great job of explaining how the post-2000 parts happened.
https://www.amazon.com/dp/B0DXMVK94H
It's all part of the same long, strange trip.
To be fair to Jack Kerouac, I was young when I read it but even at my advanced age I don't think I want to reread it.
Also, the old hippie culture sort of moved out of SF and into the surrounding bay, I think especially toward East Bay.
But if you're immersed in the modern tech world, you're just ignoring all that.
So you're saying migration changes a society's culture, sometimes to the point of ruination?
There was a high-profile example of this phenomenon recently in NYC, where a 35yo nobody managed to win the mayoral election with fake smiles and empty promises, because 40% of the city is now foreign-born. Had only native-born Americans (not even just those born in NYC) voted, he would have lost.
And it was telling how differently his opponents presented themselves, emphasizing, in their dying outer-borough accents, their "toughness" - an attribute once thought essential for the mayor of America's largest city to possess, especially for anyone with a memory of the city before (and during) 9/11. Now? Apparently superfluous. And the victor's ever-present smile, which the city's voters might once have perceived at best as phony and at worst as more befitting one of the city's countless mentally ill transients, was instead unexpectedly found endearing.
Ignorant on so many levels, I truly feel sorry for people who have been brainwashed by their media to think so uncritically.
And why does it matter in any way whatsoever what would have happened if immigrants who gained citizenship couldn't vote? They can vote, and did. So? That is about as relevant as the observation that if Mamdani wouldn't have won if he ran for mayor of Tampa. So? What's the point? I'm truly curious.
America is a multiracial democracy fueled by waves of immigration, NYC especially. Those people live there and are citizens. What's your point?
What has changed the city's culture is money. As mentioned in the article, virtually every billboard and advertising surface downtown is for some SAAS or B2B company. Every startup that gets capitalized dumps a load of money into saturation advertising making itself look like the new hotness, and the corresponding rise in advertising prices means nothing is advertised but tech and ways to make money with tech. A lot of the adverts even look the same.
That's not the product of migrants. SF is turning into a ghost town because the entire downtown area increasingly feels like the inside of a conference center. There isn't anything fun to do or places to go besides work, nothing that might appeal to youth, nothing that isn't business focused. Can you imagine being a teenager in SF? You go to the middle of town and every advert is just an elevator pitch for HR services or devops or model training, and most of them aren't even visually interesting to look at. Entire subway stations are taken over with adverts touting how agentic or accelerant some new brand is. It's boring. A Japanese acquaintance of mine who visited SF recently asked, 'Don't people here think about anything but work?'
How you ended up blaming this humanity-free environment on 'too many migrants' is beyond me.
This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: today, soc 2 is done before your ai girlfriend breaks up with you. it’s done in delve. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did.
I call this the Lockheed Effect. In Washington, D.C., Lockheed Martin runs advertisements in the subways for the F-35 Joint Strike Fighter. Most of the people on those subways are not in the market for a fighter jet, but the advertisement isn't for them. It's for the general making purchasing recommendations or the congressperson promoting the appropriations bill that will allocate funds for the jets. They will be on that train and see the ad, and they might be swayed by it, and they are one of but a handful of people whose decisions can result in billions in jet plane sales, and that's what counts in terms of whether the ad does its job.
Clueless.
Fat was demonized to push sugar. "Protein" was then pushed because you can just load up stuff like "protein bars" with sugar.
Historical aristocracy were defined by eating meat, while their subjects ate grain. "Beef" for the Normans, "cows" raised and slaughtered by the Anglo-Saxons.
San Francisco is a tolerant place. Tolerance is how you get Juicero or Theranos and whatever Cluely seems to have pivoted to, but it’s also how you get Twitter, Uber, Dropbox, and thousands of others.
So it is crucial to consider proportionality. Taking some bad with some good results in getting a little bit of bad and a hell of a lot of good. But if you aren’t careful, all you’ll see is the bad.
It felt like the author was punching down, too. This Cluely founder seems largely unsuccessful and, as the boat guy says at the end, just a kid. A chud of a kid, but a kid nonetheless.
I do have a deep fondness for SF billboards being building-stuff oriented. I don't care for consumerism.
The vapidity of the products created is remarkable, however.
Basically: nobody wants AI, but soon everyone needs AI to sort through all the garbage being generated by AI. Eventually you spend so much time managing your AI that you have no time for anything else, your town has built extra power generators just to support all the AI, and your stuff is more disorganized than before AI was ever invented.
Anyone familiar with what work this is referring to?
In general long meandering semi-factual pieces like this, with odd historical excursions, are one of his things and I don't know anyone else that does it quite the same. (Hmm... oddly enough Scott Alexander, who he cites here, also does some similarly Borgesian stuff, but with a different bent.) One of my favorite writers and I recommend pretty much everything he's done since the early 2010s.
But in general, Sam Kriss tends to weave fiction and nonfiction together in his writing.
https://open.substack.com/pub/samkriss/p/numb-at-burning-man
There is a red line and it is AI. People viscerally hate it and pushing it will just make people question whether they need computers or the Internet at all (hint, they do not).
CEOs feel validated by the mediocre psychopath parts of their developers, who always push the latest fad in order to gain an advantage and control better developers. Fads generally last about two years, and this is it.
It will be very gratifying if the AI hubris is Silicon Valley's downfall and completely needlessly ruins the industry just because the same CEOs who read a couple of science fiction books and had rocket envy now have AI envy.
I'm not sure I can trust the author's characterization of Roy, though. I got the impression that they don't like any of the people they interviewed (which, you know, fair), but that doesn't get even close to the depths of hatred towards Roy that they sub-textually exude throughout the article.
If their portrayal is even half accurate, though, that's a perfectly reasonable amount of hate.
For a longer and more biting critique of SF one should read
Private Citizens (2016) by Tony Tulathimutte
“Capturing the anxious, self-aware mood of young college grads in the aughts, Private Citizens embraces the contradictions of our new century: call it a loving satire.”
I think the "agency" the article talks about is really just "willingness to take risks". And the reason some people are high outliers on that scale is a combination of:
* Coming from such a level of privilege that they will be completely fine even if they lose over and over again.
* Willingness to push any losses onto other undeserving people without experiencing guilt.
* A psychological compulsion towards impulsive behavior and inability to think about long-term consequences.
In short, rich selfish sociopaths.
Some amount of risk-taking is necessary for innovation. But the level we are seeing today is clearly unsustainable and destructive to the fabric of society. It's the difference between confining a series of little bangs to produce an internal combustion engine versus just throwing hand grenades around the public square. The willingness to take chances needs to be surrounded by a structure that minimizes the blast radius of failure.
To be a little more generous, this third point is actually a classic symptom of ADHD. I've known some (non-CEO) folks like this and the kind of risks they take in their personal lives seemed completely alien to me.
If there's nothing but upside to enterprising, and less opportunity-cost (vs subsistence), we might see some innovative or very strange things.
Unsurprisingly, people are more willing to try starting a business if doing so and having it fail doesn't mean you might lose access to healthcare and die from an easily curable malady.
JFC kill me now that is NOT a future I want to live in.
With the corpse of meritocracy too rotted to deny at this point, the elite simply seem to have run out of lies for placating the people.
Or, more likely, the people are so sickeningly impotent that there’s no need for the lies anymore. The new aristocracy will prevail over liberalism and everything the West lied for years about being part of its values.
However, I think we are entering an age of geopolitical chaos. And that will be a Darwinian struggle between functioning governance systems.
“If we are to have another contest in the near future of our national existence, I predict that the dividing line will not be Mason and Dixon's but between patriotism and intelligence on the one side, and superstition, ambition and ignorance on the other.” ― Ulysses S. Grant
It's weird that homo sapiens sapiens has been around for approximately 300,000 years and it's never happened once. Not even once.
Now consider Reddit.
On r/hacking people tend to understand the danger of mindlessness and support war against it: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
In contrast, r/programming is full of, let's call them "bot-heads", who are all-in on mindlessness: https://www.reddit.com/r/programming/comments/1r8oxt9/poison...
A project that you spam in every one of your comments.
Poison Fountain is top of mind currently, so it's understandable that I talk about it constantly. Even to my wife. Also, I think it's highly relevant to the excellent Harper's article we're reading today.
Whether the Redditors "like the project or not" reflects whether or not they think there is a problem with mindlessness.
What they actually say is almost immaterial. Either it's FUD about malware or illegality or something they imagined without evidence about how easy the poison is to filter. These fictions are just a manifestation of their opposition to the idea.
You can see that among the bot-heads on r/programming (perhaps forced to embrace mindlessness by career considerations) there's nothing that can be said without attack. A dozen downvotes immediately. They actually logged into Hacker News and posted FUD directly to the HN post I linked to. Spectacular.
The opposite is true on r/hacking. Except for a few in opposition (some of whom did unsuccessfully attempt to DDoS the fountain), most people sympathize and agree. They don't want to be dependent on Sam Altman or Elon Musk for their cognition.
The generation of code and images fits right into this; the famous, historical "astronaut on a horse" is, in substance, a collage of images, images produced by other humans and "assembled".
On a broader scale, this means that humanity will more or less be able to count on Conrad Gessner's Universal Library/Biblioteca Universalis/Library of Babel, and generally speaking, we can aim for a future where humans produce knowledge and machines put it into practice. Like any evolution, this will lead to some losses while gaining something else.
The current explosion is mostly hype and a nazi-managerial wet dream; as for universities, the reality is that they are largely obsolete, so it's only natural that students, rather than seeking knowledge, which is of little use to them as it's disconnected from the present, are just looking for a piece of paper with which to build a career.
I'm glad I went to school when people learned how to think.
A 2-cycle ouroboros. Man-machine-man-etc. Consuming each-other's secretions. Forever.
Answering “agentic” is the most “mimetic” answer you could give.
The most “agentic” response is probably “Fuck you”.
I'm not sure how many AI researchers would find this accurate. It seems to me that under conditions of ambiguity people often default to describing their preferred version of reality.
The way he understands and captures the dynamics makes you think he's a native to the "bay area" tech scene or immersed in TPOT. Yet here's a complete outsider, pinpointing the unstated core premises and paradoxes of these communities.
Silicon Valley has been a parody of itself for a long time now.
Tangential, but this sounds an awful lot like Disgustipated (‘The Cries of the Carrots’) a ‘hidden’ song on the Tool album Undertow, including the exaltation part: the narrator of the song is a preacher.
Most regulations achieve exactly the opposite of what they claim.
Noticed this during the crypto hype as well and the articles about SBF-and-friends' Bahamas lifestyle. Are there more "startups" that feel more like VC-funded frat houses than actual businesses?
What people really think about Silicon Valley. Not so fun to devalue people now, is it? Tech is the biggest group of assholes.
Billionaire fortunes have grown at a rate three times faster than the previous five years since the election of Donald Trump in November 2024. While US billionaires have seen the sharpest growth in their fortunes, billionaires in the rest of the world have also seen double digit increases. The number of billionaires has surpassed 3,000 for the first time, and the level of billionaire wealth is now higher than at any time in history. Meanwhile, one in four people globally face hunger. https://www.oxfam.org/en/resisting-rule-rich
And I believe this is useful and thought-provoking reading in this context of how unbridled Capitalism is exacerbating the divide between the rich and the poor, the haves and have nots.
Wage slavery: The illusion of freedom: Exploitation Under Capitalism: Marx’s Analysis of Labor and Profit:
https://philosophy.institute/social-political/exploitation-u...
https://davidlingenfelter.substack.com/p/the-normalization-o...
And no, the solution to the problems are not blind unchecked communism (which itself leads to fascism), but perhaps some more ethical & humane methods are needed for an overhaul of world society, and economic & geopolitical regimes.
Sucks to be a wordcel. The school yard bullies won.
This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: today, soc 2 is done before your ai girlfriend breaks up with you. it’s done in delve. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers.
I'm fascinated by Hacker News's etiquette, both explicit and implicit, which holds that 10,000 words of turgid prose reeking of dismissiveness and contempt ("Rationalists, like termites, live in eusocial mounds.") are valuable, but your curt dismissal of it is rude.