The authors then break down the mechanisms by which AI achieves these outcomes (which seem quite reductive and dated compared to the frontier; for example, they take it as granted that AI cannot be creative, that it can only work prospectively and can't react to new situations and events, etc.), as well as exemplifying those mechanisms already at work in a few areas like journalism and academia.
Not that I think there is a lot of thinking going on now anyway, thanks to our beloved smartphones.
But just think about a time when human ability to reason has atrophied globally. AI might even give us true Idiocracy!
No sir, there was nothing there to begin with - if you read recent history, you'll see that it's full of stupidity, and a few rabble rousers leading entire nations by the nose.
With the mollification of the smartphone, we've merely taken the edge off this killing machine.
I had a wake-up call on this yesterday. After a recent HN thread about the Zed editor, I decided to give it another try, so I loaded it up, disabled AI, and tried writing some code from scratch. No AI completion, no IntelliSense. Two things came to mind. First, my editor seems so much more peaceful without being told what to do. Second, it was a bit scary how lost I felt. It was obvious that my own ability to communicate through code had declined a bit since I began using AI coding assistants. It turns out that, as expected, coding assistants really are competitive cognitive artifacts. After that experience, I've decided that I am going to do at least part of my coding with all completions turned off. Unfortunately, at work you are paid to produce quickly, so I think my AI-free editor will have to be reserved for personal projects.
Further related to your statement about thought, the hallucinations persist, and even last night I got a response about 80's pop culture that was over 50% bullshit. Just imagine what intentional persuasion through LLM models will do to society. Independent thought has never been more important.
Regulations do more harm than letting people learn from their mistakes.
It's like for some reason we thought that some good percentage of us aren't just tribal worker drones who fundamentally just want fats, sugars, salts, dopamine, and serotonin. People actively vote against things like UBI, higher corporate taxes, and making utilities public. People actively choose to believe misinformation because it suits their own personal tribal narratives.
SSRN is where most draft law review/journal articles are published, which may be the source of confusion.
For most other fields, it is a source of draft/published science papers, but for law, it's pretty much any kind of article that is going to show up in a law review/journal.
From what you're saying, it seems that for an insider this is clear. I guess that makes more sense, then.
I guess I'll start with this: calling two well-known law professors "$500 an hour nitpickers", when they have been professors for 15+ years (20+ in Jessica's case) and earn nothing close to $500 an hour, is not a great start?
I don't know if they are nitpickers; I've never taken their classes :)
Also, this is an op-ed, not a science paper. Which you'd know if you had bothered to read it at all.
You say elsewhere you didn't bother to read anything other than the abstract, because "you didn't need to", so besides being a totally uninformed opinion, complaining about something else being speculation when you are literally speculating on the contents of the paper is pretty ironic.
I also find it amazingly humorous given that Jessica's previous papers on IP have been celebrated by HN, in part because she roughly believes copyright/patents as they currently exist are glorified BS that doesn't help anything, and has written many papers as to why :)
1. It is entirely based on speculation of what is going to happen in the future.
2. The authors have a clear financial (and status based) interest in the outcome.
3. I have a negative opinion of lawyers and universities due to personal experience. (This is, of course, the weakest point by far.)
Speculation on future outcomes is not by itself a bad thing, but when that speculation is formatted like a scientific paper describing an experimental result, I immediately feel I am being manipulated by appeal to authority. And the conflict of interest of the authors is about as irrelevant as pointing out that a paper on why Oxycodone is not addictive was paid for by Purdue Pharma. Perhaps Jessica's papers on IP are respected because they do not suffer from these obvious flaws? I owe the author no deference for the quality of her previous writing nor for her status as a professor.
On the other hand, when countries feel the need to legislate new laws requiring documents to be written in plain, understandable language, one doesn't need to be an expert to suspect there was systemic rot in those industries. It is entirely valid to cry foul when even a parliament is concerned about being able to read texts produced at $500/h.
https://www.congress.gov/bill/111th-congress/house-bill/946
https://www.legislation.govt.nz/act/public/2022/0054/latest/...
2. Get owned out of court because you couldn't afford the $100K (minimum) that you have to pay to the lawyer's cartel to even be able to make your argument in front of a judge.
I'll take number 1. At least you have a fighting chance. And it's only going to get better. LLMs today are the worst they will ever be, whereas the lawyer's cartel rarely gets better and never cuts its prices.
You can criticise the hourly cost of lawyers all you like, and it should be a beautiful demonstration to people like you that no, "high costs mean more people go into the profession and lower the costs" is not and has never been a reality. But to think that any AI could ever be efficient in a system such as common law, the most batshit insane, inefficient, "rhetoric matters more than logic" system, is delusional.
Did you read the paper?
when one of the juniors makes a mistake, i can talk to them about it and help them understand where they went wrong, if they continue to make mistakes we can change their position to something more suited for them. we can always let them go if they have too much hubris to learn.
who do we hold to account when a model makes a mistake? we're already beginning to see, after major fuckups, companies blackholing accountability into "not our fault, don't look at us, the ai was wrong"
the other thing is, if you have done a good job selecting your team, you'll have people who understand their limits, who understand when to ask for help, who understand when they don't know something. a major problem with current models is that they will always just guess or reach for something random rather than halt.
so yes, people will make mistakes, but at least you can count on being able to mitigate them afterward.
First, we stop anthropomorphising the program as capable of making a "mistake". We recognise it merely as a machine providing incorrect output, so we see that the only mistake was made by the human who chose to rely upon it.
The courts so far agree. Judges are punishing the gulled lawyers rather than their faux-intelligent tools.
https://www.pewresearch.org/politics/2025/12/04/public-trust...
In the 1960s, trust in institutions was around 70%.
In 2025 it's about 17%.
It's not as if trust sat at 70% until 2023 and then AI suddenly dropped it. If anything, AI doesn't even register on the graph.
So the correlation here is practically non-existent. Gallup and Pew show similar trends for journalists and universities.
You don't get to blame AI for this.
Interesting bump and question. How did Clinton improve the reputation and Bush then destroy it? Or is that a false hump?
I am not sure if I am off-topic, but I am having a lot of trouble with this statement. Institutions are often opaque, and I have never belonged to an institution that empowered me to "take intellectual risks and challenge the status quo." Quite the contrary.
Recently, it so happened that I spent an hour reverse engineering and documenting a piece of a system. A co-worker asked a LLM to do the same. It generated some really nice documentation.
The difference is, I (as a team member) now have the understanding. Generating the documentation does not increase the understanding of the team.
I haven’t read through the whole thing yet, but so far the parts of the argument I can pull out are about how Institutions actually work, as in a collection of humans. AI, as it currently stands, interacts with humans themselves in ways that hollow out the kind of behavior we want from institutions.
“Perhaps if human nature were a little less vulnerable to the siren’s call of shortcuts, then AI could achieve the potential its creators envisioned for it. But that is not the world we live in. Short-term political and financial incentives amplify the worst aspects of AI systems, including domination of human will, abrogation of accountability, delegation of responsibility, and obfuscation of knowledge and control”
An analogy I find increasingly useful is that of someone using a forklift to lift weights at the gym. There is an observable tendency, when using LLMs, to cede agency entirely to the machine.
I can't see how, given that the potential is 99% shortcuts.
> They (AI systems) delegitimize knowledge, inhibit cognitive development, short-circuit long term thinking processes, and isolate humans by displacing or degrading long term human connection.
This is a pretty good summary of the worry I’ve seen expressed about extensive use of AI to build large pieces of software. Large pieces of software aren’t just the code that describes them. They exist in some sense in their authors as well.
Effective projects seek to expand understanding of software systems across the organization so that relevant decisions can be made about their future. By relegating much of the decision making around structure of the software to AI you lose the systemic knowledge shared across the organization.
How Generative AI is destroying society https://garymarcus.substack.com/p/how-generative-ai-is-destr... (https://news.ycombinator.com/item?id=46616713)
Similarly, today's critics, often from within the very institutions they defend, frame AI as a threat to "expertise" and "civic life" when in reality, they fear it as a threat to their own status as the sole arbiters of truth. Their resistance is less a principled defense of democracy and more a desperate attempt to protect a crumbling monopoly on knowledge.
More like a war on traditional, human-based knowledge, waged by people who believe that, by cornering the world's supply of RAM, SSDs, GPUs, and whatnot, they can achieve their own monopoly on knowledge under the pretense of liberating it. Note that running your own LLM becomes impossible if you can no longer afford the hardware to run it on.
Cryptocurrency makers can impose artificial limits, but no amount of limiting access to gpt-next will cut off access to good-enough models.
We just have to start printing our own money and buying us some pocket armies and puppet politicians first.
Not all of them, but given that the same questionable or outright false assumptions (e.g. that AI companies are doing inference at a loss, the exaggerated water consumption numbers, etc.) keep getting repeated on YouTube, Reddit, and even HN, where the user base is far more tech-savvy than the general population, I think misinformation is the primary reason.
The linguists who call AI a "stochastic parrot" are the perfect example. Their panic isn't for the public good; it's the existential terror of seeing a machine master language without needing their decades of grammatical theory. They are watching their entire intellectual paradigm—their very claim to authority—be rendered obsolete.
This isn't a grassroots movement. It's an immune response from the cognitive elite, desperately trying to delegitimize a technology that threatens to trivialize their expertise. They aren't defending society; they're defending their status.
Martin Luther used it to spread his influence extremely quickly for example. Similarly, the clergy used new innovations in book layout and writing to spread Christianity across Europe a thousand years before that.
What is weird about LLMs though, is that it isn't a simple catalyst of human labor. The printing press or the internet can be used to spread information quickly that you have previously compiled or created. These technologies both have a democratizing effect and have objectively created new opportunities.
But LLMs are to some degree parasitical to human labor. I feel like their centralizing effect is stronger than their democratizing one.
Companies are currently shifting away from enhancing their employees' productivity by giving them access to LLMs; instead, they are starting to offshore to lower-cost countries and give the cheap labor LLMs to bypass language and quality barriers. The position isn't lost, it's just moving somewhere else.
In the field of software development this won't be the anxiety of an elite, or a threat to expertise or status, but rather a direct blow to livelihoods when people aren't hired and lose access to the economy until they retrain for a different field. So on top of that you can argue about authority and control, but the anxiety is largely driven by economic factors.
In that sense, doesn't all knowledge work have a monopoly on knowledge? The entire point of having experts in a field is that they know the details and have the experience, so that things can be done as expected, since few people have the time or the capability to get into the critical details.
If you believe there is any goodwill when you centralize that knowledge in the hands of even fewer people, you reproduce the same pattern you are complaining about, especially given how businesses are tweaking their margins. It really is a force multiplier and an equalizer, but it is a tool that can be used in good or bad ways depending on how you look at it.
It absolutely makes sense to criticize the author's background.
It is hard, though. When someone makes an extraordinary claim, I feel the urge to look them up. It is a shortcut to assessing the legitimacy of that claim.
Before the printing press, only the clergy could "identify" witches, but the printing press "democratized knowledge" of witch identification at a larger scale.
The algorithmic version of "It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so" is going to cause huge trouble in the short and medium term.
So is the AI better?
No. It's quicker, easier, more seductive.
the church did the thinking for the peasants. the church decided what the peasants heard, etc… this is moving absolutely in that direction.
the models now do the thinking for us, the ai companies decide what we get to see, these companies decide how much we pay to access it. this is the future.
It's not so simple that we can say "printing press good, nobody speak ill of the printing press."
I disagree. The backbone of democratic life is the rule of law and freedom of speech, which makes a big difference. The press has historically been a counter-power, inquiring into privileges and breaches of the rule of law and thus promoting freedom of speech, but almost only inasmuch as it served the interests of the emerging merchant bourgeoisie. And we are long past that. Universities have never been liberal forces: they backed the Church and refused paradigm shifts. They are still very conservative, though in a peculiar sense, as leftist conservatives.
I'm in the UK and I don't think any institutions have been destroyed or even noticeably harmed by AI. In the US there is general chaos under Trump so it may be hard to differentiate.
Stopped reading here, as these people still believe in that fairytale of theirs.
Later edit: For good measure, Zuckerberg, too [2]
This affordability is HEAVILY subsidized by billionaires who want to destroy institutions for selfish and ideological reasons.
Every large enough corporation wants to become the new Oracle.
(By the way, are you confusing affordance, the UX concept, with affordability?)
ZIRP, Covid, Anti-nuclear power, immigration crisis across the west, debt enslavement of future generations to buy votes, socializing losses and privatizing gains... Nancy is a better investor than Warren.
I am not defending billionaires, the vast majority of them are grifting scum. But to put this at their feet is not the right level of analysis when the institutions themselves are actively working to undermine the populace for the benefit of those that are supposed to be stewards of said institutions.
Working as intended. WONTFIX.