It is a horrible and ruthless company, and hearing a presumably rich ex-employee paint a rosy picture does not change anything.
I dissented while I was there, had millions in equity on the line, and left without it.
Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?
Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.
This is far too binary IMO. Yeah, the higher the personal stakes, the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.
How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!
What is heartening about hearing a liar who makes provocative statements all the time make another one?
Those are two core components needed for a Skynet-style judgement of humanity.
Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.
The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.
The proper response from an LLM being told it's going to be shut down, is simply, "ok."
I'm not sure if I intended this to be facetious or serious.
Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?
Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?
I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.
Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.
What do you suppose he should do if that’s what he thinks is going to happen?
And how do you know he’s not bothered by it at all?
There is no defence of morality behind which AIbros can hide.
The only reason Anthropic doesn't want the US military to have humans out of the loop is that they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.
Also, the genie is well and truly out of the bottle. If Anthropic shut down tomorrow and lit everything they had produced on fire, Amazon, Microsoft, China, everyone would continue where they left off.
None of this means I am a huge fan of Dario. I think he over-idealizes the implementation of democratic ideals in Western countries and is unhealthily obsessed with the US "winning" over China based on this. But I don't like the reasons you listed.
When has Amodei said this? I think he may have given a timeline of 1–5 years, but I don't think he's said within 6 months.
Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?
Amodei's noise is little more than half-hearted advertising, even if it's not intended to have that reading (although who can even tell at this point). His newsroom publishes a report on a mass-scale data breach perpetrated using their model, with conclusions delivered in a demonstrably detached, almost casual tone: yeah, the world is like this now, but it's a good thing we have Claude to protect you from Claude, so you'd better start using Claude before Claude gets you. They released a new, more powerful Claude immediately after that breach. No public discussion, nothing. This is not the behavior of people who are bothered by it.
Fantastic take.
I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.
I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.
Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.
I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.
Also Dario Amodei: seeks investment from authoritarian Gulf states, makes deals with Palantir, willingly empowers the "department of war" of a country repeatedly threatening to invade an actual democracy (Greenland), proactively gives the green light to usage of Claude for surveillance on non-Americans.
Yeah, I don't know what your definition of "care" is but mine isn't that, clearly. You might want to reassess that. Care implies taking action to prevent the outcome, not help it come sooner.
The problem with counterfactual arguments like yours is that they frame the problem as a false dichotomy to smuggle in an ethically questionable line of decisions that somebody has made and keeps making. If you deliberately frame this as "everybody does this", it conveniently absolves bad actors of any individual responsibility and leads the discussion away from assuming that responsibility and acting on it, toward accepting this sorry state of events as some sort of predetermined outcome, which it certainly is not.
Before I say anything else, I want you to know that I definitely don’t want to box anyone in with false dichotomies. I don’t think any of my arguments rely on them.
I’m not asking that you anchor on any one counterfactual exclusively. If you don’t like my counterfactual, reframe it and offer up others. I’m not a “one model to rule them all” kind of person.
If one of your big takeaways is we should keep our eyes open and not put anyone on a pedestal, I agree.
At present, my general prior is that Amodei is probably the best of the bunch. This is a complex assessment, and unpacking it might require gigabytes or even petabytes of experience. (I know that is a weird and unusual way to put it, but I like to highlight just how different people's experiences can be.)
I am definitely uncomfortable with Palantir. Are you suggesting that Anthropic is differentially worse compared to other AI labs? Are you suggesting the other labs would do better if they were in Anthropic’s position?
If you don’t like the way I framed these questions, I suspect we have different philosophical underpinnings.
You might be aware that you're implicitly referencing deontological ethics (DE). I'm familiar with and receptive to many DE arguments. Overall, I'm not settled on where I land, but roughly my current take is this: for individuals with limited information and/or highly constrained computational resources, DE is generally a safe bet. It is probably a decent way to organize individuals together into a society of low to moderate complexity.
But for high-stakes decisions, especially at the organizational level and definitely the governmental level, I think consequentialism provides a better framework. It is less stable in a sense. Consequentialist ethics (CE) is kind of a meta-framework, because one still has to choose a time horizon, discount rate, computational budget, evaluation function, etc. It is rather complicated, as anyone who has tried to build a reinforcement learning environment will know.
I fully grant that CE will admit a pretty wide range of concrete ethics (because the hyperparameter space is large). Some can even be horrific, so I don't universally endorse CE. But done within sensible bounds, I think CE is one of the most powerful and resilient ethical frameworks for powerful agents dealing with a complex world.
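To make the hyperparameter point concrete, here is a toy sketch in Python. Everything in it (the horizon, discount rate, outcome stream, and utility function) is an illustrative invention of mine, not anyone's actual ethical model:

    # Toy sketch of a consequentialist "evaluation" as a discounted sum
    # of outcomes, in the style of an RL return. The horizon, discount
    # rate, and utility function are the hyperparameters in question;
    # every value here is illustrative, not a real ethical framework.
    def evaluate_policy(outcomes, horizon=10, discount=0.95,
                        utility=lambda x: x):
        """Discounted sum of utilities over a finite time horizon."""
        return sum((discount ** t) * utility(o)
                   for t, o in enumerate(outcomes[:horizon]))

    outcomes = [1.0, -0.5, 2.0, 0.0, 3.0]

    # The same stream of outcomes scores very differently under
    # different hyperparameter choices:
    print(evaluate_policy(outcomes, discount=0.99))  # long-termist weighting
    print(evaluate_policy(outcomes, discount=0.50))  # heavy near-term bias
    print(evaluate_policy(outcomes, horizon=2))      # short time horizon

Small changes to the discount or horizon can flip which course of action scores best, which is exactly the sense in which CE admits a wide range of concrete ethics.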
DE feels ok in the short run in areas where people have strong inculcated senses of right and wrong. But I would not trust it to keep the human race alive through rapid periods of change like we’re facing.
To be blunt, deontological ethics just cannot survive contact with modern geopolitics and AI risk. This is why I don’t put much stock in the kind of arguments that merely single out actions that don’t look good in isolation.
Easy way to undermine the rest of your comment.
Anthropic never acknowledges that they are fear-mongering about the incoming mass-scale job loss while being the one at the forefront of the rush to realize it.
So make no mistake: it is absolutely a zero sum game between you and Anthropic.
To people like Dario, the elimination of the programmer job isn't something to worry about; it is a cruel marketing ploy.
They get so much money from Saudi Arabia and other Gulf countries; maybe this is taking authoritarian money as charity to enrich democracy. You never know.
Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?
Essentially they will not stop at all, because even they know no one can stop the competition from happening.
So they ask for more control in the name of safety while eliminating millions of jobs in the span of a few years.
And I have to ask: how can the party posing the biggest risk of a potential collapse of our economy be trusted as the one to do it safely? They will do it anyway, and blame capitalism for it.