Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.
Anthropic just raised $30bn... OpenAI wants to raise $100bn+.
Thinking any of them will actually be restrained by ethics is foolish.
The 'boy (or girl) who cried wolf' isn't just a story. It's a lesson both for the person who cries and for the village that hears them.
Global Warming, Invasion, Impunity, and yes, Inequality
https://x.com/MrinankSharma/status/2020881722003583421
A slightly longer quote:
> The world is in peril. And not just from AI, or from bioweapons, but from a whole series of interconnected crises unfolding at this very moment.
In a footnote he refers to the "poly-crisis."
There are all sorts of things one might decide to do in response, including getting more involved in US politics, working more on climate change, or working on other existential risks.
Claude invented something completely nonsensical:
> This is a classic upside-down cup trick! The cup is designed to be flipped — you drink from it by turning it upside down, which makes the sealed end the bottom and the open end the top. Once flipped, it functions just like a normal cup. *The sealed "top" prevents it from spilling while it's in its resting position, but the moment you flip it, you can drink normally from the open end.*
Emphasis mine.
I can't really take this very seriously without seeing a list of these ostensibly "unethical" things that Anthropic models will allow and other providers won't.
Bring on the cryptocore.
Thanks for the successful pitch. I am seriously considering them now.
That's why I have a functioning brain, to discern between ethical and unethical, among other things.
It's more like a hammer which makes its own independent evaluation of the ethics of every project you seek to use it on, and refuses to work whenever it judges against that – sometimes inscrutably or for obviously poor reasons.
If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.
I think the two of you might be using different meanings of the word "safety".
You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.
The other meaning of "safety" is alignment: the AI does what you want it to do (subtly different from "does what it's told").
I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.
So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.
a) Uncensored and simple technology for all humans; that's our birthright and what makes us special and interesting creatures. It's dangerous and requires a vibrant society of ongoing ethical discussion.
b) No governments at all in the internet age. Nobody has any particular authority to initiate violence.
That's where the line goes. We're still probably a few centuries away, but that's all the more reason to hone our course now.
What line are we talking about?