Edit: Google, too. Microsoft with its Israel and US Gov ties. Probably most of big tech tbh. How do you recommend we view these employees from an ethical perspective?
OpenAI so far has done the opposite, instead seizing the above as an opportunity.
That is a seriously meaningful difference. Their agreement with Palantir (fwiw OpenAI has been partnering with them for even longer) doesn't erase that.
(I understand that domestic and foreign deployment are separate issues — I'd personally object to both — but I'm not sure Microsoft has a reason to take a principled stand on either of those, and they have been working with intelligence for decades.)
Not to get all historical on you, but if you worked for IBM in the 1930s-1940s you may have worked on something that was used to perpetrate the Holocaust. Was that ethical? I don’t think so.
That said, it’s very easy to abstract yourself away from the harm. To tell yourself you’re not the one who builds the landmines, you just maintain the coffee machine at the landmine factory. But that’s just lying to yourself. An honest and deep appraisal of what your work is helping make happen is required to decide if your job is ethical or not.
https://claude.ai/public/artifacts/8f42e48f-1b35-450d-8dda-2...
And paying off their mortgage and feeding their families and has a job in unstable times.
Morals come a distant last in the current state of affairs.
This is not a relevant point to this discussion.
The way you frame it, you make it sound like an engineer at OpenAI has no choice but to work there or end up on the street. But an engineer at OpenAI is not going to end up driving a truck; they're going to remain an engineer.
Their whole schtick is based on ensuring safety for humanity given the existential risk of a singularity.
OpenAI employees MUST get called out, because entire economies and industries are being reshaped due to their statements.
They aren't some mom and pop shop, and they aren't some typical tech firm.
These people are lusting for generational wealth, not scrambling to put bread on the table.
Work at the main AI company. Company has severe ethical issues. Be a person who cares about that. Leave. Surprise, issues get worse.
It just shows that they did poor research on the company before joining (Meta is just as bad), that they were in on the grift (they joined OpenAI only after ChatGPT took off), and that this employee does not believe what they are saying.
But seriously - you are describing the kind of thinking that caused WW1, and the nuclear arms race that almost caused human extinction. It’s a bad idea that goes bad places.
The current state of affairs of modern warfare is: lots of deaths, lots of collateral damage.
Improving the technology used is more likely to lead to fewer collateral deaths of innocent people, and of your own soldiers as well.
There’s already enough weapons to blow the entire world up a thousand times over. Making armies smarter about how they use these deadly weapons is a good thing.
Technologists and intellectuals are notoriously terrible at these sorts of broader societal calculations. They all thought the internet and Social Media would obviously lead to global freedom, which it didn’t.
Now technologists think their new thing, AI coding/spreadsheet bots, will destroy the global economy and kill us all or lead to communist techno-utopia. What if we stop with the moralistic grandstanding and self-aggrandizement and take a deep breath. None of the overpaid pontificators at OpenAI has ever seen real combat, so to make confident claims about what nascent technology will do to it is silly.
This whole thread is going to age like milk.
But ok, let’s stick to weapons. The premise that we can wage war without sacrificing lives is a tantalizing one. But do you genuinely think that would prevent death? The drone warfare era under Bush and Obama shows that killing from afar with no skin in the game doesn’t lead to restraint or less war. It just leads to blowing up entire wedding parties.
Some countries can perform a successful headhunt in the span of an afternoon tea party, while other countries have to level cities for years and still fail to even touch the opposition leader. That's the difference between advanced and less-advanced systems.
If people here love peace, good. But if we could always reason our way out of conflict, then why did we invent the career of the professional police force?
Of course, it is possible that countries that have advanced too far ahead might bully the less-advanced ones. But then, maybe the less-advanced countries should look inward and reflect on why they can't create such advanced weaponry themselves. I don't know; maybe these countries, instead of forcing their own people to wear an obeisant face mask, should give back the power and opportunities so their people can actually grow and eventually contribute.
Skeptical that’s true. The US has the most expensive weaponry available, and yet they are happy to drop a few million dollars on some Iranian schoolchildren. It could be true, but I don’t think it is - if nothing else, based on the stereotype of the rich kids who total their parents' car.
> Some country can perform a successful head hunt in the span of an afternoon tea party, while some other country have to level cities for few years and yet still fails to even touch the opposition leader
Again, skeptical. The US is happy to share its tech with Israel, yet they are the ones levelling cities for years with no perceptible impact on leadership.
> then why do we also invented the career of professional police force?
Historically? To protect the property of the rich from the people they stole it from.
> forcing their own people to wear an obeisant smelling face mask
I don’t see a correlation between mask mandates and less economic power. China, for instance, had quite severe covid restrictions, and yet they are the kind of more-advanced nation you speak of. Most of Latin America had virtually no restrictions, and they are also “less advanced” wrt AI weapons.
Also, where on earth still has mask restrictions? Find a new grievance, please.
I get that there's nuance, but this feels like they want to make a big ethical stand without burning any bridges. You can have one of those.
OpenAI already had military contracts while this employee was at the company and there was no open letter last year about that.
Prior to that, they were at Meta and joined OpenAI after ChatGPT took off.
If they thought that AGI was about "principles", then not only were they naive, but it leads me to believe that they were only there for the RSUs, just like during their time at Meta.
Why is it so hard to be honest and just say you were there for the money, fame and RSUs and not for so called "AGI"?
Because then you'd miss opportunities like this one to market yourself. It's a kind of hedging your bets in order to get more money and/or stay out of jail if the winds change. (Jail can be expensive.)
Or it could be honest cognitive dissonance.
The autonomous killing thing is more reasonable, but still, if you're OK building death technology, I'm not exactly sure what difference having a human in the loop makes. It's still death.
I agree that the killbots red line is somewhat odd, but I guess you have to draw the line somewhere, and I prefer them having that principle to having no principle at all. (Also, it's possible that the AI insiders understand something I don't about why a human in the loop is important.)
Also it's a rather American-centric view. If a Canadian is working at OpenAI, should they care? Or would they care more about possible anti-democratic interference by the American government on Canada?
Any employee who stays, especially given the financial cushion they have, is complicit. Shame on all of them.
But here’s the sad truth: most of the knowledge workers at OpenAI won’t be of any value sometime soon because of the very tool they’re building.
Everyone has their own unique situation.
Going to work for these big SV corps is and always has been directly in service of US empire, that's literally what built the valley in the first place.
> I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
So it wouldn't even be worth an HN submission. Well, I think it can still qualify under the exception for exceptional news.
Absolutely nothing wrong with something written with AI. Just pointing it out.
Generated comments are banned on HN, FWIW.