> "The Silenced No More Act bans confidentiality provisions in settlement agreements relating to the disclosure of underlying factual information relating to any type of harassment, discrimination or retaliation at work"
I find it hard to understand that, in a country that takes freedom of expression so seriously (and I say this unironically: American democracy may have flaws, but that is definitely a strength), it can be legal to silence someone for the rest of their life.
It's quite common for companies to put tons of extremely restrictive terms they can't actually legally enforce into an NDA, to scare future ex-employees out of creating a problem.
From the article:
"""
It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
"""
[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
It takes a man of real principle to stand up against that and tell them to keep their money rather than give up the right to speak ill of a potentially toxic work environment.
This is the kind of thing a cult demands of its followers, or an authoritarian government demands of its citizens. I don't know why people would think it's okay for a business to demand this from its employees.
Perfect! So it's so incredibly overreaching that any judge in California would deem the entire NDA unenforceable.
Either that or, in your effort to overstate a point, you exaggerated in a way that undermines the point you were trying to make.
This should not be legal.
Idk. Folks much smarter than I seem worried so maybe I should be too but it just seems like such a long shot.
I think it may be time for something like this: https://www.openailetter.org/
All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.
1) OpenAI wouldn't want the negative PR of pursuing legal action against someone top in their field; his peers would take note of it and be less willing to work for them.
2) The stuff he signed was almost certainly different from what rank and file signed, if only because he would have sufficient power to negotiate those contracts.
Delusional.
Large language models are not "smart". They do not have thought. They don't have intelligence despite the "AI" moniker, etc.
They vomit words based on very fancy statistics.
There is no path from that to "thought" and "intelligence."
Maybe the agreement is "we will accelerate vesting of your unvested equity if you sign this new agreement"? If that's the case then it doesn't sound nearly so coercive to me.
Equity adds a wrinkle here, but I suspect if the effect of canceling equity is to cause a forfeiture of earned wages, then ultimately whatever contract is signed under that threat is void.
(Don't have X) - is there a timeline? Can I curse out the company on my deathbed, or would their lawyers have the legal right to try to claw back the equity from the estate?
As for other companies that can pay: I can only assume that the cost to bribe skilled workers isn't worth the perceived risk and cost of lawsuits from the fallout (which they may or may not be able to settle). Generative AI is still very young and under a lot of scrutiny on all fronts, so the risk of a whistleblower at this stage may shape the entire future of the industry at large.
https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
This thread is full of comments making statements around this looking like some level of criminal enterprise (ranging from “no way that document holds up” to “everyone knows Sam is a crook”).
The level of stuff, ranging from vitriol to overwhelming, if perhaps circumstantial (but conclusive to my personal satisfaction), evidence of direct reprisal, has just been surreal; but it's surreal in a different way to see people talking about this like it was never even controversial to be skeptical/critical/hostile toward this thing.
I’ve been saying that this looks like the next Enron, minimum, for easily five years, arguably double that.
Is this the last straw where I stop getting messed around over this?
I know better than to expect a ticker tape parade for having both called this and having the guts to stand up to these folks, but I do hold out a little hope for even a grudging acknowledgment.
What made you think it was the next Enron five years ago?
I appreciate you having the guts to stand up to them.
They will have many successes in the short run, but their long-run future suddenly looks a little murky.
>in regards to recent stuff about how openai handles equity:
>we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
>there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.
>the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this. https://x.com/sama/status/1791936857594581428
I hope ex-employees sue and don’t contact him personally. The damage is done. Don’t be dumb folks.
However they could add this to new employee contracts.
Pushing unenforceable scare-copy to get employees to self-censor sounds on-brand.
Doesn’t mean that that’s legal, of course, but I’d doubt that the legality would hinge on a lack of consideration.
Many years ago I signed an NDA/non-disparagement agreement as part of a severance package when I was fired from a startup for political reasons. I didn't want to sign it... but my family needed the money and I swallowed my pride. There was a lot of unethical stuff going on within the company in terms of fiduciary responsibility to investors and the BoD. The BoD eventually figured out what was going on and "cleaned house".
With OpenAI, I am concerned this is turning into a huge power/money grab with little care for humanity... and "power tends to corrupt and absolute power corrupts absolutely".
The power grab happened a while ago (the shenanigans concerning the board) and is now complete. Care for humanity was just marketing or a cute thought at best.
Maybe humanity will survive long enough that a company "caring about humanity" becomes possible. I'm not saying it's not worth trying or aspiring to such ideals, but everyone should be extremely surprised if any organization managed to resist such amounts of money and maintain any goal or ideal whatsoever...
Sound like a better solution?
Well... I know first hand that many well-informed, tech-literate people still think that all products from OpenAI are open-source. Lying works, even in the most egregious of fashions.
Unfortunately, Orwellian propaganda works.
Diamond multi-million-dollar handcuffs, with which OpenAI has bound employees to lifetime, secret-service-level NDAs: yet another unusual arrangement, after their so-called "non-profit" founding and their contradictory name.
Even an ex-employee saying 'ClosedAI' could see their PPUs evaporate to zero in front of them, or find they are never allowed to sell them and have them taken away.
Companies can cancel your vested equity for any reason. Read your employment contract carefully. For example, most RSU grants have a 7 year expiration. Even for shares that are vested, regardless of whether you leave the company or not, if 7 years have elapsed since they were granted, they are now worthless.
Once vested, RSUs are the same as regular stock purchased through the market. The company cannot claw them back, nor do they "expire".
But none of this means the company can just cancel your RSUs unless you agreed, in your equity agreement, to them being cancelled for specific reasons. I have worked at several big pre-IPO companies that had big exits. I made sure there were no clawback clauses in the equity contract before accepting the offers.
Turns out they're right: they can put whatever they want in a contract. And again, they are correct that their wage slaves will, 99.99% of the time, sign whatever paper is pushed in front of them alongside the words "as a condition of your continued employment, [...]".
But also it turns out that just because you signed something doesn't mean that's it. My friends (all of us young twenty-something software engineers much more familiar with transaction isolation semantics than with contract law) consulted with an attorney.
The TLDR is that:
- nothing in contract law is in perpetuity
- there MUST be consideration for each side (where "consideration" means getting something. something real. like USD. "continued employment" is not consideration.)
- if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.
- and when it comes to employers and employees, the employee had damn well better be getting a good deal out of it, especially if you are trying to prevent the employee (or ex-employee) from working.
A common pattern emerged: our employer would put something perpetual in the contract and offer no consideration. Our attorney would tell us this isn't even a valid contract and not to worry about it. Then the employer would offer an employee some nominal amount of USD in severance and put something perpetual into the contract. Our attorney told us a judge would likely use the "blue pencil" rule to read in "for a period of one year", or it would be prorated based on the amount of money given relative to the former salary.
(I don't work there anymore, naturally).
Isn't that the reason more competent lawyers put in the royal lives[1] clause? It specifies that the contract is valid until 21 years after the death of the last currently-living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly be beyond the lifetime of any currently-employed person.
Even lowest level fast food workers can choose a different employer. An engineer working at OpenAI certainly has a lot of opportunities to choose from. Even when I only had three years in the industry, mid at best, I asked to change the contract I was presented with because non-compete was too restrictive — and they did it. The caliber of talent that OpenAI is attracting (or hopes to attract) can certainly do this too.
Personally I'd say there needs to be a general restriction against including blatantly unenforceable terms in a contract document, especially unilateral "terms". The drafter is essentially pushing incorrect legal advice.
Hmmmn. Most of the humans where I work do things physically with their hands. I don't see what AI will achieve in their area.
Can AI paint the walls in my house, fix the boiler, and swap out the rotten windows? If so, I think a ChatGPT subscription is very reasonably priced!
But if your job is mostly sitting at a computer, I would be a bit worried.
Now it’s a money grab.
Sad because some amazing tech and people now getting corrupted into a toxic culture that didn’t have to be that way
Hey hey hey! Sam founded the 4th most popular social networking site in 2005, called Loopt. Don't you forget that! (After that he joined YC and has founded nothing since.)
It's quite natural, that a co-founder, being forced out of the company wouldn't be exactly willing to forfeit his equity. So, what, now he cannot… talk? That has some Mexican cartel vibes.
If OpenAI and ChatGPT are so far ahead of everyone else, and their product is so complex, it doesn't matter what a few disgruntled employees do or say, so the rule is not required.
What's the consideration for this contract?
There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.
[0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...
Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.
https://en.wikipedia.org/wiki/Peppercorn_(law)
There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.
Speculation: maybe the options they earn when they work there have some provision like this. In return for the NDA the options get extended.
I’m not saying it’s right or that I agree with it, however.
> After publication, an OpenAI spokesperson sent me this statement: “We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.”
- Updated May 17, 2024, 11:20pm EDT
I've noticed that both Sam Altman personally, and official statements from OpenAI sound like they've been written by Aes Sedai: Not a single untrue word while simultaneously thoroughly deceptive.[1]
Let's try translating some statements, as if we were listening to an evil person that can only make true statements:
"We have never canceled any current or former employee’s vested equity" => "But we can and will if we want to. We just haven't yet."
"...if people do not sign a release or nondisparagement agreement when they exit." => "But we're making everyone sign the agreement."
[1] I've wondered if they use a not-for-public-use version of GPT for this purpose. You know, a model that's not quite as aligned as the chat bots, with more "flexible" morals.
I know many people on this site will not like what I am about to write, as Sam is worshiped here, but let's face it: the head of this company is a master scammer who will do everything under the sun and the moon to earn a buck, including, if necessary, destroying himself along with his entire fortune in his quest to make sure other people don't get a dime.
So far he has done it all: attempted regulatory capture, a hostile takeover as CEO, thrown out all the other top engineers and partners, and ensured the company remains closed despite its "open" name.
Now he is simply tying up all the loose ends, ensuring his employees remain loyal and are kept on a tight leash. It's a brilliant strategy, preventing any insider from blowing the whistle should OpenAI ever decide to do anything questionable, such as selling AI capabilities to hostile governments.
I simply hope that open source wins this battle so that we are not all completely reliant on OpenAI for the future, despite Sam's attempt.
This is the article that the author talks about on X.
If someone shares something that's a lie and defamatory, then they could still be sued of course.
The Ben Shapiro/Daily Wire vs. Candace Owens dispute is another scenario where the truth and open conversation would benefit all of society, OpenAI and the Daily Wire arguably touching on topics of pinnacle importance; instead the discussions are suppressed.
It seems that standard practice would dictate that you sign an NDA before even signing the employment contract.
[1] https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s...
There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.
I'm not sure this is true. If all the things people are doing are done so much more cheaply that they're almost free, that would be good for us, as we're also the buyers as well as the workers.
However, I also doubt the premise.
If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer just as they refuse to relinquish it to more competent people. For the vast majority of people, AI not only doesn't pose a risk but will only help reveal the incompetence of the ruling class.
He clearly states why he left. He believes that OpenAI leadership is prioritizing shiny product releases over safety and that this is a mistake.
Even with the best intentions, it's easy for a strong CEO like Altman to lose sight of more subtly important things like safety and optimize for growth and winning, eventually at all costs. Winning is a super-addictive feedback loop.
They can totally deal with appearing petty and thin-skinned.
So yes, they're that fragile.
It’s yet another sign that the AI bubble will soon burst. The laughable release of “GPT-4o” was just a small red flag.
Got to keep the soldiers in check while the bean counters prep the books for an IPO and eventual early investor exit.
Almost smells like a SoftBank-esque failure in the near future.
Not only would this be viewed as unethical in Germany; I could see a CEO going to prison for such a thing.
I know a manager for an EV project at a big German auto company who also had to sign one when he was let go and was compensated handsomely to keep quiet and not say a word or face legal consequences.
IIRC he got ~12 months wages. After a year of not doing anything at work anyway. Bought a house in the south with it. Good gig.
I also work hard not to print gossip and hearsay (I try not to even mention so much as a first name; I think I might have slipped once or twice on that, though not in connection with an accusation of wrongdoing). There's more than enough credible journalism to paint a picture. Any person whose bias (and I have my own, but mine is a philosophical/ethical/political agenda, not a grudge over being snubbed for a job or something) has not utterly robbed them of objectivity can acknowledge that "this looks really bad, and worse all the time" on the basis of purely public primary sources and credible journalism.
I think some of the inside baseball I try very hard not to put in writing might be what cranks it up to “people are doing time”.
I’ve caught more than a little “less than a great time” over being a vocal critic, but I’m curious if having gone pretty far down the road and saying something is rotten, why you’d declare a willingness to defy a grand jury or a judge?
I've never been in court, let alone held in contempt, but I gather you can do fairly hard time for openly defying a judge.
I have friends I’d go to jail for, but not very many and none who work at OpenAI.
Yes, but:
(1) OpenAI salaries are not low like early stage startup salaries. Essentially these are highly paid jobs (high salary and high equity) that require an NDA.
(2) Apple has also clawed back equity from employees who violate NDA. So this isn't all that unusual.
After all, at this point, OpenAI:
- Is not open with models
- Is not open with plans
- Does not let former employees be open.
It sure does give us a glimpse into the Future of how Open AI will be!
Also, when secrets or truthful disparaging information is leaked anonymously without a metadata trail, I'm thinking there's probably little or no recourse.
Fucking monkeys.
0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)
Individualistic
Nobody depends on you, I hope.
If an ex-OpenAI employee tweets, from an official account, a link to an anonymous post of cat videos that later gets edited into some sanctioned content, in a way that is authentic to the community, would this still be deniable in court?
If there is something unenforceable about these contracts, we have the court system to settle these disputes. I’m tired of living in a society where everyone’s dirty laundry is aired out for everyone to judge. If there is a crime committed, then sure, it should become a matter of public record.
Otherwise, it really isn’t your business.
It's absolutely normal not to spill internals.
Based on these companies' arguments that copyrighted material is not actually reproduced by these models, and that any seemingly-infringing use is the responsibility of the user of the model rather than of those who produced it, anyone could freely generate an infinite number of high-truthiness OpenAI anecdotes, freshly laundered by the inference engine. Those couldn't be used against the original authors without OpenAI invalidating its own legal stance with respect to its own models.
I feel that this particular case is just another reminder of that, and would now make me require a preemptive "no equity clawbacks" clause in any contract.
Once again, we see the difference between the public narrative and the actions in a legal context.
Keep building your disruptive, game-changing, YC-applicant startup on the APIs of this sociopathic corporation whose products are destined to destroy all trust humans have in other humans so that everyone can be replaced by chatbots.
It's all fine. Everything's fine.
I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.
Even if they had a massive, successful, and public safety team, and got alignment right (which I am highly doubtful is possible), it is still going to happen as massive portions of white-collar workers lose their jobs.
Mass protests are coming and he will be an obvious focus point for their ire.
As for 'invalid because no consideration' - there is practically zero probability OpenAI lawyers are dumb enough to not give any consideration. There is a very large probability this reporter misunderstood the contract. OpenAI have likely just given some non-vested equity, which in some cases is worth a lot of money. So yeah, some (former) employees are getting paid a lot to shut up. That's the least unique contract ever and there is nothing morally or legally wrong with it.
Are employees being misled about the contract terms at the time of signing? Because, obviously, the original contract needs to have some clause regarding the equity situation, right? We cannot just make that up at the end. So... are we claiming fraud?
What I suspect is happening, is that we are confusing an option to forgo equity for an option to talk openly about OpenAI stuff (an option that does not even have to exist in the initial agreement, I would assume).
Is this overreach? Is this whole thing necessary? That seems beside the point. Two parties agreed to the terms when signing the contract. I have a hard time thinking of top AI researchers as coerced into taking a job at OpenAI, or as unable to understand a contract, or to understand that they should pay someone to explain it to them. If that's not a free decision, I don't know what is.
Which leads me to: If we think the whole deal is pretty shady – well, it took two.