Ultimately it makes the whole Ethical AI department look more like a rubber stamp for Google.
It's one thing for reviewers, even anonymous reviewers, to reject a paper on its merits; it's another, in Timnit's own words [0], to be told "'it has been decided'" through "a privileged and confidential document to HR" despite clearing the subject matter beforehand. In light of a more general frustration, it's very reasonable for Timnit to escalate the situation by putting her own career on the table, simply to request that people engage with the paper rather than flat-out rejecting it.
And if Jeff wants to respond by immediately cutting ties, and by putting out a document that doesn't even address the situation at hand (edit: much less the underlying issues of unequal treatment for women that Timnit describes)... that's a reflection of his ethics and the ethics of the company that stands behind him.
[0] For those who haven't read Timnit's memo that Jeff references in the OP, it's worth reading: https://www.platformer.news/p/the-withering-email-that-got-a...
EDIT 2: follow https://twitter.com/timnitGebru to see more of her side of the story. She retweeted https://www.wired.com/story/prominent-ai-ethics-researcher-s... as a good explanation of the situation for laypeople.
>And you are told after a while, that your manager can read you a privileged and confidential document
Emphasis mine. Showing your employee that you don't even trust her with a written copy of the rejection of her paper is not a great way to engender a good working relationship. Note that this pretty clearly seems to have happened before Gebru sent the email that Dean characterized as an ultimatum.
The fact that she issued an ultimatum for the identities of the reviewers suggests that management was correct to have safeguarded them in the first place.
I didn't read it that way. I read that the person _demanded_ to know who gave a particular piece of critical feedback, and questioned the approaches instead of addressing them. The person gave an ultimatum: she would resign if details were not shared.
The critique here appears to have been fairly minor, too. Failing to cite some recent research is rarely grounds for rejection.
--Nicolas Le Roux
It was only ‘at the last second’ because Gebru chose not to follow the normal procedure.
If the paper genuinely can't be ready until one day before the external deadline, the right thing to do is engage with the reviewers in advance, explain the problem, and provide them with drafts and work in progress, so that they can complete their work a few hours after yours.
What Gebru did is the equivalent of bypassing code review and pushing to prod on Friday afternoon.
This is essentially false. The author submitted the paper only a day before publishing; given that at least some form of standard review took place, Google's actions cannot be construed as a 'roadblock'.
There was no 'roadblocking', and the review was certainly not 'unexpected'.
The constant misrepresentation of the facts in this situation is harmful for those ostensibly wanting to do good.
"This is why understanding who raised these concerns is important."
Since there was no roadblock - this answer makes no sense.
More likely, the researcher wanted a named list of the people she perceived as her personal enemies.
"Failing to cite some recent research is rarely grounds for rejection."
There doesn't seem to be any reasonable cause for major concern in this whole issue - the company raised some points, and she could have addressed them in professional terms.
Google stepped in and changed the procedure for this paper, because they wanted to spike it because they were embarrassed by it.
Asking for the identity of people that have the authority to ask for a withdrawal of your research without stating their issues with it seems understandable, if excessive.
But maybe I misunderstood something.
If people have an expectation of Google to turn out academically pure research then I certainly respect the position and encourage it in reality. But thinking like that means life is going to contain a bunch of surprises that really shouldn't be surprising. Google is simply not going to employ people who they recognise as undermining the success of Google. It is not feasible to run a company that way; roughly speaking companies can choose between ruthlessness and bankruptcy. If you expect tolerance of radicals and debate, look to the universities.
The possibly shady part is that they could be suppressing evidence that they broke the law, but, like you said, they can decide how to run their own business. I'm not even sure the researcher would count as a whistleblower if they didn't intend to report something illegal.
To make matters worse, in this case at least, the law or laws they may be breaking were established to protect a class of people the researcher is a member of.
Universities used to tolerate radicals and debate. But going by the copious media reports of the last few years, that doesn't seem to be how they operate any more.
You can't have it both ways.
And then when there was backlash they "promised to do better" and Sundar Pichai came out with some "principles" that the company would follow for AI.
Another 1-2 years later and here we are again - this just proves that whatever "AI Ethics Board" they might set-up, it will end up being a sham, because they'd never allow that board to stop them from using AI however they like if it's in the interest of the company's profit growth.
If we want real AI oversight we need to demand it from outside nonprofits or even government agencies (why not both?!) - and there should be zero affiliation between the company being monitored and those organizations/agencies.
It might be that they follow ethics because the appearance of doing so has monetary public-relations value. It always comes down to that, and for publicly traded companies that set up things like an "AI Ethics Board" it is always for show, since the incentives don't allow for anything else.
At the end of the day, someone's compensation depends on these things, and you can't be hurting the bottom line.
The founders have a controlling stake.