For example, this article says the following: "Members of Google’s Ethical AI team sent additional demands to Pichai, calling for policy changes, among other things." If you look at these demands, they include the following:
* The dismissal of the company vice president.
* The reinstatement of Gebru at a higher level than she previously had.
Whether these demands are ethical is not at all clear to me.
Another case in point (one I saw a lot, anecdotally, at the place I used to work) was the call for regulation to control AI research; indeed this article talks about it in the section "Scrap self-regulation". From a different political perspective, someone might argue that giving the state more power to pursue legitimised violence against AI researchers is itself unethical (note: this is not my view, it is an example). I saw people air views like this in the ethics discussions and receive very unfavourable treatment, bordering on unethical.
It would be great if AI ethics were a field that produced solutions to problems, such as verified datasets, codes of best practice and testing tools. Instead it just seems to be a way to highlight issues and advocate for political positions, which isn't worthless in itself, but it feels like only the first step in what should be a journey. Unfortunately, highlighting problems is really easy; fixing them is the challenging part.
I often got the feeling that the people interested in this just cared about directing the labour of other people towards their own interests, i.e. leading from behind rather than leading from the front. It all seemed rather unethical to me.
These people are not interested in meta-ethics or any critical examination of their own chosen beliefs. They just want things to be the way they want them to be, and manipulating the AI is a means to that end.
I have no view on whether that is happening at Google. Just a general observation.
https://www.overcomingbias.com/2013/08/inequality-is-about-g...
You left out some vital pieces of context from the same article[0]:
> The note centers on the departure of Google AI ethics researcher Timnit Gebru, which set off protests inside the company. Citing that situation, the employees called for a company vice president, Megan Kacholia, to no longer be part of their reporting chain. “We have lost trust in her as a leader,” the researchers wrote, according to a copy of the letter obtained by Bloomberg.
> “Google’s short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere,” the letter reads.
> “This research must be able to contest the company’s short-term interests and immediate revenue agendas, as well as to investigate AI that is deployed by Google’s competitors with similar ethical motives,” the researchers added.
As I understand it, the demands themselves are not directly about ethics; rather, in order to do their job (dealing with ethics), the team needs the VP to stop blocking them for the sake of short-term revenue. As for Gebru receiving a higher position, my take is that, as with most companies, being higher in the hierarchy allows one to effect more change within the company (or, perhaps in this case, even just to get the job done).
[0]: https://www.bloomberg.com/amp/news/articles/2020-12-16/googl...
I understand that Gebru may feel dehumanized, but without seeing the email she references, it’s hard for me to understand her statement. If the CEO of Google painted her as “an angry black woman”, then that’s a really big deal. If she merely feels that way, then that’s less notable to me, as people frequently feel upset in ways that are unique to them but not interesting to me.
Similarly with the fire/resign thing. These stories all seem to say “fire” even though Google contests that account of what happened. Without reading her actual ultimatum email, it’s hard for me to tell.
So it’s frustrating that these stories don’t cover what I actually want to know. There are statements from Gebru that seem vague: easy to substantiate, yet not substantiated. So the articles seem to just focus on someone’s feelings.
I did end up finding Pichai’s email and while it was bland corporate speak, I don’t understand how it was particularly damning. I’m not sure how anyone could expect Google to apologize when they don’t think they are wrong, or especially through a company wide email. If anything it’s a positive that the CEO is taking such interest and spending such energy on this.
Articles like this would hold more power if the bias was removed or at least acknowledged with pointers to alternative takes.
I understand that Google accepted her resignation in the letter, and did not fire her outright.
You are wrong in the simplest sense, a little bit more wrong somewhat more deeply, and possibly even more wrong even more deeply.
> I understand that Google accepted her resignation in the letter
First, you cannot resign with a completely unspecified effective date; a conditional statement that in particular circumstances you would work with someone to set a final date is not a resignation. So there was no resignation to accept.
Second, even though Google initially characterized it as accepting her resignation, in the termination notice it immediately turned around and stated that she was being terminated immediately for conduct separate from the resignation.
Third, the fundamental dispute leading up to the whole “resignation/firing” endgame was conduct that, from the descriptions that have come out of Google AI in response to the management story, is quite arguably the kind of targeted campaign of hostile treatment that defines constructive termination, which would make even an explicit resignation at the end still a firing for many legal purposes.
Would you share some information on where this comes from? It sounds pretty authoritative. I’ve worked with people who said “I quit” and walked out. It seemed to succeed, as they never came back. Could they come back the next day and say “well, I didn’t specify a date, therefore my resignation wasn’t valid”?
I’m not familiar with labor law, but I thought that resignation was a loose term that could fit many patterns.
This definition, from feeling lucky on Google [0], just says it’s the formal notice of voluntary termination and that there are no laws governing resignations.
But it sounds like you have more info on this and I’d like to learn more about this.
https://www.bizjournals.com/bizjournals/how-to/human-resourc...
"Most state unemployment agencies consider a separation to be a voluntary quit (initiated by the employee) if the last day of work is within two weeks of the resignation date. So, if an employee gives two weeks’ notice and the employer accepts it immediately, the individual probably won’t be eligible for unemployment.
However, if an employee gave two months’ notice and the employer accepted it immediately, the former employee may be eligible for unemployment."
Gebru’s behavior seems wildly inappropriate, and my take on the actual research article is that it was indeed very weak, sanctimonious and legitimately deserving of the rejection for approval to publish externally until it had been edited significantly.
Writing a workplace letter telling your coworkers to stop working is just unacceptable in this circumstance. If Google had fired her, it would have been 1000% the right move, but they didn’t; they just accepted her bluff resignation, and that is totally on her.
Here were some of my thoughts on the actual publication,
And by “misinformation” I mean it’s either in the podcast or in this article. You be the judge.
So sorry to see you go, "ethics", but we're just not ready for you yet. Perhaps someday when we're more evolved! 8-))
We need government action to apply reasonable guardrails on the actions of the largest tech companies: a clear legal structure that puts society first and still allows growth and profits. I am a member of the Libertarian Party (a recent move, basically out of disgust with the DNC and RNC), and I feel a bit like a hypocrite for taking a non-Libertarian view of the current situation with Google and Facebook.
There's no 'whistle blowing' there, this is not even remotely on the 'needs intervention' radar.
'The Current Situation' with G and FB is complex; just because there might be some anti-trust issues over here, or some 'bad thing' over there, does not mean that 'this issue here' needs intervention or even remedy.
With respect to 'Bias and AI': companies are pretty self-aware of this and don't mind adjusting. I suggest the best way is to have independent researchers publishing, and to let those facts + some voices do the work. The researcher in question here actually published work highlighting the challenges of inequality in data, and G responded well to that. So ironically that might have been a better model than what they are doing now.