/**
* These author ID lists are used purely for metrics collection. We track how often we are
* serving Tweets from these authors and how often their tweets are being impressed by users.
* This helps us validate in our A/B experimentation platform that we do not ship changes
that negatively impact one group over others.
*/
[0]: https://github.com/twitter/the-algorithm/blob/7f90d0ca342b92...

It doesn't have to be in the algorithm for the systems to be tweaked to please Elon's vanity metrics.
[I've been running lots of ML AB tests over the years, some in organizations of similar size & complexity as Twitter]
It definitely isn't just metrics. Any algorithm change that negatively affected Musk was clearly not going live.
Separately, which of these groups do you think that they use as a control?
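For what it's worth, the kind of purely-observational metrics collection the quoted comment describes is simple to sketch: bucket author IDs into tracked groups, then count serves and impressions per group so an experiment can compare the ratios across treatment arms. The group names, IDs, and function names below are invented for illustration; this is a minimal sketch, not Twitter's actual pipeline.

```python
from collections import Counter

# Hypothetical author-ID buckets; real systems would load these from config.
TRACKED_GROUPS = {
    "group_a": {101, 102},
    "group_b": {201},
}

def group_of(author_id):
    """Map an author ID to its tracked group, or 'other' if untracked."""
    for name, ids in TRACKED_GROUPS.items():
        if author_id in ids:
            return name
    return "other"

def collect_metrics(events):
    """events: iterable of (author_id, event_type) pairs, where event_type
    is 'serve' or 'impression'. Returns per-(group, event) counts."""
    counts = Counter()
    for author_id, event_type in events:
        counts[(group_of(author_id), event_type)] += 1
    return counts

metrics = collect_metrics([
    (101, "serve"), (101, "impression"),
    (201, "serve"),
    (999, "serve"),
])
```

Note that nothing here alters ranking; it only observes outcomes, which is exactly why the rest of this thread argues about whether the counts stay observational in practice.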
1. Your system does nothing to actually segment this specific group by their identity.
2. You are confident that the systems you have set up to reward good behavior and punish bad behavior are accurate.
If both of those are true, then even if the group is disproportionately negatively impacted by some form of recommendation or moderation, you know it is only because that group disproportionately engages in behavior that is bad for the platform. That isn't a problem. It would actually be worse for the platform overall if you did anything to appease that group.
That is exactly what Twitter's stance has been all along (in the pre-Elon era) and it IS a problem for the product because people being silenced due to their own bad behavior (example: misgendering transgender people) feel an injustice is being done. The rule-makers get to set the range of acceptable discourse on Twitter and those to the right of center have felt unfairly disadvantaged by the way it was done in the past.
Over time this has eroded trust in the product. Even if people aren't being labeled and ranked based on whether they are red team or blue team, the people deciding what "good" and "bad" behavior looks like on the platform still have the power to disproportionately impact these groups.
There’s benefit of the doubt then there’s just… whatever the polar opposite of that is.
Now one side can spew as much disinfo and incitement to violence as it likes, and any algorithm change that prevents this shit from getting amplified will be rejected as bias.
BSaaS = Both Sides as a Service
I was wondering why I see so many tweets by him, and what his "group's" impression quota is.
This is actually pretty hilarious.
I suspect the flag corresponds to weights not present in the repo.