But I see your point, and certainly would like to see more constructive suggestions than mine.
Because I know very well how easy it is for people to think "oh, well, that one little thing isn't so bad" when faced with bills to pay or a raging boss. Many of those things really aren't all that bad in isolation. Except it doesn't take many "one little things" before you have a total privacy disaster.
Alternatively, a union can put pressure on companies never to ask for certain things, or to meet a standard on privacy issues. Unions are usually viewed with hostility in the tech industry, but they are just another tool; a union can be formed for a specific purpose and ignore, e.g., wages or anything else.
Developers are not sweatshop workers beholden to the company store. They have a plethora of employment options. If they willingly choose to work for such a company, the case could be made that they have made themselves legitimate targets for having made this choice.
Normally in a market system you want to keep the chain between cause and damage short enough to be comprehensible for the people causing it; otherwise, there's no good way to make them avoid it.
This is a general problem with creating an algorithm to supplement or replace anything previously done by humans. Even if the algorithm is given accurate and unbiased data (which is rare), the choice itself to use an algorithm in the first place and the design of the algorithm also contain bias.
Sometimes this bias is intentional, as with "redlining," where housing loans were denied to Black applicants using various proxies for race. I suspect that in most cases the bias is accidental, which is why it is very important to check the results carefully for unintended bias. In a situation like Facebook's, simply asking users first (opt-in) whether they would like to participate in "local friend discovery" would be a great start.
No, they are not. For example, it is now common knowledge that Mark Zuckerberg bought up all the nearby houses in every direction to get more privacy. [1] Do you and I have similar access to resources?
Suppose someone hacks into Facebook, and both you and your friend who works at Facebook have your identities stolen, leaving you penniless. Who is more likely to be in great financial distress the next day? Who is more likely to know the full impact of the situation?
Also, if someone at Facebook were negatively affected in some way, they probably have friends inside who can help them out. Do you and I have a direct line to a similar friend? In fact, we are likely to be the very last people to learn of any such exploitation.
Besides, the closer you are to the algorithm, the more likely that you know how to circumvent it, even exploiting some simple bugs that others are not aware of.
And how about opting out? As a technologist, how hard do you think it would be for an insider to add himself or herself to the opt-out database, and also make sure there were no hiccups in the process? Contrast that with something as simple as opting out of junk mail - have you been 100% successful?
I just made four observations about how you and I do not possess the same advantages as an insider at Facebook. What are the odds that something could slip through four different test cases you set up and still turn into a bug in production? Minimal, don't you think?
You make really good points about not countering immoral action with more immoral action. But your notion that FB employees could somehow become unwitting victims of their own technology sounds seriously far-fetched to me.
[1] http://www.businessinsider.in/Mark-Zuckerberg-Just-Spent-Mor...