* Facebook's Free Basics initiative in India: http://www.bbc.com/news/technology-35169226
* The Stingray phone surveillance device: https://en.wikipedia.org/wiki/Stingray_phone_tracker
* North Korea's Red Star operating system: https://media.ccc.de/v/32c3-7174-lifting_the_fog_on_red_star_os
* Hacking Team's surveillance software (sold to countries with a poor human rights track record): https://theintercept.com/2015/07/07/leaked-documents-confirm-hacking-team-sells-spyware-repressive-countries/
Do you think that the software engineers who consented to work on the above projects acted ethically?
I'm not trying to trap you. I'm willing to work with whatever definition you specify, and I won't try to play semantic games with the definition if it's at least close enough to something specific to work with. I'm not asking for a universality claim. But without some specification of what you mean the question is vague to the point of unanswerability.
The North Korean programmers may well have truly believed in what they were doing. A utilitarian may well truly believe that even if Facebook Free Basics isn't a perfect program, it's a net good for the participants. The "surveillance software" can be seen as just a tool and whether the tool makers are responsible for its misuse is ethically debatable. (And let me be clear I mean that literally, not as an attempt to rhetorically state a position. I could write a coherent argument both ways.) After all, many people even in free countries end up calling for strict regulation of corporations and that same "surveillance software" is pretty much the way you instantiate such regulation, so, is it really clear that it's intrinsically evil?
And again, let me emphasize that the point I'm making here is just the breadth of possible arguments about ethics that can be made. My previous paragraph is itself ethically incoherent, inasmuch as I'm not even trying to take a consistent stand overall, but merely trying to highlight the most obvious problem per issue where ill-defined "ethics" makes it hard to even debate the matter.
It's mighty hard to pin down a universal definition for "art", "love", or even "game", and yet we use these words regularly and mostly very successfully to communicate.
Discussing "ethics" is difficult too, but I am unconvinced by arguments of the "it's obviously all subjective" variety.
My question is precisely asking you to define that.
One common ethical stance is utilitarianism. This stance purports to optimize collective welfare. I don't know any NSA people, but it seems likely that they share a utilitarian approach to ethical decision making, and they define happiness as security from evildoers. (I'm speculating about their stance, not claiming it to be my stance.) Selfish interest is part of utilitarianism.
For example, if you have a utilitarian stance you might choose to refrain from having sex with strangers to protect your health and theirs.
Another ethical stance is deontology. In this stance you refrain from having sex with strangers because it's externally defined as wrong. For example, "Do not commit adultery" shows up in one common collection of externally defined rules. If you say to someone, "that's illegal" you're speaking from a deontological stance.
A third stance is altruism. When operating in that stance, people value the welfare of others above their own welfare. Many who donate blood do so from an altruistic stance.
Real people have a combination of actual stances. And for most of us, our real, operative stances are often not quite aligned with what we say our stances are. That's just reality. Almost nobody completely walks their talk on this stuff.
An engineer who works through the night to repair a critical defect probably has a combination of ethical attitudes. Trying to make users happy is altruism. It may also be deontological -- they're violating their quality agreements. It may also be selfish and utilitarian: losing face, losing revenue and getting fired are to be avoided. All that is fine.
Life is harder when different people have contradictory stances. The life and death of Aaron Swartz is a tragic example of that.
Immanuel Kant proposed the "categorical imperative". Oversimplifying: he suggests that we should live and behave the way we WISH everybody would live and behave. Professional codes of ethics attempt to employ the idea of the categorical imperative to create a shared ethical stance.
Codes of ethics are helpful precisely because of the slipperiness of ethics. Good codes of ethics offer a common language. And they serve to convert various ethical stances into deontological stances--written external collections of rules to follow. They make it easier for us to predict each others' ethical behavior.
So, a plea: when questions of ethics are up for grabs, let's be explicit about our own ethical stances and generous when trying to interpret other people's stances.
I can't put all the blame on the software engineers involved: the decision to do this surely lies higher up the chain. But did any of the coders object or blow the whistle beforehand? Or were they all fine with "just following orders"?
Now that more is known, have any come forward, or even given an anonymous interview about it afterwards? AFAIK, no.
See here for a more complete story: https://media.ccc.de/v/32c3-7331-the_exhaust_emissions_scand...
I personally have quit a job in the past because (among several other important reasons) I felt the project's primary application (surveillance) was not something I wanted to be associated with. I have also chosen not to voluntarily participate in the patent process for any software I've designed (foregoing those slimy patent "bonuses" other engineers seem to like to gobble up). But not every software developer shares my particular set of ethics. How would you come up with a definitive list of what does and does not violate the "Software Hippocratic Oath"?
I think a doctor loses his medical license if he does something bad. A lawyer can lose his license to practice law. At least in the US, you can't lose your programmer license... write a virus? 'Tis okay, in a month you can go work at a bank...
In all of those cases, it is conceivable that programmers could easily justify the "larger purpose" within their own moral/ethical framework. This is a problem inherent in all collective efforts.
"There's always the same amount of good luck and bad luck in the world. If one person doesn't get the bad luck, somebody else will have to get it in their place. There's always the same amount of good and evil, too. We can't eradicate evil, we can only evict it, force it to move across town. And when evil moves, some good always goes with it. But we can never alter the ratio of good to evil. All we can do is keep things stirred up so neither good nor evil solidifies. That's when things get scary. Life is like a stew, you have to stir it frequently, or all the scum rises to the top."[1]
[1]: http://www.goodreads.com/quotes/12020-there-s-always-the-sam...
I realize these kind of projects are big consulting firms' bread and butter, but if you as an engineer continue to work on a project that's being managed with an eye towards "Failure is okay, because our lawyers wrote the contract to cover that contingency" then that's pretty scummy from an ethical perspective.
https://en.wikipedia.org/wiki/Virtual_Case_File
Edit: On the flipside, I feel a lot more comfortable when employers I've worked for have sat down with customers and had the "Look, this just isn't working out. We recommend you cancel this project and we tie off our relationship, because it's not going to end well for either of us if we continue" talk.
- Anyone who works for or supports militaries who wage aggressive wars or military actions. Arguably, depending on your stance on pacifism, anyone who works for or supports the military at all.
- Anyone who supports torture or "intensive interrogation".
- Anyone who helps to imprison people for victimless crimes, or otherwise supports the injustices of the legal system.
- Anyone who works in or supports unsolicited advertising, which tries to manipulate people into buying junk they don't need, often lying in the process.
- Anyone who works to support the security, surveillance, or police state apparatus.
Given certain viewpoints these are ethically indefensible. From other viewpoints these can be not only ethical, but righteous. A lot depends on where you're coming from and what your politics are.
Unfortunately, I don't think most people care or think much about the ethical implications of their work or what they do. And if they do think about it, they usually just throw up their hands and say there's nothing they can do about it, or everyone else is doing it and if they don't do it somebody else will, or that they got to make a living somehow, or that at least they're not doing something even worse, or that they were just following orders. The excuses, even in the rare cases when people recognize there might be an ethical problem with their actions, are legion.
And then there is Java EE...
You can't fault someone for making a morally wrong choice if they never had a morally right option to choose.
They might also genuinely believe--perhaps due to misinformation--that they are doing good rather than evil.
Besides that, I believe that it is acceptable to perform an action for the benefit of a mutually loyal relationship at the expense of anonymous strangers.
It may be that someone who has a moral objection to IMSI catchers would, instead of endangering himself for the sake of strangers, anonymously pass a tip to a defense attorney containing a suggestion for discovery. Would that person then be condemned as unethical for supporting the technology, or expiated by peeling away one of the veils of secrecy? There is never any shortage of skilled workers who can be psychologically manipulated into acting against their own interests, after all. If the morally conflicted person never worked there, the secrecy may have remained intact longer, creating a larger window for potential abuse.
It's very difficult to condemn someone for trying to collect a regular wage. "I was only following orders" is not exculpatory, but if the alternative to following the order is sufficiently dire, it is sufficient to discourage me from adding moral condemnation to any of the other consequences that may result.
In a mad scramble to create some impromptu how-to instructions for a customer's users, the person made screenshots that revealed a bit of personal and confidential information.
The person doing that knew it was wrong at the time. This took place in a dodgy ethical milieu, where the customer's management refused to support the publication of high-quality how-to materials, and the customer's users were failing and flailing, and themselves emailing screenshots showing private data.
In my belief, this kind of breach of ethics is much more common than colossal systematic moral collapse. The trouble is this: It contributes to a morally slippery workplace.
In short, most people won't perceive their job as unethical at all. Those who do see the problems won't relate them to the tiny bit they work on, but to the whole that doesn't depend on them (and they're probably right).
And those who feel enough remorse to stop doing their tiny bit will be replaced by someone else who will happily continue it anyway.
So I don't blame the engineers working on those systems despite the results being so immoral.
Appeals to ethics are not very useful. Otherwise most of the Tech giants of this world, not only the NSA, would have an awful time finding someone to work for them.
What do you hope to achieve by finding real world examples? People do Bad Things for some definition of bad all the time, no matter their profession, but good luck trying to make any claim to absolute wrongdoing without the discussion devolving into semantics.
It will be very difficult for you to ascribe a moral position to any real world example because the real world isn't binary. To a first approximation, debates about ethics are usually won by vindicating the majority in-group's opinion about something they disagree with.
As you can tell, I'm basically saying this discussion isn't productive. You'll probably get some trendy answers like "NSA" or something to do with "surveillance" but I'll bite - how about the blackhat hackers who are employed by organized crime to de-anonymize "problem people" so those people can be found and "dealt with"? But that's just my opinion.
One of the biggest problems with this sort of thing is that no matter how powerfully you might believe someone working at e.g. the NSA is doing something evil, that individual likely feels just as powerfully that they are working to increase the net good in the world. In fact, they probably have an objectively coherent argument in favor of their position.
Well, not really about SOFTWARE engineers, I suppose.
Are the scientists who develop new drugs with potential side effects, even death, murderers? Are they ethically challenged, morally corrupt? Some people died using their invention; some could even have been intentionally or accidentally killed. Rogue governments could use the drugs to hurt or abuse people. I'd say no: the scientists (like engineers) aren't the problem. They did something that served some valid purpose, but their invention (creativity) can be abused or misused.
Stingray devices: while I totally disagree with how they are used, I can see the validity of their use in some specific law enforcement cases. The problem to me isn't the technology, but the lack of ethics and morals in the people using them: lying to judges about the usage, lying about its capabilities, etc.
Hacking Team's surveillance software: having read just a little about it in the past, it appeared again that the software was being used appropriately for a genuine and valid purpose, at least at first. However, it was then sold to people who planned to misuse it and who have a track record of abusing human rights. So is it the software's creation that is the problem, or the assholes who sold it to dictators and abusers?
A syringe isn't an evil device; it is necessary for the medical community to do its job, but it is also a device that can be used to kill, commit suicide, overdose on drugs, etc. Are the people who invented it morally corrupt/ethically challenged, or does their invention just have the potential to be abused?
You might say that you are not directly responsible and that these are second- or third-order effects. Or you might even say every man has his own judgement, and that you cannot be held ethically responsible for his actions with the software you created.
Sure, you are not responsible, but you were an enabler of his behaviour; you brought that man/company/whatever one step closer.
To answer your question, I think you don't need to create separation between the two, they are intertwined. The software would be less likely to have been created if you didn't write it in the first place and you probably wouldn't have written it if someone wasn't going to use it.
So as a software engineer I think it's reasonable to expect to try to understand who is it that you are creating the software for, how are they planning to use it, who is selling it to whom, what kind of people they are etc.
But investing your work in a company which sells surveillance equipment to dictators is downright disgusting. People are being tortured and killed because their governments managed to track them down thanks to that company. That seems like complete moral ambivalence at work. Even if one could have somehow rationalized enabling their own government to spy, upon finding out who the company sells to, they must quit. And perhaps also leak everything.
Google outs transgender woman: http://www.theinquirer.net/inquirer/news/2321446/transgender...
Anyone who worked to aid the US government to circumvent Tor.
The problem with ethical actions is that it depends upon one's framework. Now there are areas we can tend to all agree on. Taking money to sabotage software to kill someone we agree is innocent is probably a good example. But once you get past the clear cut examples, it begins to be clear that ethical behavior is subjective and our subjective views do not always agree. For example, I can find people who would agree strongly with one of the above two that I listed while disagreeing strongly with the other, and I can find people who would do the opposite.
All three prey on people with addictive personalities.
There are cases where it is obvious and people will quit over it, but if you are given the task, say, to change the firmware on this particular device to execute these external calls for conditions A, B, C, how do you know if this particular check is used for something you don't agree with?
What if somebody had you implement a bunch of logging and management interfaces, telling you it's for QA, when instead it is for mass surveillance? How are you going to know?
I think the ethics come more into play after the fact, if you find out that your company has done something you don't agree with (and you might have taken part in it without knowing), are you going to quit over it or not?
And I would personally add Microsoft's https://en.wikipedia.org/wiki/Embrace,_extend_and_extinguish program to the list.
Most engineers build something for a paycheck and the company paying them ends up using it for something unethical or illegal.
Add to that list everyone who works for a social network company or who works for an ad company.
So, do you think providing a social networking site is inherently unethical, or just the selling of user information for advertising?
I ask because I've been kinda working on an alternative to a particular one, and if you think that is unethical, I'd like to know why, in case your reasons why are convincing.
(because I don't want to do something unethical, unless not doing it would be morally worse than doing it.)
Now, a network that isn't ad-funded may not do these things. That removes my largest complaints about them. It is those things which make them so grossly unethical.
As for your project I wish you luck. You will need it.
https://en.wikipedia.org/wiki/Patrick_Naughton#Sex_crime_arr...