It is, however, acceptable to kill 20 enemy combatants to rescue a dignitary from a besieged embassy.
The phrase "the ends don't justify the means" is not hyperbole meant to discredit accomplishments, nor is it meant to remove the ends from the equation. It changes the equation: the ends must be weighed against the methods, rather than measured independent of the context used to accomplish them.
What is even more interesting is the opposite. "The ends not justifying the means" is typically invoked when weighing the actions necessary for a favorable outcome. Looked at from the opposite perspective ("do the means justify the ends?"), it presents a more difficult scenario. If X people die, but only appropriate means were used (and other means would have saved those X people), should those who chose the appropriate actions be held responsible for the failure to produce a favorable outcome? Or should they be heralded for making the difficult decision to use only morally acceptable practices?
Even if the dignitary was Von Ribbentrop?
http://en.wikipedia.org/wiki/Joachim_von_Ribbentrop
Semi-deontological ethical rules suffer the same problems as deontological ethical rules.
Here is the rule: "Do no harm."
Why should we follow it?
A rule-consequentialist would say, "Because if everyone followed that rule all the time it would lead to better consequences, on average, than if people tried to calculate the consequences of each individual action and act to maximize them."
Why would that lead to better consequences?
A rule-consequentialist could say, "Because people are quite bad at calculating consequences, especially in the tumultuous moments before a critical ethical decision, whereas we have had the time to think and properly calculate the hypothetical consequences of everyone following the rule."
I think that's exactly the argument Eliezer is making.
Seriously, I would extend his argument to assume that all 'hardware is corrupted', as he puts it, though I think it's more accurate to say 'all software is buggy'. For this amazing AI to even exist, it probably had to be pretty damn selfish, pushing out all other potential AIs to get to the top of the heap.
Just saying.
Not only do we condemn other innocents to death, but also ourselves.
The way I see it, it's about externalities. I push someone onto a train track to save ten people in a mineshaft, fine. Somebody else watches me push someone on a train track, doesn't see the mineshaft, and goes on to push 25 people onto train tracks in the future. Perhaps that's contrived, but when he describes coups in the same terminology it's a lot less so.
The author is saying that his answer to the classic hypothetical dilemma in ethics, "is it right to do harm (murder one) to prevent a greater harm (the death of five)?", which is usually framed in utilitarian-ish debates, is:
a hypothetical incorruptible, super-awesome version of me would murder the innocent person, but as I am not that type of being (and only that type of being can answer the question), I am not going to answer the question
But why not go all in and pose it thus: "is it better for one person to suffer eternal suffering to free all others from any type of suffering"? And that's why Jesus died for our sins you know. And that really happened. And Jesus was smarter than a hypothetical incorruptible super awesome version of you. Therefore ... I'm not sure where this is going.
Maybe what I'm trying to say is (and by all means argue the toss with me and don't shoot me down) Eliezer Yudkowsky sounds a lot smarter than he actually is.
I would also draw your attention to the first sentence of the next paragraph: "Now, to me this seems like a dodge." This isn't the core point of the essay, and the more I stare at it, the more it does seem like it's five paragraphs accidentally ripped from another essay ("And now the philosopher comes" -> "Now to me this seems like a dodge"); if you just cut those five out entirely it seems more focused, and those five paragraphs can spin off into another interesting essay. (One that would, I think, conclude that this is actually just a way of rephrasing the idea that philosophical hypotheticals are actually useless by virtue of being impossibly overspecified which itself comes from impossible oversimplification, and in general the hypothetical question "What if an absolutely mathematically impossible thing happened?" is not a fruitful line of thought.)
I did not highlight the distinction that in this class of hypotheticals the lesser harm requires action while the greater harm requires one to simply do nothing, to stand idly by as it were - oh my god, I can hear the voice of my prof in my head from days gone by as I clarify this point. Still, the causal link remains: one can either choose to act or not (or insist that you cannot even begin to play the game, as was done here). But apart from that action/inaction subtlety I have to disagree with you here; it is an accurate if not entirely straight-faced summary.
Look, you might have a good working understanding of "can't occupy the epistemological state" (oh really, why? because I haven't achieved the level of perfection of my future hypothetical self), but I find it fairly meaningless. Hint: substitute "epistemological" with "ethical" or even "aesthetic" to see if such an assertion becomes any more meaningful. Note: I am not saying that I am positioning myself against the "you can't even begin to play (or, I'm not playing) the game" stance or some variation thereof; my response to this dilemma would probably be something along those lines, given my aversion to hypothetical thought experiments such as this, which I feel contribute very little to debates in morality and ethics.
> This isn't the core point of the essay.

But surely this is not the case. The essay makes many points, sure, but this chain of reasoning is, I believe, fairly central; although it could be excised, I believe the author formulated the whole essay this way for a reason. This post-singularity being's properties are analysed in the light of a very classic problem in philosophy. If you look at the comments, you will see that a poster points out that "regular" philosophers invoke mythical beings such as 'angels' or 'ideally rational agents', which are non-tech versions of what is going on here. I don't think it's a dodge, it doesn't even seem like a dodge, and the author didn't even need to point this out. Where I'm coming from is that this ground has been covered, and covered in language that is not obfuscated. The jargon salad does nothing more than communicate "look at me, I'm so clever", which is why I claim that the author sounds smarter than he actually is.
> just a way of rephrasing the idea that philosophical hypotheticals are actually useless by virtue of being impossibly overspecified which itself comes from impossible oversimplification

This would be something a logical positivist would say. It's something I'm very inclined towards. I agree that hypotheticals like this generate a good amount of noise and heat but fail to be constructive or to advance our understanding of ethical questions, beyond perhaps showing which ethical norms a person subscribes to, to wit: all life is sacred and one is commanded by a supreme being to do no harm; all life has intrinsic worth/value, so you shall never through action do harm; you shall optimize for the greater good; and so on.
For instance, quite a few dictators seize power and suppress dissent because they honestly believe that's the best for the people. It's all too easy to convince oneself that something that is convenient for oneself but harmful to another is, ultimately, "for the greater good".
Of course, exceptions to the "do no harm" rule do exist. However, the probability of the current case being an exception may be vanishingly small, even if you are convinced it is an exception. If this probability is indeed very small, the "do no harm" rule produces a better expected outcome than a "pragmatic/utilitarian" point of view for a human (imperfect) agent.
In the philosopher's "100% sure" case, a true mathematical weighting can be made and (true) utilitarianism wins; but no human can ever be that sure.
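The expected-outcome claim above can be put into numbers. Here is a toy sketch in Python (all figures are my own illustrative assumptions, not anything from the thread): an agent may harm one person to save five, but misjudges the situation with probability `p_error`, and a wrong act may carry extra downstream costs (e.g. onlookers imitating it).

```python
def expected_net_lives(p_error, harm=1, saved=5, externality=0):
    """Expected change in lives from acting, relative to standing by.

    p_error:     probability the agent's judgment is wrong (no one is saved)
    harm:        lives taken by the act itself
    saved:       lives saved when the judgment is right
    externality: extra downstream harm when the judgment is wrong
    """
    right = (1 - p_error) * (saved - harm)     # judged correctly: net lives saved
    wrong = p_error * (-(harm + externality))  # judged wrongly: only harm done
    return right + wrong

# A perfectly calibrated agent gains 4 lives per act; a fully mistaken one loses 1.
print(expected_net_lives(0.0))                  # 4.0
print(expected_net_lives(1.0))                  # -1.0
# With a coin-flip error rate and modest downstream costs, the
# "do no harm" baseline (expected value 0) comes out ahead.
print(expected_net_lives(0.5, externality=10))  # -3.5
```

The sketch only restates the commenter's arithmetic: how well the rule does depends entirely on how error-prone the agent is and how costly mistakes are, which no real human can know with the philosopher's "100%" certainty.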
I was merely bringing the thought experiment to its logical utilitarian conclusion and showing that in doing so it has some mythical/biblical/primordial resonances.
If MI-5 kills 4 terrorists who were preparing to bomb a bus station with hundreds if not thousands of people, are they in the wrong for not respecting the rights of the terrorists? Or would they be in the wrong for not protecting the citizens of the country they're supposed to be protecting?
The ends will always justify the means in the mind of the person doing the act. What makes the actions of one person (or government) right is that in the minds of other people the means are also justifiable.
If someone is willing to disregard my rights and kill me to steal my wallet, I'm willing to forgo their rights, because let's be honest: the ends do justify the means if they benefit not only yourself but the rest of the population.
In the anti-terrorism case, if the presumed terrorists were in fact innocents, then they (or their champions) would have a right to retaliate against the aggressors or against their commanders.
In our minds and to ourselves, we are always justified, but we can't justify ourselves from a moral standpoint, nor plead our case if the innocents have decided to retaliate.
You've just offered another slogan. You're going to have to work harder than that if you want to argue against consequentialism --- the idea that what matters are the consequences of each action, not the principles it contravenes.
> ...if we respect our fellows' human rights
This is a problematic clause. Whose rights do you disrespect: the one you pushed, or the 5 you could have saved?
What are said rights?
For example, do I have a right to food? If so, who is obligated to provide it? ("govt" isn't an answer.)
How about a right to live in the southwest US? (I may require a dry climate for health reasons.) How about with an ocean view, for my peace of mind? How about a right to live near people whom I like, or away from folks whom I don't?