Once again someone is calling for power to be moved away from people and centralised. What's labelled "morality as a feature" could just be "de facto censorship".
To put it another way: inciting violence in Myanmar is the worst-case scenario for freedom of speech, and even then it is only a worst case because of the violence itself; the speech alone would not make it one. Why does the article not consider the worst case (or even the likely case) of having speech governed by a central body?
We already consult tech for decisions with moral consequences when we google them. We also test ideas using tech by posting them on social media and seeing whether they align with popular narratives. I'd even say most educated people under the age of 40 are already entirely dependent on social media for moral guidance and approval. "Do it for the 'gram" summarizes it well. However, TV was the same way. When most of the reference relationships you have are mediated by tech, your reference point is going to be an artifact of that tech. Then as now, the medium is the message.
The other piece is that Clarke's adage about sufficiently advanced tech being indistinguishable from magic becomes newly interesting when you see powerful tech causing people to optimize their lives and moral choices to curry favour with it - in effect, worshipping the magic. In this context, social media may just be another form of primitive magical worship that subordinates the human spirit to bargaining with flighty and mercurial gods, with superstitions about its workings in place of a coherent ethical framework.
I think this is what Gaiman's "American Gods" was about.
I notice that almost all debates on these topics skip over the question of what moral framework will underpin the discussion.
It seems to me that a lot rides on that, for several reasons:
- There's a real chance that the participants don't actually agree on the unstated framework, and
- It makes it hard to actually argue from shared premises to a compelling conclusion.
So my impression is that most of these discussions end up with people arguing past each other, with little to show for it.
Am I missing something? I can't believe this is purely a modern phenomenon.
No. A lot of people who write about these subjects assume that everyone who reads them shares their moral framework. It is implicit, as you say. Not defining that framework explicitly means you won't get anywhere with a discussion, which is ironic, because if you try to do so in a discussion folks will tell you that won't get you anywhere :)
Whilst not really settling the debate about which moral framework to use, it does give a good lens by which to judge one. From the article:
There does not seem to be much reason to think that a single definition of morality will be applicable to all moral discussions. One reason for this is that “morality” seems to be used in two distinct broad senses: a descriptive sense and a normative sense. More particularly, the term “morality” can be used either
1. descriptively, to refer to certain codes of conduct put forward by a society or a group (such as a religion), or accepted by an individual for her own behavior, or
2. normatively, to refer to a code of conduct that, given specified conditions, would be put forward by all rational people.
As for people speaking past one another, I don't think it's ever been as dysfunctional as it is right now. It obviously correlates with widespread internet access, but the addressable cause is probably more complex. One thing that was different before, though, is that future generations that hadn't yet picked a side could still be swayed.
Mostly I think people underestimate how good at "bullshitting" we've become on average compared to previous generations (since that's how you win an internet argument), and how much that needs to be called out harshly -- we can't build anything on bullshitting. We're also not indoctrinating our young to have a deeper sense of responsibility for others and society in general; while that may be optimal for the individual, collectively we all lose.
I completely agree. However, I feel many of the problems we face as a society exist because far too much has already been built on it. It was once strategically deployed where necessary to maintain operations. Now it's just an industry all to itself.
Really, indoctrination is the wrong way to think about it, period. Children are not meant to be your puppets but your successors, who will always diverge from you.
I am also of the opinion that 99% of the time "the collective" is merely an excuse to manufacture consent by claiming to speak for everyone or to represent their interests. Communists are especially infamous for this "listen to the working class, but only if they agree with me" sleight of hand! I favor doing away with the "collective" bullcrap and thinking in terms of individuals and "mirrored standards and impacts". You don't want "bad people" detained indefinitely without a trial, because then there is nothing stopping the same from being applied to you. It is damn simple, but people fail that mirror test all the time.
My understanding is that ethics are how we treat other beings, and morals are rules we follow (think "moralizing").
I'm much more interested in the former.
I'm unsure how regular people use "ethics" and "morals", so in any conversation I tend to clarify what I mean by the terms so as to avoid confusion.
My recommendation: utilitarianism is the best ethical framework for life - in every regard. It has a proven track record (advocating for the abolition of slavery, women's rights, gay rights, animal rights, etc. -- all decades or even centuries before these became mainstream).
The invention of the knife, of the bow & arrow, of gunpowder, of rockets. Each of these technologies amplified the ways humans can harm and kill each other, as well as enabling other use cases. Their creation did not itself change morality.
Social media is a tool, no different from a bow & arrow. It can be used for many use cases. Some of them have negative effects. That does not make it moral or immoral.
Technology does not create nor affect morality, it is how the technology is used by individual users that does that.
His opinion was that such "deep ethical problems" have been around for millennia and it's unreasonable to expect anyone to "just solve" them. Therefore, self-driving cars will not have solutions to these fundamental issues and, as a consequence, society should not and probably will not accept self-driving cars.
I agree that we will not "just solve" such questions (i.e., arrive at a consensus across humanity) any time soon. However, I also think such questions are almost irrelevant, because the "conundrums" ethical philosophy discusses don't happen in practice. There is no need to "solve" these problems in order to use self-driving cars. We can (and will) slowly progress towards a consensus-ish on what we want (or, at least, can tolerate) the "moral choices" of self-driving cars to be in almost all situations that arise in practice. In fact, AI can be a great step forward in "practical morality", because an AI will actually do what it "considers" morally right.
Of course, there will be many difficult questions to answer. However, I think it's a fundamental error to just give up and adopt my philosopher friend's position. Moral qualms have not stopped technology in the past, and I find it implausible that society will somehow "not accept" it. As a philosopher, or even just a member of society, you have to see AI as a chance and an obligation to advance morality. It's pretty clear that human morality has been changing (I believe advancing) over the millennia. AI marks a transition where the moral questions of the past begin to make a difference in the real world, because what we set as moral standards has a much larger effect on what people and things do.
To make progress on this, we have to accept that it is a fool's errand to try "deriving" correct morality from "first principles" (Kant famously derived from absolute and eternal first principles that it's morally OK to kill "illegitimate" newborns as a means of birth control). Rather, it's an exercise in consensus building. Likewise, it is not reasonable to expect moral solutions to arrive at something "perfect and complete". Practically relevant morality will be fuzzy and ever-changing, just like judicial systems.
I am quite sad that so many philosophers and members of the public seem reluctant to accept this challenge of overhauling the millennia-old, stagnated academic debates. If they don't participate, engineers will "solve" these problems themselves, perhaps choosing ease of implementation over moral considerations.
Many philosophers and members of the public are sad that you insist on ignoring their input and are going to charge ahead, long-term consequences be damned. (I’m not taking sides here.)
> If they don't participate, engineers will "solve" these problems themselves, perhaps choosing ease of implementation over moral considerations.
It’s OK. Congresses, parliaments, and other policy making bodies, basing their decisions on populist emotional feedback loops, will regulate these solutions in ways that leave both the moralist and the solver confused and unhappy.
On the contrary, I strongly encourage them to give input. I criticise those who would rather give up, dismiss the questions as impossible to solve, and lament how technology has been destroying society for the last 2000 years, while self-driving cars start being used anyway, leading to suffering that could have been prevented by thinking things through and seeing them from more points of view.
Which inputs by philosophers or the public are being ignored?
How do they solve the problem?
My guess is, they randomly choose whom to run over in the heat of the moment.
Why isn't that a viable solution?
If AI drivers generally have fewer accidents and, in the few cases left, behave like humans, wouldn't that be a win?
A workaround thus far has been to abstract the problem into small enough pieces that ARE palatable to sign off on, as your comment shows. "Minimize the number of Grandmas run over" is a different framing than "Should we run over Grandma?".
Do self-driving engineers personally commit to be punished and suffer remorse for their algorithm’s choices? And before you say “it’s not fair, the CEO is at fault!” think about who’s writing the code. The CEO doesn’t make the self-driving car possible, the engineer does.
Yes. I think that's a big part of why it's not necessary to "completely, once and for all solve" ethical problems to automate things that might run into them. One could easily argue (and people of course have) that it's also immoral not to take measures that will reduce accidents, which I'm quite sure will happen with AI drivers in the not too distant future.
This is a bad question to open the article with, because the average Joe needs only two seconds to answer it in the affirmative. A society with plenty of material wealth facilitated by technology (electronics, finance, corporate law, patents, accepted practices, etc.) will consider it immoral to feed a dog anything other than dog food, while that would be a non-issue in a country where people themselves are starving because they are stuck with the wrong "societal tech" (e.g. Cuba or North Korea).
> One prominent example of how technology can impact morality is Facebook.
Ah, Facebook, obviously the most important piece of technology we have invented in the last twenty years. No, they are really not.