Systematizing empathy: The trolley problem, the boy in the lake and autonomous vehicles

We ended our introduction to empathy with a quick mention of Simon Baron-Cohen's Empathy Quotient test and, more specifically, my results, which are not much higher than those of someone with high-functioning autism.

People on the low end of the EQ spectrum, Baron-Cohen notes in The Science of Evil, tend to systematize moral codes, rather than letting empathy decide their actions.

There are many types of moral systems running around out there, and many people subscribe to at least parts of some: followers of most religions, for example, and adherents of "humanism," which is essentially an atheistic religion with its own moral code. Many nations write moral codes into their founding documents and codes of law as well. There will be a longer post on this, so we won't belabor it here.

What does a "systematized moral code" look like? Why is having one important?

Let's answer the second question first.

Andrew Yang, who is running for president of the U.S. in 2020, notes that in the coming years (not decades, years), we're going to see not just a continued drop in manufacturing jobs thanks to automation, but also self-driving trucks taking over a people-heavy industry.

Steven Kotler told Duncan Trussell that the new book he's writing with Peter Diamandis predicts that by 2023, Los Angeles will have self-driving aerial Ubers available.

Let's slow that down and play it back. Within the next four years, if you're in Los Angeles and need to get somewhere, you might get there in a flying vehicle with no driver.

It won't be long until that's everywhere. Flying autonomous vehicles, getting you where you want to go, summoned with a touch of a button on your phone. Or by just talking to your phone or watch or whatever new smart devices are coming, of course.

Grocery store chain Kroger just wrapped up a test run in Scottsdale, Arizona, during which over 2,000 grocery deliveries were made in autonomous vehicles; the company was happy enough with the results that it's trying the program in a much bigger city next: Houston.

Self-driving vehicles are going to be everywhere soon, and guess what? Cars don't have empathy. We're going to have to program them with a moral code, and because we're dealing with a programmable instrument, that code has to be something a computer can understand.

Some of the items in that moral code are going to be, essentially, self-evident. Don't murder (separate from don't kill, because at some point, the car will have to "make a decision" as to who dies; we'll get there soon). Don't rape. Definitely don't rape children. If you're running out of fuel, get to a fuel station, rather than siphoning fuel from a nearby vehicle.
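
To make that concrete, here's a minimal sketch in Python of what one of those "self-evident" rules might look like once it has to live in software. Everything in it (the Vehicle class, the thresholds, the function name) is my own hypothetical illustration, not anyone's actual autonomous-driving code.

```python
# A minimal sketch, assuming a hypothetical Vehicle class and invented thresholds,
# of how one "self-evident" rule might be systematized in software.

from dataclasses import dataclass


@dataclass
class Vehicle:
    fuel_level: float              # fraction of a full tank, 0.0 to 1.0
    range_km: float                # how far the remaining fuel will carry us
    distance_to_station_km: float  # distance to the nearest fuel station


def refuel_decision(v: Vehicle) -> str:
    """Encode the rule: get to a fuel station rather than siphoning from a neighbor."""
    if v.fuel_level > 0.25:
        return "continue route"
    if v.distance_to_station_km <= v.range_km:
        return "divert to fuel station"
    # The rule is absolute: even if stranded, the vehicle never chooses to siphon.
    return "stop and request assistance"


print(refuel_decision(Vehicle(fuel_level=0.1, range_km=30, distance_to_station_km=12)))
# -> "divert to fuel station"
```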

But what of the more intricate choices?

Let's start on maybe the easiest of them: Peter Singer's drowning child example.

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
 
I then ask the students: do you have any obligation to rescue the child?

Unanimously, they say yes.

Singer then widens the scope, asking about a child far away, in another country. If that child was in danger of death and it would be of small cost and no danger to you, do you still have that obligation?

While they generally say yes, few of them actually donate money to aid organizations that would help those children. This is a problem Paul Bloom notes in Against Empathy: We're very good at empathizing with a specific person in front of us or whom we read about in the news, but we're not at all good at empathizing with a hypothetical person or a distant group of people (this is also why we have an us-vs.-them attitude with things like politics: it's easy to talk about "those people," but we're much more polite and empathetic when we're actually speaking to one of them).

Let's make it more complicated, as The New York Times Magazine did in 2015, when it asked readers: If you could go back in time and kill Hitler as a baby, would you do it?

What did you say? When the magazine polled its readers, they were split: plenty said yes, plenty said no, and plenty weren't sure.

Now, that's not exactly the same as the drowning child, right? And choosing not to save baby Hitler would be different from actively killing baby Hitler, wouldn't it? Does it matter that we're talking about a baby?

First, Vox takes on the time-travel problem:

Given certain assumptions, this isn't a hard question. Assume that going back in time merely eliminates Hitler, and that the sole effect of that is that the Nazi Party lacks a charismatic leader and never takes power in Germany, and World War II and the Holocaust are averted, and nothing worse than World War II transpires in this alternate reality, and there are no unintended negative consequences of time travel. Then the question is reduced to, "Is it ethical to kill one person to save 40-plus million people?" That's pretty easy. You don't have to be a die-hard utilitarian to think one baby is an acceptable price to pay to save tens of millions of lives.

But, Vox points out, going back in time and changing things isn't that simple. Many works of popular fiction tackle this, but one that might be overlooked is Stephen King's novel 11/22/63, in which our protagonist determines Lee Harvey Oswald acted alone in killing JFK. He goes back and stops the assassination, only to return to the present and find that America is in the hands of white supremacists.

The Atlantic points out that this is really just a spin on the trolley problem.

Ah, the trolley problem. This is going to be the thing for autonomous cars.

Wikipedia has a good collection of trolley problem variations, but let's start with the basic one. I'll put this in my own words.

Imagine you are walking along some tracks, and ahead, you see five workers laboring away. They are working on a narrow section of track with sides too steep to climb; they would have had to walk in from some distance away, so there is no quick way out.

Then you notice an out-of-control trolley barreling down the track toward them. If you do nothing, they will surely die.

Then you notice you're walking by a switch, and you see ahead a split in the track. On the diverging stretch of track, there is one person facing the same predicament as the five workers: if you throw the switch and divert the trolley, that one person will surely die.

Do you throw the switch?

Many people say yes, they would throw the switch — they would kill one person to save five. On the other hand, some people say while they would be sad to see five people die instead of one, their inaction spares them from making the conscious decision to actively end the one person's life.
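
Those two answers are, in effect, two different systems, and a machine would need one of them written down. Here's a toy Python sketch of both; the function names and the numbers are invented purely for illustration.

```python
# A toy sketch of the two intuitions above: "utilitarian" throws the switch whenever
# it saves more lives on net; "no_active_harm" never takes an action that directly
# causes a death.

def utilitarian(deaths_if_inaction: int, deaths_if_switch: int) -> bool:
    """Throw the switch whenever doing so results in fewer deaths overall."""
    return deaths_if_switch < deaths_if_inaction


def no_active_harm(deaths_if_inaction: int, deaths_if_switch: int) -> bool:
    """Never throw the switch: refusing to act means never directly causing a death."""
    return False  # the arguments deliberately play no role in this system


print(utilitarian(deaths_if_inaction=5, deaths_if_switch=1))     # True  -> throw it
print(no_active_harm(deaths_if_inaction=5, deaths_if_switch=1))  # False -> let it run
```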

Now let's change it up a little. Instead of walking near a switch, you're on a footbridge above the tracks, and it just so happens you're walking next to a man large enough that pushing him onto the track would slow the trolley and give the five laborers a chance to get clear. Do you push him over the rail?

If you already didn't want throwing the switch on your conscience, you're surely not throwing someone over the rail. But if you would throw the switch, would your conscience let you throw the man over the rail? Does it get easier if you know he's a serial killer (like the baby Hitler example)? What if it were one of the world's great people (like if the Dalai Lama got very fat)? What if you knew that the five people on the tracks weren't laborers, but escaped convicts?

There are some good conversations over here, as well.

What does this have to do with autonomous vehicles, manufacturing robots, and the like?

In 2016, the driver of a Tesla operating with Autopilot engaged died when the car failed to detect a tractor-trailer crossing the highway in front of it.

In 2018, a self-driving Uber killed a pedestrian; it was later determined the pedestrian entered the roadway in such a way that a human driver wouldn't have been able to brake in time to avoid the accident.

Autonomous vehicles are going to get in accidents, and they're going to cause injuries and deaths. While those accidents will be far fewer than accidents involving human drivers, they'll come under much deeper scrutiny.

And there are going to be times when autonomous vehicles are going to have to make trolley problem-like decisions.

Given the choice, human drivers will generally choose to save their own lives. If, driving down the road, you must decide between hitting a pedestrian (or a group of pedestrians) and driving off an overpass, you'll probably hit the pedestrian or the group.

An autonomous car, however, might see five pedestrians and make the decision that the lives of those five pedestrians are more important than the life of the single passenger in the vehicle — or even two or three passengers, particularly if it knows that the overpass is over, say, a ditch or creek.

If it's over a freeway, it might make a different decision.

If the option is to drive into a tree instead of off an overpass, that seems like a different decision — hitting the pedestrian would surely kill the pedestrian, but hitting the tree might only injure the passenger.

What if one or more of the passengers are small children?

Would the car be able to distinguish, say, a group of elderly pedestrians? Would it decide based on (a) the principle that elders are to be respected and saved, or (b) the calculation that the elders have lived their lives and the people in the car have more potential?

Would an autonomous vehicle have access to good facial recognition? If a pedestrian appeared to be a former convict, would it value that pedestrian's life less? If the pedestrian were a major societal contributor, would her life be valued more highly than the driver's?

What about in the case of the Kroger delivery system, if there are no people, only cargo? What cargo is worth a person's life? Obviously, groceries are replaceable. So is clothing. What about a high-value cargo, like high-priced jewelry or precious artwork? What about hazardous materials?
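
One way to picture the machinery behind all of those questions is as a scoring function over the handful of maneuvers available in the moment, where the weights encode exactly the judgments we've been circling. Here's a hedged Python sketch; the Option class, the weights and every number in it are invented for illustration, not drawn from any real system.

```python
# A hypothetical sketch of the kind of "least expected harm" scoring an autonomous
# vehicle might run when every option is bad. All classes, weights and numbers are
# invented; real systems are far more complex, and the weights themselves are the
# hard moral problem.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    prob_pedestrian_death: float   # estimated probability, 0.0 to 1.0
    prob_passenger_death: float
    cargo_loss_value: float        # in dollars


def expected_harm(o: Option,
                  pedestrian_weight: float = 1.0,
                  passenger_weight: float = 1.0,
                  dollars_per_life: float = 10_000_000) -> float:
    """Score an option; lower is 'better'. The weights ARE the moral code."""
    return (o.prob_pedestrian_death * pedestrian_weight
            + o.prob_passenger_death * passenger_weight
            + o.cargo_loss_value / dollars_per_life)


options = [
    Option("hit pedestrian", prob_pedestrian_death=0.9, prob_passenger_death=0.0, cargo_loss_value=0),
    Option("swerve into tree", prob_pedestrian_death=0.0, prob_passenger_death=0.2, cargo_loss_value=50_000),
    Option("drive off overpass", prob_pedestrian_death=0.0, prob_passenger_death=0.7, cargo_loss_value=50_000),
]

best = min(options, key=expected_harm)
print(best.name)   # "swerve into tree" with these invented numbers
```

Note that the sketch doesn't answer any of the questions above; it only shows where the answers would have to live: in the weights.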


Humans, of course, face a different set of problems. If we program a moral code into a machine, it has to be absolute. Machines can't say, "I know we made this rule, but I think I'll make an exception here." People are more nuanced than that. Systematizing a code is a way to say, "In this situation, I should do this." Sometimes, though, we have to say, "Let's make an exception in this case."

We'll look next time at moral systems like religion and government, and how free will plays a role in empathy and moral systems.
