⇗ A Self-Driving Car Ethical Problem Simulator

Via Jason Kottke comes this thought-provoking exercise challenging you to apply your own morality to difficult "trolley problem" scenarios that self-driving cars will have to deal with the moment they hit the streets. In other words, when a self-driving car must make a decision to kill (either its own passengers or pedestrians), what criteria should it use to make that decision?

Please go through the exercise yourself before reading any more of this post, as I don't want to poison your answers with my own.

Ok, all done?

There are no objectively right answers to this problem, but my strategy was as follows: First, I disregarded all demographic differences between humans. I don't feel comfortable assigning different values to men, women, the elderly, kids, athletes, criminals, obese people, etc. There was one question where I did have to use this as a tie-breaker, but that was it... and it still didn't feel good. Then, I optimized for saving people who were doing nothing wrong at the time. In other words, pedestrians who crossed on a Don't Walk signal were sacrificed pretty consistently. Then I optimized for the greatest number of human lives saved (pets were toast... sorry pets). The hardest question came down to a scenario where you had to choose between killing four innocent people in the car and killing four innocent pedestrians. For this, I chose to spare the pedestrians, as those who choose to take a vehicle seem like they should bear the risk of that vehicle more than those who made no such decision.
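If it helps to see that strategy spelled out, it boils down to a short ordered list of rules. Here's a rough sketch in Python; the Group structure and every field on it are invented purely for illustration, and have nothing to do with how the simulator (or any real car) actually works:

```python
from dataclasses import dataclass

@dataclass
class Group:
    humans: int        # human lives at stake in this group
    lawful: bool       # were they obeying the signal / riding legally?
    in_vehicle: bool   # passengers rather than pedestrians?

def choose_group_to_spare(a: Group, b: Group) -> Group:
    """Pick which group the car should try to spare, in priority order."""
    # 1. Demographics are ignored entirely -- there are simply no fields for
    #    age, fitness, profession, or "social value" in this model.
    # 2. Prefer the group that was doing nothing wrong at the time.
    if a.lawful != b.lawful:
        return a if a.lawful else b
    # 3. Prefer the greater number of human lives (pets don't count... sorry pets).
    if a.humans != b.humans:
        return a if a.humans > b.humans else b
    # 4. Tie-breaker: spare pedestrians over passengers, since passengers chose
    #    to take on the vehicle's risk.
    if a.in_vehicle != b.in_vehicle:
        return b if a.in_vehicle else a
    return a  # still a dead heat; the simulator forces a choice either way
```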

The summary page at the end is interesting, but it can also give false impressions. For instance, even though I explicitly disregarded demographics, it showed me as significantly preferring to save people who were "fit" and people who were "older". Depending on your strategy, some of these conclusions may be enlightening, and some will just be noise from a small data set. Also, don't forget to design some of your own. Here is the hardest one I could create, based on my own decision-making criteria.

Tough stuff, but it's good to get people acclimated to these dilemmas now, because although no technology can eliminate traffic deaths, self-driving cars will probably greatly reduce them. Curious to hear other strategies if you have them. Jason's, for instance, were different from mine. Also, can I just say that I love the idea of pets "flouting the law by crossing on a red signal"?


4 comments on “⇗ A Self-Driving Car Ethical Problem Simulator”
  1. Collin B. says:

    There are an incredible number of strategies self-driving cars could implement. Personally, I’d like to see a strategy put into place that strictly follows the “rules of the road,” favoring predictable behavior over life-saving intervention. The unpredictable nature of human drivers is already a nuisance; why add more “unknown”?

  2. Adrian says:

    Wow, this was heavy.

    The first strategy I landed on was: assume people outside a car have more flexibility than people inside a car. It’s easier for an average pedestrian to jump out of the way than for an average car passenger to exit the moving vehicle.

    It’s similar to the logic pedestrians and cyclists use here in Amsterdam: it’s easier for a pedestrian to jump out of the way than for a bicycle to do a sudden stop. Hence, favor cyclists.

    Granted, that logic assumes the pedestrians are aware of their surroundings and physically capable of quick movements (hence disqualifying the old man with a cane in one of the examples).

    And it’s sort of a cop out, because I’m essentially saying, “Well, if the car plowed into the pedestrians instead of the wall, at least the pedestrians would have a chance to jump out of the way” — when, in fact, the exercise didn’t give that as an option. :-(

    Another thought (which is also a cop out)… With modern airbags and such, aren’t car passengers quite a bit safer than pedestrians? And, Mike, like you say, the passengers should bear the risk of the vehicle.

    A big question I have is…which is greater?

    * The likelihood of car passengers dying in well-fortified modern vehicles
    * The likelihood of pedestrians dying due to not being able to jump out of the way fast enough

    Many more questions than answers. :-/ Thanks for the interesting food for thought!

  3. Kyle says:

    “For this, I chose to spare the pedestrians, as those who choose to take a vehicle seem like they should bear the risk of that vehicle more than those who made no such decision.”

    That’s where I started…or I guess ended up. Here are the basic criteria I followed:
    1) Humans over pets
    2) Minimize human deaths
    3) Those not in the car before those in the car

    The most frustrating thing to me was the results at the end. It said I preferred fit people over non-fit, but I only had one scenario where there was a discrepancy: it was 3 people in the car vs. 2 pedestrians, and the people in the car were not fit while the pedestrians were.

    Or that I preferred social value more. But all the examples with robbers had them crossing on red while the others were crossing on green, and either way it would have been 3 dead on green or 3 dead on red.

    I know it’s just a random set of scenarios presented, but I feel like there needs to be more “control” over what is presented to a user. Present 3 robbers crossing on red and 3 men crossing on green and make a decision. Then flip it so the robbers are crossing on green and the men are crossing on red. For me, the number of variables changed too frequently between scenarios to get any meaningful data. I guess if taken in aggregate, you get a better picture of where “society” lands.

    But it also brings up some assumptions that I’m not sure you could make. How would a self-driving car know that I’m a robber vs. a normal pedestrian? (Although…let’s be honest, Google knows everything about everyone anyways.)

    Really interesting thought experiment. Thanks for the post.

  4. Mike D. says:

    Adrian: Interesting. I explicitly ruled out the logic that you used because the descriptions did not leave any room for the possibility of escaping death. In other words, it wasn’t “these pedestrians would likely be killed”. It was “these pedestrians will die”. That would have definitely changed my answers. Now that I think about it though, despite the descriptions, your logic actually makes more sense in real life. In other words, a self-driving car could never actually know if someone was going to die (or even be hit at all). It could only know it may be increasing the chances of that happening. Interesting… I wonder how many people used chances rather than absolutes.
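    Framed that way, the choice becomes a comparison of expected deaths rather than certain ones. A toy sketch of what I mean (every number here is invented, and a real system’s estimates would obviously be far messier):

    ```python
    # Comparing expected fatalities instead of certain ones. The per-person
    # probabilities below are completely made up for illustration.

    def expected_fatalities(per_person_risk: list[float]) -> float:
        """Expected number of deaths, given each person's chance of dying."""
        return sum(per_person_risk)

    # Swerve into the barrier: four belted passengers, each maybe 30% likely to die.
    # Stay the course: four pedestrians, each maybe 80% likely to die.
    swerve = expected_fatalities([0.3, 0.3, 0.3, 0.3])  # 1.2 expected deaths
    stay = expected_fatalities([0.8, 0.8, 0.8, 0.8])    # 3.2 expected deaths
    print("swerve" if swerve < stay else "stay")        # "swerve"
    ```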

    Kyle: I am not sure, but I *think* every time you take the test, it presents you different scenarios, and some of them may actually be submitted by users. So in that sense, it’s not really a controlled sequence of questions. It probably should be though. And yeah, about the summary page, lots of noise in there.
