{"id":28378,"date":"2016-10-05T10:38:03","date_gmt":"2016-10-05T17:38:03","guid":{"rendered":"https:\/\/mikeindustries.com\/blog\/?p=28378"},"modified":"2017-03-04T09:30:55","modified_gmt":"2017-03-04T17:30:55","slug":"a-self-driving-car-ethical-problem-simulator","status":"publish","type":"post","link":"https:\/\/mikeindustries.com\/blog\/archive\/2016\/10\/a-self-driving-car-ethical-problem-simulator","title":{"rendered":"\u21d7 A Self-Driving Car Ethical Problem Simulator"},"content":{"rendered":"<p><a href=\"http:\/\/moralmachine.mit.edu\">A Self-Driving Car Ethical Problem Simulator<\/a><\/p>\n<p>Via <a href=\"http:\/\/kottke.org\/16\/10\/solve-trolley-problem-scenarios-with-mits-moral-machine\">Jason Kottke<\/a> comes this thought-provoking exercise challenging you to apply your own morality to difficult <a href=\"https:\/\/en.wikipedia.org\/wiki\/Trolley_problem\">&#8220;trolley problem&#8221;<\/a> scenarios that self-driving cars will have to deal with the moment they hit the streets. In other words, when a self-driving car must make a decision to kill (either its own passengers or pedestrians), what criteria should it use to make that decision?<\/p>\n<p>Please go through the exercise yourself before reading any more of this post, as I don&#8217;t want to poison your answers with my own.<\/p>\n<p>Ok, all done?<\/p>\n<p>There are no objectively right answers to this problem, but my strategy was as follows: First, I disregarded all demographic differences between humans. I don&#8217;t feel comfortable assigning different values to men, women, the elderly, kids, athletes, criminals, obese people, etc. There was one question where I did have to use this as a tie-breaker, but that was it&#8230; and it still didn&#8217;t feel good. Then, I optimized for saving people who were doing nothing wrong at the time. In other words, pedestrians who crossed on a Don&#8217;t Walk signal were sacrificed pretty consistently. 
Then I optimized for the greatest number of human lives saved (pets were toast&#8230; sorry pets). The hardest question came down to a scenario where you had to pick between killing four innocent people in the car vs. four innocent pedestrians. For this, I chose to spare the pedestrians, as those who choose to take a vehicle seem like they should bear the risk of that vehicle more than those who made no such decision.<\/p>\n<p>The summary page at the end is interesting, but it can also give false impressions. For instance, even though I explicitly disregarded demographics, it showed me as significantly preferring to save people who were &#8220;fit&#8221; and people who were &#8220;older&#8221;. Depending on your strategy, some of these conclusions may be enlightening, and some will just be noise from a small data set. Also, don&#8217;t forget to design some of your own. <a href=\"http:\/\/moralmachine.mit.edu\/browse\/-875865283\">Here is the hardest one I could create<\/a>, based on my own decision-making criteria.<\/p>\n<p>Tough stuff, but it&#8217;s good to get people acclimated to these dilemmas now, because although no technology can eliminate traffic deaths, self-driving cars will probably greatly reduce them. Curious to hear other strategies if you have them. <a href=\"http:\/\/kottke.org\/16\/10\/solve-trolley-problem-scenarios-with-mits-moral-machine\">Jason&#8217;s<\/a>, for instance, were different from mine. Also, can I just say that I love the idea of pets &#8220;flouting the law by crossing on a red signal&#8221;?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A Self-Driving Car Ethical Problem Simulator Via Jason Kottke comes this thought-provoking exercise challenging you to apply your own morality to difficult &#8220;trolley problem&#8221; scenarios that self-driving cars will have to deal with the moment they hit the streets. 
In other words, when a self-driving car must make a decision to kill (either its own [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[495,500,39],"tags":[494],"class_list":["post-28378","post","type-post","status-publish","format-standard","hentry","category-links","category-society","category-technology","tag-instapaper"],"_links":{"self":[{"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/posts\/28378","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/comments?post=28378"}],"version-history":[{"count":0,"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/posts\/28378\/revisions"}],"wp:attachment":[{"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/media?parent=28378"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/categories?post=28378"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mikeindustries.com\/blog\/wp-json\/wp\/v2\/tags?post=28378"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}