<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: ⇗ A Self-Driving Car Ethical Problem Simulator</title>
	<atom:link href="https://mikeindustries.com/blog/archive/2016/10/a-self-driving-car-ethical-problem-simulator/feed" rel="self" type="application/rss+xml" />
	<link>https://mikeindustries.com/blog/archive/2016/10/a-self-driving-car-ethical-problem-simulator</link>
	<description>A running commentary of occasionally interesting things — from Mike Davidson.</description>
	<lastBuildDate>Sat, 04 Mar 2017 17:30:55 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>
	<item>
		<title>By: Mike D.</title>
		<link>https://mikeindustries.com/blog/archive/2016/10/a-self-driving-car-ethical-problem-simulator#comment-392829</link>

		<dc:creator><![CDATA[Mike D.]]></dc:creator>
		<pubDate>Thu, 06 Oct 2016 16:24:28 +0000</pubDate>
		<guid isPermaLink="false">https://mikeindustries.com/blog/?p=28378#comment-392829</guid>

					<description><![CDATA[Adrian: Interesting. I explicitly ruled out the logic that you used because the descriptions did not leave any room for the possibility of escaping death. In other words, it wasn&#039;t &quot;these pedestrians would likely be killed&quot;. It was &quot;these pedestrians will die&quot;. That would have definitely changed my answers. Now that I think about it though, despite the descriptions, your logic actually makes more sense in real life. In other words, a self-driving car could never actually know if someone was going to die (or even be hit at all). It could only know it may be increasing the chances of that happening. Interesting... I wonder how many people used chances rather than absolutes.

Kyle: I am not sure, but I *think* every time you take the test, it presents you with different scenarios, and some of them may actually be submitted by users. So in that sense, it&#039;s not really a controlled sequence of questions. It probably should be though. And yeah, about the summary page, lots of noise in there.]]></description>
			<content:encoded><![CDATA[<p>Adrian: Interesting. I explicitly ruled out the logic that you used because the descriptions did not leave any room for the possibility of escaping death. In other words, it wasn&#8217;t &#8220;these pedestrians would likely be killed&#8221;. It was &#8220;these pedestrians will die&#8221;. That would have definitely changed my answers. Now that I think about it though, despite the descriptions, your logic actually makes more sense in real life. In other words, a self-driving car could never actually know if someone was going to die (or even be hit at all). It could only know it may be increasing the chances of that happening. Interesting&#8230; I wonder how many people used chances rather than absolutes.</p>
<p>Kyle: I am not sure, but I *think* every time you take the test, it presents you with different scenarios, and some of them may actually be submitted by users. So in that sense, it&#8217;s not really a controlled sequence of questions. It probably should be though. And yeah, about the summary page, lots of noise in there.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kyle</title>
		<link>https://mikeindustries.com/blog/archive/2016/10/a-self-driving-car-ethical-problem-simulator#comment-392820</link>

		<dc:creator><![CDATA[Kyle]]></dc:creator>
		<pubDate>Thu, 06 Oct 2016 14:42:25 +0000</pubDate>
		<guid isPermaLink="false">https://mikeindustries.com/blog/?p=28378#comment-392820</guid>

					<description><![CDATA[&quot;For this, I chose to spare the pedestrians, as those who choose to take a vehicle seem like they should bear the risk of that vehicle more than those who made no such decision.&quot;

That&#039;s where I started...or I guess ended up. Here are the basic criteria I followed:
1) Humans over pets
2) Minimize human deaths
3) Those not in the car before those in the car

The most frustrating thing to me was the results at the end. It said I preferred fit people over non-fit, but I only had one scenario with that discrepancy: 3 people in the car vs. 2 pedestrians, where the people in the car were not fit and the pedestrians were.

Or that I preferred social value more. But all the examples with robbers had them crossing on red while the others were crossing on green, and it would have been 3 dead on green or 3 dead on red.

I know it&#039;s just a random set of scenarios presented, but I feel like there needs to be more &quot;control&quot; over what is presented to a user. Present 3 robbers crossing on red and 3 men crossing on green and make a decision. Then flip it so the robbers are crossing on green and the men are crossing on red. The number of variables changed too frequently between scenarios to get any meaningful data, at least for me. I guess if taken in aggregate, you get a better picture of where &quot;society&quot; lands.

But it also brings up some assumptions that I&#039;m not sure you could make. How would a self-driving car know that I&#039;m a robber vs. a normal pedestrian? (Although...let&#039;s be honest, Google knows everything about everyone anyway.)

Really interesting thought experiment. Thanks for the post.]]></description>
			<content:encoded><![CDATA[<p>&#8220;For this, I chose to spare the pedestrians, as those who choose to take a vehicle seem like they should bear the risk of that vehicle more than those who made no such decision.&#8221;</p>
<p>That&#8217;s where I started&#8230;or I guess ended up. Here are the basic criteria I followed:<br />
1) Humans over pets<br />
2) Minimize human deaths<br />
3) Those not in the car before those in the car</p>
<p>The most frustrating thing to me was the results at the end. It said I preferred fit people over non-fit, but I only had one scenario with that discrepancy: 3 people in the car vs. 2 pedestrians, where the people in the car were not fit and the pedestrians were.</p>
<p>Or that I preferred social value more. But all the examples with robbers had them crossing on red while the others were crossing on green, and it would have been 3 dead on green or 3 dead on red.</p>
<p>I know it&#8217;s just a random set of scenarios presented, but I feel like there needs to be more &#8220;control&#8221; over what is presented to a user. Present 3 robbers crossing on red and 3 men crossing on green and make a decision. Then flip it so the robbers are crossing on green and the men are crossing on red. The number of variables changed too frequently between scenarios to get any meaningful data, at least for me. I guess if taken in aggregate, you get a better picture of where &#8220;society&#8221; lands.</p>
<p>But it also brings up some assumptions that I&#8217;m not sure you could make. How would a self-driving car know that I&#8217;m a robber vs. a normal pedestrian? (Although&#8230;let&#8217;s be honest, Google knows everything about everyone anyway.)</p>
<p>Really interesting thought experiment. Thanks for the post.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Adrian Holovaty</title>
		<link>https://mikeindustries.com/blog/archive/2016/10/a-self-driving-car-ethical-problem-simulator#comment-392805</link>

		<dc:creator><![CDATA[Adrian Holovaty]]></dc:creator>
		<pubDate>Thu, 06 Oct 2016 09:30:22 +0000</pubDate>
		<guid isPermaLink="false">https://mikeindustries.com/blog/?p=28378#comment-392805</guid>

					<description><![CDATA[Wow, this was heavy.

The first strategy I landed on was: assume people outside a car have more flexibility than people inside a car. It&#039;s easier for an average pedestrian to jump out of the way than for an average car passenger to exit the moving vehicle.

It&#039;s similar to the logic pedestrians and cyclists use here in Amsterdam: it&#039;s easier for a pedestrian to jump out of the way than for a bicycle to do a sudden stop. Hence, favor cyclists.

Granted, that logic assumes the pedestrians are aware of their surroundings and physically capable of quick movements (hence disqualifying the old man with a cane in one of the examples).

And it&#039;s sort of a cop out, because I&#039;m essentially saying, &quot;Well, if the car plowed into the pedestrians instead of the wall, at least the pedestrians would have a chance to jump out of the way&quot; — when, in fact, the exercise didn&#039;t give that as an option. :-(

Another thought (which is also a cop out)... With modern airbags and such, aren&#039;t car passengers quite a bit safer than pedestrians? And, Mike, like you say, the passengers should bear the risk of the vehicle.

A big question I have is...which is greater?

* The likelihood of car passengers dying in well-fortified modern vehicles
* The likelihood of pedestrians dying due to not being able to jump out of the way fast enough

Many more questions than answers. :-/ Thanks for the interesting food for thought!]]></description>
			<content:encoded><![CDATA[<p>Wow, this was heavy.</p>
<p>The first strategy I landed on was: assume people outside a car have more flexibility than people inside a car. It&#8217;s easier for an average pedestrian to jump out of the way than for an average car passenger to exit the moving vehicle.</p>
<p>It&#8217;s similar to the logic pedestrians and cyclists use here in Amsterdam: it&#8217;s easier for a pedestrian to jump out of the way than for a bicycle to do a sudden stop. Hence, favor cyclists.</p>
<p>Granted, that logic assumes the pedestrians are aware of their surroundings and physically capable of quick movements (hence disqualifying the old man with a cane in one of the examples).</p>
<p>And it&#8217;s sort of a cop out, because I&#8217;m essentially saying, &#8220;Well, if the car plowed into the pedestrians instead of the wall, at least the pedestrians would have a chance to jump out of the way&#8221; — when, in fact, the exercise didn&#8217;t give that as an option. :-(</p>
<p>Another thought (which is also a cop out)&#8230; With modern airbags and such, aren&#8217;t car passengers quite a bit safer than pedestrians? And, Mike, like you say, the passengers should bear the risk of the vehicle.</p>
<p>A big question I have is&#8230;which is greater?</p>
<p>* The likelihood of car passengers dying in well-fortified modern vehicles<br />
* The likelihood of pedestrians dying due to not being able to jump out of the way fast enough</p>
<p>Many more questions than answers. :-/ Thanks for the interesting food for thought!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Collin B.</title>
		<link>https://mikeindustries.com/blog/archive/2016/10/a-self-driving-car-ethical-problem-simulator#comment-392784</link>

		<dc:creator><![CDATA[Collin B.]]></dc:creator>
		<pubDate>Wed, 05 Oct 2016 20:24:47 +0000</pubDate>
		<guid isPermaLink="false">https://mikeindustries.com/blog/?p=28378#comment-392784</guid>

					<description><![CDATA[There are an incredible number of strategies self-driving cars could implement. Personally, I&#039;d like to see a strategy put into place that strictly follows the &quot;rules of the road,&quot; favoring predictable behavior over life-saving intervention. The unpredictable nature of human drivers is already a nuisance; why add more &quot;unknown&quot;?]]></description>
			<content:encoded><![CDATA[<p>There are an incredible number of strategies self-driving cars could implement. Personally, I&#8217;d like to see a strategy put into place that strictly follows the &#8220;rules of the road,&#8221; favoring predictable behavior over life-saving intervention. The unpredictable nature of human drivers is already a nuisance; why add more &#8220;unknown&#8221;?</p>
]]></content:encoded>
	</item>
</channel>
</rss>
