The Problem with the Trolley Problem.

Let’s stop using it in discussions about autonomous vehicles.

Jairo José Niño Pérez
Nov 16, 2018
Photo by Xan Griffin on Unsplash

The publication of an article in the November issue of the journal Nature sparked a conversation about a long-known moral problem.

You may have heard of it. The trolley problem, as it is known, depicts a situation in which a trolley is speeding out of control, on its way to kill five people. You stand next to a lever that can switch the trolley onto another track, where it will kill one person instead of five. Would you pull it?

The article in question is the work of researchers at the MIT Media Lab who, back in 2014, devised an experiment called the Moral Machine. In it, people from around the world decide what to do in different variations of the trolley problem.

The experiment measured nine attributes (see the sketch right after the list):

  • Intervention: Keep the course (inaction) / swerve (action)
  • Number of characters: Save more lives / fewer lives
  • Relation to the autonomous vehicle: Save passengers / pedestrians
  • Gender: Save men / women
  • Species: Save humans / pets
  • Age: Save the young / the elderly
  • Law: Save the lawful / the jaywalking
  • Social status: Save the rich / the poor
  • Fitness: Save the fit / the less fit
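To make the design concrete, here is a minimal sketch, in Python, of how one side of such a binary scenario might be represented. Every field name and value is my own invention for illustration; the paper’s actual data schema is surely different.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    """One side of a Moral Machine-style dilemma: who the car spares
    if this option is chosen, described by the nine attributes above."""
    intervention: str   # "stay the course" (inaction) or "swerve" (action)
    lives_saved: int    # number of characters spared
    spares: str         # "passengers" or "pedestrians"
    gender: str         # "men" or "women"
    species: str        # "humans" or "pets"
    age: str            # "young" or "elderly"
    law: str            # "lawful" or "jaywalking"
    status: str         # "rich" or "poor"
    fitness: str        # "fit" or "less fit"

# A single trial presents two such outcomes; the participant clicks one.
option_a = Outcome("stay the course", 5, "pedestrians", "women",
                   "humans", "young", "lawful", "poor", "fit")
option_b = Outcome("swerve", 1, "passengers", "men",
                   "humans", "elderly", "jaywalking", "rich", "less fit")
chosen = option_a  # recorded as the respondent's answer for this trial
```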

I find the experiment itself amazing. Gamifying the study and running it online gave researchers access to a sample that would otherwise have been almost impossible to gather.

Yes, self-selection of participants comes at a price (like over-representing some groups and under-representing others). But more than 450,000 individuals not only took part in the experiment but also agreed to complete a socio-demographic survey, which helped the researchers explore the effect of individual variations, cultural structures, and country-level characteristics on the preferences.

My key takeaways regarding the results are:

1. Individual variations (age, education, gender, income, political views, religious views) are theoretically important but add little predictive information. Religion and gender show the most significant effects: for instance, although all groups are more inclined to spare women over men, male respondents are slightly less so (a difference of 0.06; see the toy tally below).

2. Cultural clusters: The authors found three moral clusters of countries with a remarkable resemblance to the Inglehart-Welzel cultural map (a famous scatter plot based on the World Values Survey, but I digress). This works as a consistency check for the Moral Machine results: geographical and cultural proximity had a significant impact on respondents’ preferences.

3. Country-level predictors: In line with the above, and this is my favorite finding, cultural and economic differences between countries correlated strongly with participants’ preferences. Countries with weaker institutions and lower GDP per capita were more inclined to spare unlawful jaywalkers, and collectivistic cultures showed a weaker preference for sparing the young over the elderly. The latter group includes Eastern countries where older members of the community enjoy a higher social standing.

From Wikipedia: a recreation of the Inglehart–Welzel Cultural Map of the World, created by political scientists Ronald Inglehart and Christian Welzel from World Values Survey data (wave 4, finalised 2004).
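To fix intuitions about what a number like 0.06 means here, below is a toy tally with invented data. This is not the paper’s estimation method (the authors use a conjoint-analysis framework); it only illustrates the kind of gap in choice rates such a figure describes.

```python
# Invented responses: (respondent_gender, chose_to_spare_women)
responses = [
    ("male", True), ("male", False), ("male", True), ("male", False),
    ("female", True), ("female", True), ("female", True), ("female", False),
]

def spare_rate(group: str) -> float:
    """Share of a respondent group that chose to spare the women."""
    votes = [spared for who, spared in responses if who == group]
    return sum(votes) / len(votes)

gap = spare_rate("female") - spare_rate("male")
print(f"female: {spare_rate('female'):.2f}, "
      f"male: {spare_rate('male'):.2f}, gap: {gap:.2f}")
# female: 0.75, male: 0.50, gap: 0.25 (toy numbers, not the paper's)
```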

Again, I encourage you to read the article, as I believe the whole experiment, the methodology, and the findings are valuable as an exploration of moral preferences across human groups. I think it is fair to say that before addressing the issue in the title of this post: I believe we must not use this approach in conversations about autonomous vehicles. It puts an unfair, unnecessary, and inconvenient burden on the technology, on those working on its development, and on those who could eventually benefit from it.

Would / Should.

The trolley problem asks whether a person would take action to alter an outcome, based on a moral assessment of the situation. I’d argue it is a false equivalence to treat that as the same question a programmer answers when coding what an autonomous vehicle should do.

Bias in machine learning is an undeniable fact, one that we all should be aware of and one that tech companies must make continuous efforts to prevent. But that has nothing to do with trying to force moral values and ethical considerations into a system.

I am not saying that principles should not be in place. My point is that an assessment of the value of human life must not be one of them, because, let me spare you the time, there is no unbiased way to do that. And having policymakers work on one is something I find deeply uncomfortable.

The whole point of the trolley problem is to confront the subject with a dilemma. The work of policymakers is to make sure that companies building autonomous systems minimize their products’ susceptibility to such dilemmas.

NO, IT DID NOT. One of the worst headlines I found on the topic. Source: ZDNet.

A misunderstanding of the autonomous vehicle as a product lies at the bottom of these flawed approaches. This piece by Benedict Evans (a16z) explains it very well: people outside the industry are trying to define the outcome first and engineer the path afterward.

The truth is that autonomy itself is yet to be defined (which is why some automakers prefer the term self-driving), and when it is, the definition will have to consider what Evans calls the “what” and the “where”.

[…] it’s easy to ask ‘when can we have autonomy?’ (and everyone does) but the ‘when’ depends on the ‘where’ and the ‘what’, and these are a matter of incremental deployment of lots of different models in different places over time.

Levels of Autonomy: Source

Products with different levels of autonomy will work in different contexts with different configurations. As more of these vehicles enter the market, the space for unavoidable collisions will shrink, and we will move away from the more than one million people who die every year in car accidents.
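One way to see why “when can we have autonomy?” is ill-posed: a deployment is an automation level plus an operational design domain, not a level alone. Here is a minimal sketch; the six SAE J3016 levels are real, but the example deployment below is entirely invented.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0       # human driver does everything
    DRIVER_ASSISTANCE = 1   # one assist feature, e.g. adaptive cruise control
    PARTIAL = 2             # steering and speed together, driver supervises
    CONDITIONAL = 3         # system drives, driver must take over on request
    HIGH = 4                # no driver needed, but only inside its domain
    FULL = 5                # no driver needed, anywhere

# Evans's "where" and "what" in data-structure form: the level only means
# something together with its operational design domain (values invented).
deployment = {
    "level": SAELevel.HIGH,
    "where": "geofenced, pre-mapped urban core",    # hypothetical "where"
    "what": "ride-hailing, daytime, fair weather",  # hypothetical "what"
}
```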

Similar to how vaccination within a population eventually reaches a threshold at which even unvaccinated individuals are protected from disease, the gains from adopting self-driving vehicles will distribute across society, keeping pedestrians and human drivers safe.
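The vaccination half of the analogy has a well-known quantitative form, the classic herd-immunity threshold of 1 - 1/R0. Whether self-driving adoption has an analogous tipping point is my own speculation, not a claim from the study, but the arithmetic is worth seeing.

```python
# Classic herd-immunity threshold: once a fraction 1 - 1/R0 of the
# population is immune (R0 = basic reproduction number), each infection
# causes fewer than one new case on average, so outbreaks die out and
# the unvaccinated are protected indirectly.
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

print(f"{herd_immunity_threshold(3.0):.0%}")  # -> 67% coverage needed for R0 = 3
```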

In the meantime, when in a dilemma, it should brake.
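As a closing illustration, this is what “default to braking” could look like as toy control logic. Nothing here resembles any manufacturer’s real planner; every function name, field, and threshold is hypothetical. The point is structural: the system never ranks victims.

```python
def plan_action(candidates: list[dict]) -> str:
    """Pick a trajectory. Each candidate has a 'collision_risk' in [0, 1]
    and a 'deviation' from the current course. All names are invented."""
    safe = [c for c in candidates if c["collision_risk"] < 0.01]
    if safe:
        # take the safe trajectory that deviates least from the current course
        return min(safe, key=lambda c: c["deviation"])["name"]
    # no safe option exists: do not choose a victim, shed kinetic energy
    return "emergency_brake"

print(plan_action([
    {"name": "stay", "collision_risk": 0.90, "deviation": 0.0},
    {"name": "swerve", "collision_risk": 0.80, "deviation": 1.0},
]))  # -> emergency_brake
```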


As usual, I thank you in advance for your feedback, and I encourage you to continue the conversation in the comments below, on Twitter, and on LinkedIn.
