
The MIT Moral Machine experiment is one of the largest-scale cross-cultural investigations into ethical judgement. The authors describe it as a “serious game”, although it is almost too simple to be called a game, as it involves a series of standalone choices with no context or long-term consequences. Each choice involves deciding whether an out-of-control autonomous car should swerve or continue straight ahead, based on which victims are likely to die in either case. I strongly recommend you give it a go if you have not played it already. It does not take long and it raises some interesting ethical questions.
This paper summarises the outcomes of the experiment, which gathered 39.61 million decisions from players in 233 countries and territories around the world. The analysis it presents is straightforward. The researchers tested nine factors to see how they influenced people’s decisions:
- Sparing humans vs pets
- Staying on course vs swerving
- Sparing passengers vs pedestrians
- Sparing more lives vs fewer lives
- Sparing men vs women
- Sparing the young vs the elderly
- Sparing pedestrians crossing legally vs jaywalkers
- Sparing the fit vs the unfit
- Sparing those with higher vs lower social status
In summary, the effects they found, in decreasing order of magnitude, were preferences to (a toy encoding of this ranking is sketched after the list):
- Save humans over pets
- Save more people over fewer
- Save the young over the elderly
- Save the lawful over the unlawful
- Save higher-status over lower-status individuals
- Save the fit over the unfit
- Save females over males
- Save pedestrians over passengers
- Refrain from acting rather than acting
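
To make the ranking concrete, here is a minimal sketch of how such preferences could be cashed out as a decision rule. Everything in it (the `WEIGHTS` table, `BASE_VALUE`, `INACTION_BONUS`, and the `group_value`/`decide` helpers) is invented for illustration: the numbers are made up, only their ordering follows the list above, and this is not the statistical model used in the paper.

```python
from typing import Dict, List

# Hypothetical per-person weights; only their ordering mirrors the ranking above.
WEIGHTS: Dict[str, float] = {
    "human": 4.0,        # humans over pets (pets only get the base value)
    "young": 2.0,        # the young over the elderly
    "lawful": 1.5,       # lawful crossers over jaywalkers
    "high_status": 1.0,  # higher over lower social status
    "fit": 0.8,          # the fit over the unfit
    "female": 0.5,       # females over males
    "pedestrian": 0.3,   # pedestrians over passengers
}
BASE_VALUE = 1.0         # "more lives over fewer": every individual counts
INACTION_BONUS = 0.2     # slight preference for staying on course

def group_value(group: List[Dict[str, bool]]) -> float:
    """Total 'value' of a group: a base count plus attribute bonuses."""
    total = 0.0
    for person in group:
        total += BASE_VALUE
        for attr, weight in WEIGHTS.items():
            if person.get(attr, False):
                total += weight
    return total

def decide(straight_victims: List[Dict[str, bool]],
           swerve_victims: List[Dict[str, bool]]) -> str:
    """Spare the more 'valuable' group; ties favour not swerving."""
    loss_if_stay = group_value(straight_victims)
    loss_if_swerve = group_value(swerve_victims) + INACTION_BONUS
    return "swerve" if loss_if_stay > loss_if_swerve else "stay"

# Example: two young jaywalkers ahead vs one lawful adult pedestrian to the side.
youths = [{"human": True, "young": True, "pedestrian": True}] * 2
adult = [{"human": True, "lawful": True, "pedestrian": True}]
print(decide(youths, adult))  # -> "swerve" under these made-up weights
```

The point is only that a ranked list of preferences has to be turned into some concrete trade-off before a car can act on it; the particular trade-off above is one arbitrary choice among many.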
The paper goes into more detail analysing the cultural differences (noting significant clusters of “Western”, “Eastern” and “Southern” nations with different moral outlooks). But I’d like to focus on one particular finding here: the preference to save the lawful over the unlawful. When I played the game, this was the basis for a lot of my decisions, even if it meant a greater number of people would die. This is not because I think jaywalkers are evil and deserve death. Rather, my reasoning was this: “Would I prefer to live in a world where you are safer crossing with the lights or against them?” If my choice in this instance were turned into a general rule, and that rule were implemented in autonomous cars, would the overall result be better or worse?
This raises an important issue that I think is missing in the Moral Machine experiment: There is another player in the game — the pedestrian. Imagine we take the rules above and implement them in a new version of the Moral Machine. Only this time you control a pedestrian. There are two groups of people crossing the road, and an oncoming car about to kill one of those groups. You have to decide which group you would rather be in. According to the rules above, you should try to cross with a large group of young people, even if it means crossing against the lights. If everyone follows these rules, then as soon as one young person starts jaywalking, there will be an incentive for everyone else to follow suit.
And this is the issue I have with the experiment: it neglects the feedback loop between cars and pedestrians. Once a rule for what autonomous cars will do is implemented and becomes well known, pedestrians will alter their behaviour to maximise their own safety. While the first-order outcome of these rules seems sensible (don’t run down young people), the second-order effect may be counter-productive (gangs of jaywalking youths).
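
As a rough illustration of that feedback loop, here is a toy simulation (again my own construction, nothing from the paper) in which the car is known to follow a simplified “spare the larger group” rule and each arriving pedestrian joins whichever side of the road that rule currently protects. The `spared_side` function and all the starting numbers are assumptions made purely for the sketch.

```python
def spared_side(legal: int, jaywalking: int) -> str:
    """Under a simplified 'spare more lives' rule, the larger group is safe."""
    return "jaywalking" if jaywalking > legal else "legal"

# One lawful crosser, two young jaywalkers: the jaywalking side is already larger.
legal, jaywalking = 1, 2

# Ten more pedestrians arrive, each joining the side the car is known to spare.
for arrival in range(1, 11):
    if spared_side(legal, jaywalking) == "jaywalking":
        jaywalking += 1
    else:
        legal += 1
    print(f"arrival {arrival:2d}: legal={legal:2d}, jaywalking={jaywalking:2d}, "
          f"car spares the {spared_side(legal, jaywalking)} group")
```

Because every newcomer joins the protected side, the jaywalking group snowballs while the legal crossing stays empty, which is exactly the second-order effect described above.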
Fortunately, this is the kind of thing that games are very good at investigating. I suggest a new experiment: Moral Machine 2. This time there are two players, one controlling the car and the other deciding which side of the road to cross on. This would give more insight into the problem from the victim’s perspective, and show what kind of emergent dynamics we are likely to see as a result. If anyone at MIT is interested in giving it a go, I’d be happy to work on this.
DOI: https://doi.org/10.1038/s41586-018-0637-6