Originally published at The Atlantic

You know the drill by now: A runaway trolley is careening down a track. There are five workers ahead, sure to be killed if the trolley reaches them. You can throw a lever to switch the trolley to a neighboring track, but there’s a worker on that one as well who would likewise be doomed. Do you hit the switch and kill one person, or do nothing and kill five?

That’s the most famous version of the trolley problem, a philosophical thought experiment popularized in the 1970s. There are other variants; the next most famous asks if you’d push a fat man off a bridge to stop the trolley rather than killing even one of the supposedly slim workers. In addition to its primary role as a philosophical exercise, the trolley problem has been used as a tool in psychology—and more recently, it has become the standard for asking moral questions about self-driving cars.

Should an autonomous car endanger a driver over a pedestrian? What about an elderly person over a child? If the car can access information about nearby drivers it might collide with, should it use that data to make a decision? The trolley problem has become so popular in autonomous-vehicle circles, in fact, that MIT engineers have built a crowdsourced version of it, called Moral Machine, which purports to catalog human opinion on how future robotic apparatuses should respond in various conditions.

But there’s a problem with the trolley problem. It does a remarkably bad job addressing the moral conditions of robot cars, ships, or workers, the domains to which it is most popularly applied today. Deploying it for those ends, especially as a source of answers or guidance for engineering or policy, leads to incomplete and dangerous conclusions about the ethics of machines.

continue reading at The Atlantic

published March 30, 2018