Originally published at The Atlantic

On Sunday night, a self-driving car operated by Uber struck and killed a pedestrian, 49-year-old Elaine Herzberg, on North Mill Avenue in Tempe, Arizona. It appears to be the first time an automobile driven by a computer has killed a pedestrian. The car was traveling at 38 miles per hour.

An initial investigation by Tempe police indicated that the pedestrian might have been at fault. According to that report, Herzberg appears to have come “from the shadows,” stepping off the median into the roadway and ending up in the path of the car while jaywalking across the street. The National Transportation Safety Board has also opened an investigation. At this time, it’s still hard to know exactly what took place without resorting to speculation.

Likewise, it’s difficult to evaluate what this accident means for the future of autonomous cars. Crashes, injuries, and fatalities were a certainty as driverless vehicles began moving from experiment to reality. In 2016, a Tesla operating in its semi-autonomous “Autopilot” mode in Florida crashed into a tractor-trailer that made a left turn in front of the vehicle, killing the Tesla’s driver. It was the first known fatality involving a self-driving vehicle, though at the time of the accident the car had apparently been warning its driver to disengage Autopilot and take control of the vehicle.

Advocates of autonomy tend to cite overall improvements to road safety in a future of self-driving cars. Ninety-four percent of car crashes are caused by driver error, and both fully and partially autonomous cars could improve that number substantially—particularly by reducing injury and death from speeding and drunk driving. Even so, crashes, injuries, and fatalities will hardly disappear when and if self-driving cars are ubiquitous. Robocars will crash into one another occasionally and, as the incident in Tempe illustrates, they will collide with pedestrians and bicyclists, too. But eventually, those casualties will likely number far fewer than the 37,461 people killed in car crashes in America in 2016.

The problem is that this result won’t be accomplished all at once, but in spurts as autonomous technology rolls out. During that period, which could last decades, the social and legal status of robocar safety will rub up against existing standards, practices, and sentiments. A fatality like the one in Tempe this week seems different because it is different. Instead of a human operator failing to see and respond to a pedestrian in the road, a machine operating the vehicle failed to interpret the signals its sensors received and process them in a way that averted the collision. It’s useful to understand, and even to question, the mechanical operation of these vehicles, but the Tempe fatality might show that their legal consequences are more significant than their technical ones.

published March 20, 2018