Macabre Ethical Dilemmas: Just the Tip of the Iceberg for Robot Cars

The ethical quandaries that will confront self-driving cars as they navigate the world pose a lot of difficult questions. Antonio Loro charts a course for answering these tough questions.

13 minute read

July 18, 2016, 2:00 PM PDT

By Antonio Loro

@antonioloro


Intersection | Photo: evening_tao / Shutterstock

We've all been there.

You're driving through a narrow tunnel. A cement truck is roaring down the opposing lane. Just before it speeds past, the motorcyclist behind it suddenly loses control and careens right toward you. Jamming on your brakes won't help—it would scarcely slow your car in the fraction of a second you have to work with. In the blink of an eye, you must choose whether to (a) plow into the motorcyclist head-on, killing them but saving your own life, or (b) swerve left and slam into the oncoming cement truck, leaving the truck driver unharmed but killing yourself.

Or perhaps you haven't experienced such a stark ethical dilemma on the road. But you've probably noticed that, as automated vehicle technology advances further and further, there's been more and more talk about how robot cars should handle these fraught situations.

There's something eerily fascinating about these thought experiments, where a ruthlessly rational artificial intelligence is forced to make tragic moral choices on the road. It can be tempting, though, to dismiss these discussions as pointless maundering about events that are highly unlikely to occur in the first place. Certainly, they seem far removed from the Tesla crash of May 7, 2016—the first known fatal crash involving a vehicle controlling its own steering and speed. There is no suggestion of an ethical dilemma here—while investigations are still ongoing, it seems the vehicle's "Autopilot" and driver simply failed to slow down as a tractor trailer made a left turn across their path.

But the problem of how robot drivers should act in ethical dilemmas goes beyond a few freak situations that seem to belong in the Twilight Zone. Moreover, dilemmas on the road are just the tip of the iceberg of ethical issues for automated vehicles.

Robot drivers—guided by their programmers, of course—will need to make ethical judgements at every moment, even when there is no dilemma in sight. They will have to decide how much risk to impose on their passengers and on other road users. They will have to balance safety against other priorities, like how quickly to get you to your destination. If that wasn't challenging enough, solving these problems is complicated by the difficulty of predicting the behaviour of other drivers—robot or human. And there are more fundamental ethical issues still: how skilled must robot drivers be before we even allow them on the roads, and how certain must we be about how safe they are?

There's nothing inherently unsolvable about these problems, but they certainly can't be brushed off as idle armchair musings.

Ethical dilemmas on the road are more relevant than they might sound

Humans have been driving motor vehicles for well over a century. Why the concern over ethical dilemmas on the road only now that robots are starting to move into the driver's seat? It's not because the rise of automated vehicle technology has sparked an awakening of moral consciousness. And while the idea of a robot choosing whom to kill might invoke a sinister, HAL 9000 aura, that creepy factor isn't the reason either.

Human drivers can indeed be involved in dilemmas on the road, say where the driver must choose between colliding with a school bus and hitting a mother pushing a baby stroller. However, we'd be unlikely to criticize the driver for making the wrong moral choice—there's no way they could carefully compare the alternatives according to moral principles in a split second. A robot is different. The choice it makes is determined well in advance by a programmer who, unlike the human driver caught in the heat of the moment, has ample time at the computer to scrupulously consider how the robot should choose in every situation it needs to handle. This fundamental difference is why automated vehicles attract special scrutiny in these hypothetical dilemmas.

Back in the real world, there's a dearth of statistics on crashes involving fatal dilemmas, but it's safe to say they're probably uncommon. Nevertheless, these scenarios are more than just excuses for modern philosophers to stroke their beards and puff on their pipes and mull how many robot cars can dance on the head of a pin.

The ethical predicaments for automated cars are inspired by the "Trolley Problem," originally posed by the philosopher Philippa Foot in the 1960s. A runaway streetcar is barreling toward five people—but you can switch it to a different set of tracks where it will kill only one person. What do you do?

The problem was not conceived in response to an epidemic of streetcar brake failures and an ensuing urgent need to figure out which innocent bystanders to kill. Rather, the purpose was to pose stark choices to shine a spotlight on the implications of various ethical principles that might otherwise be difficult to disentangle in messier real-world situations. The thought experiment was especially intended to help pick out the moral differences between actively doing harm and passively allowing harm.

Similarly, while envisioning automated vehicles in dramatic moral dilemmas might help us decide whom to kill when such unfortunate situations arise, that's not the point of the exercise. Instead, the scenarios highlight the ethical dimension of the choices robot cars make. Whether the car should hit the school bus or the mother and baby is an ethical question. Even if the robot driver has not been programmed with rules that were conceived to conform with particular ethical principles, the choices it makes will imply an ethical stance of some kind.

Moreover, while one is rarely forced to choose whom to kill on the road, it's easy to conceive of more mundane dilemmas with ethical dimensions. The problem may be whether to side-swipe a vehicle or to get rear-ended by another. It could even be whether to splash the mother pushing a pram or to swerve away and spill hot McDonald's coffee on the passenger's lap. Compared to the more dramatic fatal variety, dilemmas where the choice is over whom to injure or whose property to harm are doubtless more common.

Driving risk

A robot driver could emphasize avoiding such awkward situations where every available option would harm someone. But even if it could steer clear of every ethical dilemma, it would still need to make choices about risk—choices that are inextricably wrapped up with ethical values.

Imagine a narrow street. A pedestrian steps out from behind a curbside van. The automated vehicle must choose whether to hit the pedestrian or an oncoming car. But if the automated vehicle had been driving more slowly, it could have simply braked and avoided that dilemma entirely. That is, the robot driver could have avoided causing any harm if it had been driving in a more risk-averse fashion. Or take a situation that's free of dilemmas: while it's speeding along a freeway, a tire blows out, and the automated vehicle loses control and sideswipes a guardrail. If its speed had been lower, it wouldn't have lost control.

Of course, the only way for an automated vehicle to avoid any chance that its passenger will be harmed, or that it will harm anyone else, or someone's property, would be to stay safely parked at the dealership. There will always be some risk of harm; the programmer of a robot car must determine how much risk is acceptable. How much risk should the car impose on its own passengers and on other road users? And how should those risks be traded off against other goals? A car that tops out at 10 mph might have a low risk of causing death or injury, but most people would be willing to put themselves, and others, at a slightly higher risk in exchange for the benefit of getting around more quickly. The size of that risk depends on the circumstances—for example, whether one is having a heart attack and needs to get to the hospital, or whether one wants to get to the movie on time—but in any case, how much risk is acceptable is an ethical question.
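
To make the trade-off concrete, here is a toy sketch in Python. It is purely illustrative, not any manufacturer's actual logic: the cost function, risk curve, trip length, and weights are all invented assumptions. The point is only that some weighting of harm against travel time must be chosen, and that choice is an ethical one.

```python
# Toy model: a robot driver picks a cruising speed by trading an assumed
# crash-risk curve against travel time. All numbers are invented.

def trip_cost(speed_mph: float, trip_miles: float, harm_weight: float) -> float:
    """Combine an assumed crash-risk curve with travel time into one cost."""
    crash_risk = (speed_mph / 100) ** 2      # illustrative: risk grows with speed
    travel_hours = trip_miles / speed_mph
    return harm_weight * crash_risk + travel_hours

speeds = range(10, 81, 5)
# A cautious weighting (routine errand) versus an urgent one (racing to the hospital).
routine_speed = min(speeds, key=lambda s: trip_cost(s, 10, harm_weight=5.0))
urgent_speed = min(speeds, key=lambda s: trip_cost(s, 10, harm_weight=0.5))
print(routine_speed, urgent_speed)  # the urgent weighting tolerates more risk, so it picks a higher speed
```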

Beyond speed, many other values will have to be balanced against the risk of harm to the vehicle's passengers and to others. Traveling through an intersection without decelerating and re-accelerating saves energy but increases the severity of a potential collision. A group of automated vehicles following each other closely in a "platoon" consumes less road space, which could improve traffic flow but also brings a greater risk of a pile-up collision.

Prediction is hard

Hypothetical dilemmas involving automated vehicles usually have clearly defined, if unpleasant, outcomes—for example, kill yourself vs. kill ten pedestrians—but in the real world, our lack of omniscience makes consequences harder to predict. Perhaps you'll merely be injured when your car crashes into the barrier; perhaps that school bus you're about to hit is full of kindergarteners, or perhaps aged Nazis; perhaps that pram is carrying twins, or merely watermelons from the market. Even the trajectory the vehicle takes is uncertain: for example, the vehicle steers to the right, intending to hit the pram, but loses traction on a slick patch of road and skids straight into the school bus.
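
One way to picture how this uncertainty enters the robot's decision is as probability-weighted guesses about harm. A minimal sketch, with entirely invented probabilities and harm scores:

```python
# The robot can only compare maneuvers by their estimated expected harm,
# and both the probabilities and the harm scores here are invented guesses.

maneuvers = {
    # maneuver: list of (probability, estimated_harm) outcomes
    "brake hard":   [(0.7, 2), (0.3, 8)],    # likely minor harm, possibly severe
    "swerve right": [(0.5, 0), (0.5, 10)],   # maybe harmless, maybe a skid into something worse
}

def expected_harm(outcomes):
    return sum(p * harm for p, harm in outcomes)

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # "brake hard": 0.7*2 + 0.3*8 = 3.8, versus 5.0 for "swerve right"
```

If the guessed probabilities are wrong, as in the slick-road example above, the seemingly best choice can still end badly.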

The more profound complication is in the choices made by others on the road. In the simplistic hypothetical dilemmas, it's all up to the robot driver's choices—there is nothing anyone else can do to affect the outcome. In the real world, there could be many players in a given situation, each attempting to predict what others will do, and each making their own choices.

Imagine that an automated vehicle is programmed to give the passenger speedy service, and to drive aggressively to do so. As it attempts to merge into freeway traffic, it encounters another equally aggressive vehicle. Each attempts to accelerate and cut in front of the other. This behaviour increases risk for the passengers in both vehicles.

Suppose manufacturers respond by programming vehicles to be cautious during merges. But now when a cautious vehicle encounters another cautious vehicle attempting to merge into traffic, both slow down for each other, wasting time and potentially increasing the risk of rear-end collisions. Meanwhile, an opportunistic manufacturer observes the emerging norm of caution and decides to keep programming its vehicles to be aggressive. Now merging conflicts are rare, since the cautious cars always let the aggressive cars ahead. This is great for the passengers in the aggressive cars, who are traveling faster than ever; those left behind in the cautious cars are less thrilled.
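
The merging standoff has the structure of a "chicken"-style coordination game. A toy payoff sketch, with invented numbers where higher is better for the vehicle receiving it:

```python
# Payoffs to (me, other) for each pairing of merging strategies.
# The values are illustrative assumptions, not measurements.
PAYOFFS = {
    ("aggressive", "cautious"):   (3, 1),    # the aggressive car merges first
    ("cautious",   "aggressive"): (1, 3),
    ("cautious",   "cautious"):   (2, 2),    # both slow down; mild delay for each
    ("aggressive", "aggressive"): (-5, -5),  # standoff with near-miss or crash risk
}

def my_payoff(my_strategy: str, other_strategy: str) -> int:
    """Payoff to 'me' when the two merging vehicles use these strategies."""
    return PAYOFFS[(my_strategy, other_strategy)][0]

# If most fleets settle on 'cautious', a lone aggressive fleet does best (3 > 2),
# which is the free-riding dynamic described above.
assert my_payoff("aggressive", "cautious") > my_payoff("cautious", "cautious")
```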

To avoid problems like this—vehicles aggressively facing off against each other, stalemating each other, unfairly free-riding on the generosity of others—automated vehicles will need to coordinate their behaviour. Vehicle-to-vehicle communication technology will help, but automated vehicles will still need rules that clarify when one vehicle should let another ahead, for example, along with the myriad other aspects of vehicular interaction. And coordinating behaviour will be particularly tricky for as long as robot drivers must interact with human drivers, who are less amenable to strict rule-following.

Transparency

The programmers of automated vehicles have their work cut out for them, with so many behavioural rules to craft and align with an accepted ethical framework. But what if the programmers take a more hands-off approach and allow the robot driver to figure out the rules for itself? In such a "deep learning" approach, used by Google and others, programmers expose the system to an abundance of data and then guide the system to learn to recognize patterns. If programmers use deep learning to allow the vehicle to discover what driving choices to make in different situations, the vehicle will drive itself using rules it has developed for itself. Not only will the programmers not have programmed explicit rules into the automated vehicle—they won't even know what rules the vehicle is using. The system is a black box.
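
A tiny, purely illustrative sketch of why such a learned policy is opaque: the "rules" the vehicle follows amount to arrays of numeric weights, not statements a person can read. The toy network and random weights below are assumptions for illustration only:

```python
import random

random.seed(0)
# A miniature stand-in for a learned driving policy: four sensor inputs,
# eight hidden units, one steering output. Real systems have millions of weights.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(8)]

def policy(sensors):
    hidden = [max(0.0, sum(w * s for w, s in zip(row, sensors))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))  # steering command

print(policy([0.2, 0.9, 0.1, 0.4]))
# Inspecting W1 and W2 yields lists of numbers, not anything resembling
# "yield to pedestrians at crosswalks"; that is the black-box problem.
```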

Nevertheless, even if the vehicle's behavioural rules were not explicitly chosen to align with an ethical framework, the choices it makes will still imply ethical valuations. A deep learning-based car may make choices that are challenged as unethical, and a plaintive defense that the vehicle taught itself to make those choices might not satisfy critics. And our inability to predict how the vehicle will behave in given situations—owing to our ignorance of the rules that guide the automated system—might dampen the public's trust in such a vehicle.

How proven must the technology be?

The most fundamental ethical questions come up before an automated vehicle even hits the road. What standard of safety should we demand? How good do we expect robot drivers to be? And how certain do we need to be about the vehicle's capabilities and limitations?

Hopefully, robots will be safer drivers than humans. But assessing the safety of automated vehicles is surprisingly tricky. The reason is that while humans are terrible drivers, spreading carnage on the roads—almost 33,000 deaths in the United States in 2014—in another sense, we're pretty good at driving when you consider how much we drive. The distance driven in the United States in 2014 totaled over 3 trillion miles, meaning that, on average, one road death occurred for every 94 million miles driven. But even if an automated vehicle travels 94 million miles without killing anyone, that proves little—far more testing is needed.
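
The arithmetic behind that figure is simple division. A quick check, using assumed (rounded) 2014 U.S. totals that approximate the official statistics:

```python
# Assumed 2014 totals ("almost 33,000" deaths; "over 3 trillion" miles).
road_deaths_2014 = 32_675        # assumed NHTSA total
vehicle_miles_2014 = 3.0e12      # assumed vehicle-miles traveled

miles_per_fatality = vehicle_miles_2014 / road_deaths_2014
print(f"{miles_per_fatality / 1e6:.0f} million miles per fatality")
# => roughly 92 million miles per fatality; slightly different input totals
#    give the ~94 million figure cited above.
```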

In the case of the death that occurred in a Tesla Model S on May 7 while the driver was using the partially automated "Autopilot," which controls steering and speed but requires the driver to keep an eye on things, the automaker pointed out it was "the first known fatality in just over 130 million miles where Autopilot was activated." The company further asserted that Autopilot "results in a statistically significant improvement in safety when compared to purely manual driving." However, Tesla's reassurances aren't backed up by the figures it cites.

If we want a 99 percent level of confidence that Tesla cars driven with the assistance of Autopilot will be involved in fatal crashes less frequently than the average human-driven vehicle, we would need 400 million miles of fatality-free driving. That's according to the statistical logic explained by the research organization RAND a month before the crash. But for truly solid proof, their analysis shows that an automated vehicle would have to be driven distances up to hundreds of billions of miles to firmly establish that it is safer than the average human-driven vehicle. The key point is that one death after 130 million miles of Autopilot does not tell us that the next death will happen after another 130 million miles. Another death could come in 13 million miles, or perhaps 200 million miles—we have no way of knowing.
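
RAND's reasoning can be sketched with one formula. Assuming fatal crashes are rare, independent events (a Poisson model), driving N miles with zero fatalities supports the claim, at confidence level C, that the true fatality rate is below a benchmark R only when N is at least -ln(1 - C) / R. A rough check, using the roughly one-death-per-94-million-miles benchmark from above:

```python
import math

# Assumed benchmark: the human average of about 1 fatality per 94 million miles.
benchmark_rate = 1 / 94e6   # fatalities per mile
confidence = 0.99

# Fatality-free miles needed to claim, at this confidence, that the automated
# vehicle's fatality rate beats the benchmark (zero-failure Poisson bound).
miles_needed = -math.log(1 - confidence) / benchmark_rate
print(f"{miles_needed / 1e6:.0f} million fatality-free miles")
# => roughly 430 million miles; the exact requirement (including the
#    ~400 million figure cited above) shifts with the benchmark rate
#    and confidence level assumed.
```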

It's too early to make a pronouncement on whether Tesla's Autopilot improves safety or not. Should we allow a technology on the road when we don't know how likely it is to either save lives or result in new deaths?

If a technology would improve safety, every day of delay before it hits the road means more lives lost that could have been saved; but if the technology is deficient, rushing it onto the road might result in more deaths. In the face of such uncertainty, what is the right degree of haste or precaution to take? This may be the trickiest ethical question to grapple with.

In the case of Tesla's Autopilot, though, we can be certain of one thing. Decades of research have proven that humans are bad at maintaining unwavering vigilance over an automated system—exactly what Tesla expects drivers to do when using Autopilot. The automaker warns drivers to pay attention, and the automated system gives occasional reminders, but these measures are not enough. Knowing what we know about how poorly humans supervise automated systems, it is essential above all that a technology like Autopilot—which depends on constant human supervision to function safely—be designed to inescapably compel the driver to remain engaged and alert. Tesla, then, may have an ethical duty to modify Autopilot to help drivers perform their supervisory duties.

The public trust

As automated vehicle technologies progress, it will become more and more urgent to grapple with these ethical questions. While some skeptics might see engaging with these questions as a distraction from the work of developing the tech, in fact, scrutinizing the ethical problems might help get advanced automated vehicles on the roads sooner. The public will have greater trust in the technologies if they know we've deliberated on the ethical issues rather than breezily dismissing them as intellectual parlor curiosities. And as we debate and search for solutions, we need to engage with the full range of ethical questions. Bizarre dilemmas on the road might pose the most eye-catching puzzles, but the ethical issues for automated vehicles go far deeper than that.

Antonio Loro is an urban planner specializing in helping cities prepare for automated vehicles. He is based in Vancouver, Canada.
