TROLLEY PROBLEM

Philosophers are building ethical algorithms to help control self-driving cars

Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

Over the past few years, the field has focused on one philosophical problem posed by self-driving cars: they are a real-life enactment of a moral conundrum known as the Trolley Problem. In the classic scenario, a trolley is heading down the tracks toward five people. You can pull a lever to redirect it onto the only alternative track, but one person is stuck on that track. The scenario exposes the moral tension between doing harm and allowing harm: Is it morally acceptable to kill one person to save five, or should you allow five to die rather than actively kill one?
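To see how this becomes a programming problem, here is a minimal sketch in Python of the Trolley Problem as a choice between two outcomes, evaluated under two rival rules. The names and structure are invented for illustration, not taken from any published implementation:

```python
from dataclasses import dataclass

# A minimal encoding of the Trolley Problem as a choice between outcomes.
# All names here are illustrative, not from any real research codebase.

@dataclass
class Outcome:
    action: str            # what the agent does
    deaths: int            # how many people die as a result
    actively_caused: bool  # whether the agent's intervention causes the deaths

PULL_LEVER = Outcome("pull lever", deaths=1, actively_caused=True)
DO_NOTHING = Outcome("do nothing", deaths=5, actively_caused=False)

def utilitarian_choice(options):
    # Minimize total deaths, regardless of who causes them.
    return min(options, key=lambda o: o.deaths)

def no_active_harm_choice(options):
    # Refuse any option that actively causes harm, if an alternative exists.
    permissible = [o for o in options if not o.actively_caused]
    return min(permissible or options, key=lambda o: o.deaths)

print(utilitarian_choice([PULL_LEVER, DO_NOTHING]).action)     # "pull lever"
print(no_active_harm_choice([PULL_LEVER, DO_NOTHING]).action)  # "do nothing"
```

The two rules disagree on the same facts, which is exactly the tension the thought experiment is designed to expose.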

Though the Trolley Problem sounds far-fetched, autonomous vehicles will be unable to avoid comparable scenarios. If a car is in a situation where any action will put either the car's passenger or someone else in danger (say, a truck has crashed ahead and the only options are to swerve into a motorbike or off a cliff), how should the car be programmed to respond?

Rather than pontificating on this, a group of philosophers has taken a more practical approach and is building algorithms to solve the problem. Nicholas Evans, a philosophy professor at the University of Massachusetts Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios and show how an autonomous car would respond according to the ethical theory it follows.

To do this, Evans and his team are turning ethical theories into a language that computers can read. Utilitarian philosophers, for example, believe all lives have equal moral weight, so an algorithm based on this theory would assign the same value to the car's passengers as to pedestrians. Others believe you have a perfect duty to protect yourself from harm. “We might think that the driver has some extra moral value and so, in some cases, the car is allowed to protect the driver even if it costs some people their lives or puts other people at risk,” Evans said. As long as the car isn’t programmed to intentionally harm others, some ethicists would consider it acceptable for the vehicle to swerve defensively to avoid a crash, even if doing so puts a pedestrian’s life at risk.
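In code, the difference between such theories can come down to a weighting scheme. The sketch below is a guess at the general shape, not Evans's actual algorithms, which have not been published: each theory is a cost function over the expected harms of each maneuver, and the car picks the maneuver with the lowest cost. The scenario figures and the driver weight are invented for illustration.

```python
# Hypothetical scenario from the article: a truck crash ahead, and the
# only options are to swerve into a motorbike or off a cliff.
# Expected fatalities per maneuver are invented for illustration.
SCENARIO = {
    "swerve_into_motorbike": {"driver": 0.0, "others": 1.0},
    "swerve_off_cliff":      {"driver": 1.0, "others": 0.0},
}

def utilitarian_cost(harms):
    # All lives carry equal moral weight.
    return harms["driver"] + harms["others"]

def driver_weighted_cost(harms, driver_weight=1.5):
    # Hypothetical theory: the driver carries extra moral weight.
    return driver_weight * harms["driver"] + harms["others"]

def choose_maneuver(scenario, cost_fn):
    # Pick the maneuver with the lowest moral cost under the given theory.
    return min(scenario, key=lambda m: cost_fn(scenario[m]))

print(choose_maneuver(SCENARIO, utilitarian_cost))
# A tie under utilitarianism (one expected death either way); min() just
# returns the first option, so a real system would need a tie-breaking rule.
print(choose_maneuver(SCENARIO, driver_weighted_cost))
# -> "swerve_into_motorbike": the driver-weighted theory protects the driver.
```

Swapping the cost function changes the car's behavior without touching the rest of the pipeline, which is what makes it possible to compare theories against the same scenarios.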

Evans is not currently taking a stand on which moral theory is right. Instead, he hopes the results from his algorithms will allow others, whether car buyers or manufacturers, to make informed decisions. Evans isn’t currently collaborating with any of the companies working to create autonomous cars, but hopes to do so once he has results.

Perhaps Evans’s algorithms will show that one moral theory leads to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” Evans said. It’s possible that two scenarios will save equal numbers of lives, but not the same lives.

“The difference between theory A and theory B is that the people who die in the first theory are mostly over 50 and the people who die in the second theory are mostly under 30,” Evans said. “Then we have to have a discussion as a society about not just how much risk we’re willing to take but who we’re willing to expose to risk.”
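That kind of comparison is easy to make concrete. In the toy example below, the ages and death counts are entirely fabricated to illustrate the pattern Evans describes: two theories with identical death tolls but very different distributions of who bears the risk.

```python
from statistics import mean

# Fabricated ages of the people who die under each theory, across the
# same set of simulated scenarios. Purely illustrative numbers.
deaths_under_theory_a = [58, 61, 72, 55, 67]
deaths_under_theory_b = [24, 19, 28, 31, 22]

# Same body count under both theories...
assert len(deaths_under_theory_a) == len(deaths_under_theory_b)

# ...but a very different answer to "who dies?"
print(f"Theory A: {len(deaths_under_theory_a)} deaths, "
      f"mean age {mean(deaths_under_theory_a):.0f}")
print(f"Theory B: {len(deaths_under_theory_b)} deaths, "
      f"mean age {mean(deaths_under_theory_b):.0f}")
```

A raw lives-saved metric would rate the two theories as identical; the societal discussion Evans describes begins where that metric ends.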

If some moral theories save drivers while others protect pedestrians, then there could be a discussion about which option is best. “We could also have a discussion about how we build our traffic infrastructure,” adds Evans, perhaps with a greater separation between pedestrians and drivers.

Evans is also interested in further research on how any set of values used to program self-driving cars could be hacked. For example, if a car will swerve to avoid pedestrians even when this puts the driver at risk, then someone could intentionally step into the path of an autonomous vehicle to harm the driver. Evans said even an infrared laser could be used to confuse the car’s sensors and cause a crash. Then there are further questions, such as how differently programmed cars might interact with each other on the road.
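The exploit is easy to state in code. The policy below is a deliberately naive stand-in, not any real vehicle's controller: because it unconditionally swerves away from a detected pedestrian, anyone who can inject a false detection, whether by stepping out deliberately or by dazzling a sensor with a laser, can force the dangerous maneuver.

```python
def naive_policy(pedestrian_detected_ahead: bool) -> str:
    # Always protect pedestrians, even at the driver's expense.
    return "swerve_off_road" if pedestrian_detected_ahead else "continue"

# Normal operation: the road is clear.
print(naive_policy(pedestrian_detected_ahead=False))  # -> "continue"

# Spoofed detection (e.g., a laser confusing the sensors): the policy
# triggers the dangerous maneuver even though the road is actually clear.
print(naive_policy(pedestrian_detected_ahead=True))   # -> "swerve_off_road"
```

Any fixed, publicly known rule creates this kind of attack surface, which is part of why Evans treats it as an open research question rather than a solved problem.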

Evans is not the only academic researching how to address self-driving cars’ version of the Trolley Problem. Psychologists are also working on the issue, and have researched which solution the majority of the public would prefer.

But while Evans is focused on Trolley Problem-type scenarios, he acknowledges that simply figuring out the solution for such specific situations does not address the broader issues of whether autonomous cars are ethical. For example, when such cars are rolled out and are on the road alongside current vehicles, they will be something of an experiment in how our transit systems work. Others on the road could be deeply uncomfortable with this.

“One of the hallmarks of a good experiment in medicine, but also in science more generally, is that participants are able to make informed decisions about whether or not they want to be part of that experiment,” he said. “Hopefully, some of our research provides that information that allows people to make informed decisions when they deal with their politicians.”

Patrick Lin, a philosophy professor at Cal Poly, San Luis Obispo, is one of the few philosophers examining the ethics of self-driving cars beyond the Trolley Problem. There are concerns about advertising (could cars be programmed to drive past certain shops?), liability (who is responsible if the car is programmed to put someone at risk?), social issues (drinking could increase once drunk driving isn’t a concern), and privacy (“an autonomous car is basically big brother on wheels,” Lin said). There may even be negative consequences of otherwise positive results: if autonomous cars increase road safety and fewer people die on the road, will this lead to fewer organ transplants?

Autonomous cars will likely have massive unforeseen effects. “It’s like predicting the effects of electricity,” Lin said. “Electricity isn’t just the replacement for candles. Electricity caused so many things to come to life—institutions, cottage industries, online life. Ben Franklin could not have predicted that, no one could have predicted that. I think robotics and AI are in a similar category.”

The invention of the conventional car, for example, gave rise to the suburbs and the fast-food drive-through. Perhaps autonomous cars will lead people to live even farther away. The time humans once spent driving could be replaced by leisure in driverless cars, but this, too, is highly uncertain. “Nature abhors a vacuum. When you have free time, that usually gets sucked up by work,” Lin said.

Meanwhile, autonomous cars’ more efficient driving could reduce traffic. “Or, it could get worse. People could take more unnecessary trips, and further clog up the streets,” Lin said. “I don’t think anyone has a crystal ball when it comes to extrapolating that far out. It’s a safe bet to say that we can’t imagine the scale of effects.” Whatever those effects turn out to be, no algorithm or philosophical theory will make driverless cars perfectly moral.