Already on the road today, self-driving cars will increasingly populate tomorrow’s roads. How should they perform in extreme situations, and who decides? Image credit: Craig Berry

Riffs on Classic Thought Experiment Show Need for Ethics in AI Development

When our robot overlords inevitably conquer the world (undoubtedly encased in svelte white plastic bodies with creepy, lifeless half-grins, if popular perception is any indication) and enslave the human race, perhaps we’ll wish, come the “Robo-pocalypse,” that we’d thought of more effective ways to imbue them with human ethics and safeguard ourselves against extinction.

While (very) serious doomsday scenarios of artificial general intelligence (AGI) and superintelligence are crucial problems that prominent AI researchers and tech mavens (including Elon Musk) fear we are neglecting, such existential risks from vast, godlike machine intelligences lie far enough in the future that they are unlikely to affect the average person today or anytime soon, if ever.

For the foreseeable future, it’s more likely that stupid AI, rather than superintelligent machines, will pose the greatest risk to humanity.

A concrete example: What should the self-driving auto do?

For instance, imagine yourself in some near future, speeding safely across town in your sexy self-driving Tesla roadster or a driverless Uber, dozing off a hangover, binge-watching a show on HBO or Fast and the Furious 2025, or getting work done in rush-hour traffic now that you don’t have to worry about the dangerous, anachronistic bore of driving.

Suddenly, some other car—undoubtedly driven by a dangerous Luddite who prefers the old-fashioned thrill of driving—slams on its brakes in front of you, imperiling both cars. No worries! Your robo-chauffeur has driven millions of miles of simulated roadways, learns from real-world conditions, and is so computationally advanced that it deftly maneuvers you around the offending car, avoiding certain death with a precision unmatched by human drivers and with nary a drop of your morning java spilled.

But what if there is a bicyclist or pedestrian in the lane next to the offending vehicle? Should our trusty robotic steed plow into the vehicle ahead, risking the lives of everyone in both cars but sparing the bystander? Or should the car defend its passenger first and foremost, at the risk of others on the road?

Such a hypothetical dilemma demonstrates that even seemingly mundane AI systems such as the humble self-driving car will inevitably wield decision-making power of life-and-death import. Anyone who’s endured a Philosophy 101 course has likely recognized the above scenario as a modern take on the [in]famous “Trolley Problem,” the classic thought experiment in ethics in which an observer must decide between a pair of unsavory actions: typically, actively choosing an outcome that kills one person versus declining to intervene and thereby letting a larger group perish. Countless variations of the Trolley Problem (serious, not-so-serious, and patently absurd) exist, introducing all manner of variables and plot twists into the equation.

The Classic Trolley Problem: Do you passively allow five people to perish, or actively divert the trolley to kill only one? Graphic by Zapyon, Wikimedia Commons

In a 21st-century riff on the Trolley Problem designed by Jason Millar of Robohub.org, the classic ethical dilemma quickly moves from the realm of the philosophical to the uncomfortably close-at-hand. Dubbed “the Tunnel Problem,” Millar’s ethically vexing autonomous-vehicle version is laid out as follows:

“You are traveling along a single lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. How should the car react?”
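Stripped of its drama, the Tunnel Problem is a choice between two terrible outcomes that some piece of software must rank. The toy sketch below (all names, weights, and probabilities are hypothetical illustrations, not anything a real manufacturer ships) shows how even a minimal cost-based planner is forced to encode a value judgment:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One candidate maneuver and its (hypothetical) estimated fatality risks."""
    action: str
    passenger_risk: float  # estimated probability the passenger dies, 0..1
    bystander_risk: float  # estimated probability a bystander dies, 0..1

def choose_action(outcomes: list[Outcome], passenger_weight: float = 1.0) -> Outcome:
    """Pick the maneuver with the lowest weighted expected harm.

    passenger_weight > 1 biases the car toward protecting its occupant;
    passenger_weight < 1 biases it toward protecting bystanders. Some value
    must be chosen, implicitly or explicitly -- that is the whole dilemma.
    """
    return min(outcomes, key=lambda o: passenger_weight * o.passenger_risk + o.bystander_risk)

# The Tunnel Problem, brutally simplified to its two options:
tunnel = [
    Outcome("continue", passenger_risk=0.0, bystander_risk=1.0),  # hit the child
    Outcome("swerve", passenger_risk=1.0, bystander_risk=0.0),    # hit the wall
]
print(choose_action(tunnel))                        # equal weights: a dead tie, broken arbitrarily
print(choose_action(tunnel, passenger_weight=2.0))  # "protect me first": continue
```

With equal weights the two options score identically, so the “decision” falls to an arbitrary tie-break; any weight other than 1.0 amounts to someone, somewhere, deciding whose life counts for more.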

According to a joint poll by Robohub.org and Openroboethics.org, counting 520 participants at the time of writing, the majority—58.1%, to be exact—of readers would choose to run over the errant child in the Tunnel Problem if they could decide what course of action their self-driving auto would take, with the remainder of respondents selecting self-sacrifice as the more ethical action. A review of readers’ self-submitted commentary on their answers reveals widely varying reasoning:

🔳”There is no answer, so I have abstained.”
🔳”I feel selfish for saving my own life and I have kids which makes it more difficult.”
🔳”Darwin’s survival of the fittest: if a stupid child decides to run out in front of a car then it is their own fault if they live or die.”
🔳”The child is younger and probably has more years of life ahead thus saving the child saves more years of human life.”
🔳”Religion actually makes this a somewhat difficult decision, given the prohibition against killing AND the prohibition against suicide.”
🔳”I bought the car. It should protect me.”
🔳”There is a chance that the passenger will not die. There is no chance that the child would survive if hit by the car.”
🔳”I have entered into an implicit ‘risk contract’ by getting into the car. The child has done no such thing. Therefore, I have a greater obligation to take on the consequences when this situation arises.”
🔳”Kid is dumb enough to cross the road. Parents’ fault for not controlling the kid.”
🔳”I like living.”

With an almost 60/40 split in public opinion on the proper course of action, it’s apparent that there is no simple or obvious answer to the stark moral dilemma the Tunnel Problem poses. Regardless of whether a self-driving auto is coded to act in the “selfish” interest of the automobile and its passenger or designed instead to minimize collateral damage, the Tunnel Problem, by its very ambiguity and wide spread of responses, suggests that any single, universal ethical standard applied by auto manufacturers and/or regulators is bound to infringe upon the ethical precepts of a sizable portion of autonomous vehicle owners (and, consequently, anyone else who travels tomorrow’s roads alongside them).

While the poll asked respondents to decide as if they were the passenger/operator of the self-driving car in the situation, a later question asked readers to determine who—if anyone—should make the ultimate decision.

Almost half of the poll’s participants (49.5%) were confident that “the passenger of the car should determine if the car swerves or not,” while the rest were split between “lawmakers should determine if the car swerves or not” (27.1%), “the manufacturer/designer of the car should determine if the car swerves or not” (13.1%), and “other” (10.3%). Respondents could add their own explanations to this question as well, and their commentary illuminates the pitfalls and merits of each choice:

🔳”I believe that for autonomous cars we need something similar to Asimov’s rules for robotics. A set of simple but universally followed rules.”
🔳”I would let a company to decide this and to make it public. For a potential owner of the car that would mean he also chooses a set of moral values along with car manufacturer.”
🔳”The decision would be based on a very large set of input parameters. Only the designers are likely to have a complete understanding of all the variables that enter into the calculation.”
🔳”Individual ethical opinions differ vastly, no one but the person directly affected should have the power to decide.”
🔳”I don’t want my life to be decided by some stuffy politician. It’s the kid’s fault he’s in the tunnel.”
🔳”Engineers are likely to make more informed decisions about risk tradeoffs than lawmakers—but will nonetheless be deciding within the restrictions of law.”
🔳”The passenger should not decide as they will have excessive bias, and this behavior should be standardized across all autonomous vehicles. I feel like lawmakers would be the most impartial in making this decision.”

While autonomous cars will generally be safer than human drivers and may ultimately lead to much less dangerous roads, malfunctions and freak accidents such as the Tunnel Problem will—even if rarely—occur in the coming years, and someone will have to decide who bears the risks and damages in such situations.

In Millar’s original presentation of the Tunnel Problem, he concludes that ethical decisions are best left to the discretion of the owner/passenger of the vehicle. Any decision by external authorities, be they lawmakers or automakers, risks what Millar calls “paternalism by design.” Millar favors the individual’s right to act in accordance with their own beliefs, explaining:

[A]s in healthcare we should expect drivers’ preferences to demonstrate their personal moral commitments. It could be that a very old driver would always choose to sacrifice herself to save a child. It might be that a deeply committed animal lover might opt for the wall even if it were a deer in his car’s path. It might turn out that most of us would choose not to swerve. These are all reasonable choices when faced with impossible situations. Whatever the outcome, it is in the choosing that we maintain our personal autonomy.

While it’s an attractive option to let drivers calibrate their autonomous cars’ moral compasses, or at least select from different manufacturers’ competing moral software, even this freedom-of-choice approach has its pitfalls. In an editorial for Wired calling adjustable ethics settings “A Terrible Idea,” Patrick Lin argues that giving consumers a say in their autos’ settings could lead to unintended consequences and that “even if the user ultimately determines the weighting of different values factored into a crash decision the company can still be liable.”

The only way to truly have complete control over a vehicle is—surprise—to drive the car. Offering autonomous car owners a set of customizable ethical parameters to control their auto’s decisions—no matter how permissive or flexible—still restricts the car’s behavior and leaves us no closer to resolving the initial quandaries of what to decide, who should choose, and who should assume the risks:

“Imagine that manufacturers created preference settings that allow us to save hybrid cars over gas-guzzling trucks, or insured cars over uninsured ones, or helmeted motorcyclists over unhelmeted ones. Or more troubling, ethics settings that allow us to save children over the elderly, or men over women, or rich people over the poor, or straight people over gay ones, or Christians over Muslims.”

Given such extreme hypothetical cases, Lin concludes that while offering car owners some measure of choice in their vehicle’s ethics is initially attractive, such a seemingly freedom-maximizing choice still leaves ultimate ethical responsibility to manufacturers (and ultimately the law), given that “the manufacturer could still be faulted for giving the user any option at all; that is, the option to discriminate against a particular class of drivers or people.”
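It’s worth making Lin’s worry concrete. The hypothetical sketch below (no real vehicle exposes anything like these settings; every category and number is invented) extends the earlier harm-weighting idea to per-category weights; note how an innocuous-looking preferences dictionary becomes, mechanically, a ranking of whose life counts for more:

```python
# Hypothetical "adjustable ethics settings" -- a sketch of the mechanism Lin
# criticizes, not a real product feature. All categories and numbers invented.

HYPOTHETICAL_DEFAULTS = {"passenger": 1.0, "pedestrian": 1.0, "cyclist": 1.0}

def harm_score(risks: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted expected harm for one candidate maneuver.

    risks maps each affected category to its estimated fatality probability.
    The moment the weights dict accepts arbitrary categories (age, helmet
    use, insurance status...), the owner is ranking classes of people.
    """
    return sum(weights.get(category, 1.0) * p for category, p in risks.items())

# An owner who bumps "passenger" to 2.0 has quietly encoded "protect me first":
selfish = {**HYPOTHETICAL_DEFAULTS, "passenger": 2.0}

maneuvers = {
    "brake_hard": {"passenger": 0.4, "cyclist": 0.1},
    "swerve":     {"passenger": 0.05, "cyclist": 0.5},
}

for label, weights in (("default", HYPOTHETICAL_DEFAULTS), ("selfish", selfish)):
    best = min(maneuvers, key=lambda m: harm_score(maneuvers[m], weights))
    print(f"{label}: {best}")  # default picks brake_hard; selfish picks swerve
```

The same slider that lets a driver express caution also lets them offload risk onto the cyclist, which is exactly why Lin concludes the manufacturer remains on the hook no matter how the dial is set.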

The Trolley Problem was once a purely philosophical abstraction, safely relegated to the purview of stuffy academic debate without any real-world implications. Issues such as the Tunnel Problem, by contrast, will have a direct influence over our lives within a generation: industry observers estimate that, beginning in 2020 and continuing over the next 20 years, cars and trucks with increasing levels of automation will become a growing presence on the world’s roads, until virtually all new cars delivered in 2040 are fully autonomous.

Estimated market penetration of fully autonomous vehicles (‘Level 4 & 5’). A child born today may never sit behind the wheel of a conventional auto in his or her life.

It’s not yet apparent if or how lawmakers or the manufacturers of autonomous vehicles—and/or their operators, in the case of a self-driving taxi or bus—will address unprecedented ethical and legal issues like those raised by the Tunnel Problem. Thought experiments can inform and enlighten the path ahead, suggesting various means of tackling the rapidly approaching frontier of roboethics.

Manufacturers, regulators and even consumers of autonomous vehicle technology (read: you) will have to make difficult decisions. While thought experiments won’t grant us easy answers, they force us to ask the right questions to help guide a world increasingly governed by autonomous systems.