Self-driving cars will need to be programmed to kill their owners, academics warn, and people will have to choose who will die

Most people are happy with cars that are programmed to minimise the death toll if they have to choose who will die — as long as it doesn’t mean sacrificing themselves

Andrew Griffin
Tuesday 27 October 2015 17:54 GMT
The new prototype of Google's self-driving car (Google)

Car companies will have to decide who their self-driving vehicles are going to kill in the event of a crash, philosophers have warned.

Self-driving vehicles are being adopted ever more widely, and are likely to become the norm before long — partly because they are expected to lead to fewer crashes. The advantage for drivers is clear: they can switch off and let the car carry them to their destination without expending any thought or effort.

But their manufacturers will have to tell the cars which sets of people to kill when the cars do crash, according to a new paper. The car that will generally let you sit back in leisurely comfort might one day have to drive you into a wall, killing you.

Some accidents will be “inevitable”, the authors note, and “some situations will require AVs to choose the lesser of two evils”, according to the paper, by Jean-Francois Bonnefon at the Toulouse School of Economics in France and his two co-authors.

“For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall,” the paper notes.

In those kinds of situations, the car would have to make a choice. The three researchers set out to explore how that choice should be made, by asking members of the public how they think that cars should decide who to kill.

The researchers asked people on Amazon’s Mechanical Turk — a marketplace that allows people to pay others to do tasks — who they thought should die in a range of different situations.

In general, people are happy for cars to use a utilitarian approach to deciding who to kill, the researchers found. That means cars should generally minimise the death toll, irrespective of who would die as a result.
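The utilitarian rule the respondents endorsed is easy to state in code. As a purely illustrative toy sketch — not anything from the paper itself; the manoeuvre names and casualty figures below are invented — a controller following it would simply pick whichever option minimises expected deaths, even when that option kills its own passenger:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A possible manoeuvre and its expected outcome (figures are hypothetical)."""
    name: str
    expected_deaths: float  # expected fatalities if this manoeuvre is taken
    kills_passenger: bool   # whether the car's own occupant is among them

def choose_utilitarian(options: list[Option]) -> Option:
    """Pick the 'lesser of two evils': the option that minimises expected
    deaths, with no regard for whether the victim is the car's own passenger."""
    return min(options, key=lambda o: o.expected_deaths)

# A dilemma of the kind the paper describes: run over a group of pedestrians,
# or swerve into a wall and sacrifice the single passenger.
options = [
    Option("continue into pedestrians", expected_deaths=3, kills_passenger=False),
    Option("swerve into wall", expected_deaths=1, kills_passenger=True),
]
print(choose_utilitarian(options).name)  # -> "swerve into wall"
```

Note that the rule gives the `kills_passenger` flag no weight at all — which is exactly the property respondents baulked at when the passenger was them.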

But that mostly applied to other people’s cars — the respondents were less keen on buying cars that would sacrifice themselves. People “were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves”, the team write.

And the team aren’t sure the question is that simple. The paper poses further dilemmas:

“Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle?

“Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place?

“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

The paper ponders questions similar to those raised in an article published earlier this year by a US bioethicist, which also proposed that cars will end up having to kill their owners.

The research is published this month in an article titled ‘Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?’.
