Driverless Cars Are Cool, But What If One Had To Kill You To Save Lives?

Make no bones about it: we’re heading very rapidly toward a world of driverless cars. 

Driving cars is a nasty business, when it comes down to it. According to the World Health Organisation, 1.25 million people died in road traffic accidents in 2015, and progress toward reducing that number has been excruciatingly slow. A renewed focus on public transport is one obvious solution – but the world decided collectively in the 20th century that we were going to build a helluva lot of roads, and spread our cities out like giant spiderwebs. Now we have to live with that infrastructure decision and work around it somehow. 
Driverless cars often bubble up as a solution here. Maybe we should cut out that pesky human element and just have our roads run with algorithms. Maybe then everything will look like that insane intersection in Ethiopia, which seems to work fine:
[Embedded video of the intersection — caption: "what the hell is this"]
But people are skittish about the concept of handing over the autonomy of their vehicle to a computer. Maybe rightfully so, too. Tesla Motors announced recently that one of its cars, operating in autopilot mode, had been involved in a crash which killed its driver. It’s important to note that Tesla’s autopilot isn’t a true self-driving mode: it’s an assisted driving mode which is very much in beta and requires you to keep your hands on the wheel at all times.
But it’s still a concern, and it feels a bit icky. When a person dies in a regular car accident, there’s usually something to blame. Either one or both drivers made an error, or there was a failure in the vehicle, or the traffic design was inadequate. When we inevitably begin seeing fatal car accidents caused by a software error, we’ll have to ask whether the software provider is culpable, or if this is just a margin of error we’re willing to accept as a society.
It won’t just be errors, either. An article by The Guardian’s science editor Ian Sample asks a pretty startling question: should a driverless car be willing to cause the deaths of other people to save its occupant? If it would save your life to have your car plow through a gaggle of pedestrians, what should the car do?
Or think about it the other way: if your car could save the lives of ten pedestrians by steering directly into a wall and killing you, should it do that? That’s even thornier, but these are absolutely the moral questions that designers of self-driving cars are thinking about right bloody now. Would people even buy a car that would do that?
[Image — caption: "buckled up for some moral philosophy"]

Jean-Francois Bonnefon at the Toulouse School of Economics reckons that it’s even more complex than that, and demands info that a self-driving car may not be able to accurately collect:
Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?
Wild stuff.
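If you’re wondering what a “moral algorithm” even looks like in practice, here’s a very rough sketch. To be clear, this is a toy illustration, not how Tesla, Google or anyone else actually does it — the made-up numbers, the occupant_weight knob and the whole minimise-expected-deaths framing are just assumptions to show the kind of trade-off Bonnefon is describing:

```python
# Toy sketch of a "moral algorithm": pick the manoeuvre with the lowest
# expected number of deaths. Every number below is invented purely for
# illustration -- a real system would have to estimate these on the fly.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupant_deaths: float  # expected fatalities inside the car
    other_deaths: float     # expected fatalities outside the car

def choose(manoeuvres, occupant_weight=1.0):
    """Return the manoeuvre minimising (weighted) expected deaths.

    occupant_weight is the knob Bonnefon hints at: 1.0 treats everyone
    equally, anything above 1.0 is a version that favours the buyer.
    """
    def harm(m):
        return occupant_weight * m.occupant_deaths + m.other_deaths
    return min(manoeuvres, key=harm)

options = [
    Manoeuvre("stay the course, hit pedestrians", occupant_deaths=0.05, other_deaths=2.0),
    Manoeuvre("swerve into the wall", occupant_deaths=0.7, other_deaths=0.0),
]

print(choose(options).name)                      # equal weighting: swerve into the wall
print(choose(options, occupant_weight=10).name)  # buyer-first version: stay the course
```

The occupant_weight parameter is really Bonnefon’s last question in code form: if a manufacturer ships a “protect the buyer at all costs” version and you knowingly tick that box, how much of the blame is yours?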
My take: nobody should be keen on some libertarian hell dystopia where self-driving cars only seek to preserve the lives of the people who paid for them. That’s kinda not how society works. Besides, we’d certainly be morally troubled if a person made that decision, let alone an algorithm. Are we just too skittish about technology?
But at the end of the day, when accidents happen while a car is in autopilot, we need to decide if someone is morally and legally culpable. Tesla made a statement via its blog to the effect that accidents are essentially inevitable, but that its autopilot mode is still massively safer:
That given its nature as a driver assistance system, a collision on Autopilot was a statistical inevitability, though by this point, not one that would alter the conclusion already borne out over millions of miles that the system provided a net safety benefit to society.
There’s definitely a question to be asked as to whether Tesla should be beta-testing its autopilot software on customers. Even if they did consent to it via the terms and conditions, it’s still a tiny bit questionable. If this kinda thing were happening in a lab environment, it’d be subject to a shitload of ethics oversight. These tests most certainly are not.
As we move forward with driverless cars, we’re going to have to decide on solid principles as to who is responsible when cars crash. Do we assume someone is at fault, like we do in a regular accident with human drivers? 
Or do we accept Tesla and Google’s assertion that self-driving cars are extremely safe, and accidents are just an unfortunate statistical necessity?
Photo: Minority Report.
