The problem with self-driving cars: who controls the code?

This article, titled “The problem with self-driving cars: who controls the code?”, was written by Cory Doctorow for theguardian.com on Wednesday 23 December 2015, 12.00 UTC.

The Trolley Problem is an ethical brainteaser that’s been entertaining philosophers since it was posed by Philippa Foot in 1967:

A runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unaware, is standing. Pull the lever and you save the five but kill the one: what is the ethical course of action?

The problem has spawned many variants over time, including one in which you must choose between letting the trolley kill the five innocents or personally shoving a man fat enough to stop it (but not to survive the impact) into its path; another in which the fat man is the villain who tied the innocents to the track in the first place; and so on.

Now it’s found a fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?

I can’t count the number of times I’ve heard this question posed as chin-stroking, far-seeing futurism, and it never fails to infuriate me. Bad enough that the formulation is a shallow problem masquerading as a deep one; worse still is the way it masks a deeper, more significant question.

Here’s a different way of thinking about this problem: if you wanted to design a car that intentionally murdered its driver under certain circumstances, how would you make sure that the driver never altered its programming so that they could be assured that their property would never intentionally murder them?

There’s an obvious answer, which is the iPhone model. Design the car so that it only accepts software that’s been signed by the Ministry of Transport (or the manufacturer), and make it a felony to teach people how to override the lock. This is the current statutory landscape for iPhones, games consoles and many other devices that are larded with digital locks, often known by the trade-name “DRM”. Laws like the US Digital Millennium Copyright Act (1998) and directives like the EUCD (2001) prohibit removing digital locks that restrict access to copyrighted works, and also punish people who disclose any information that might help in removing the locks, such as vulnerabilities in the device.

There’s a strong argument for this. The programming in autonomous vehicles will be in charge of a high-speed, moving object that inhabits public roads, amid soft and fragile humans. Tinker with your car’s brains? Why not perform amateur brain surgery on yourself first?

But this obvious answer has an obvious problem: it doesn’t work. Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of digital-lock rules isn’t to keep people from reconfiguring their devices – it’s just to ensure that they have to do so without the help of a legitimate business or product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002: it wasn’t hard to unlock your phone. You could download software from the net to do it, or ask someone who operated an illegal unlocking business. But now that it’s clearly legal, you can have your phone unlocked at the newsagent’s or even the dry-cleaner’s.

If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.

But even if we could lock cars’ configurations, we shouldn’t. A digital lock creates a zone in a computer’s programming that even its owner can’t enter. For it to work, the lock’s associated files must be invisible to the owner. When they ask the operating system for a list of files in the lock’s directory, it must lie and omit those files (because otherwise the user could delete or replace them). When they ask the operating system to list all the running programs, the lock program has to be omitted (because otherwise the user could terminate it).
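
To make that concrete, here is a minimal sketch, in Python, of the kind of lie the lock depends on. The prefix and function names are hypothetical, invented purely for illustration rather than taken from any real product:

```python
import os

# Hypothetical marker for the lock's own files (illustrative only).
LOCK_PREFIX = "$lock$"

def list_files(path):
    """What an honest directory listing returns to the machine's owner."""
    return os.listdir(path)

def list_files_with_lock_installed(path):
    """The same listing after the lock has interposed itself: entries
    carrying the secret prefix are silently dropped, so the owner can
    neither see, delete nor replace them."""
    return [name for name in os.listdir(path)
            if not name.startswith(LOCK_PREFIX)]
```

The catch, as the Sony-BMG episode described below shows, is that anything else that adopts the same secret prefix inherits the same invisibility.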

All computers have flaws. Even software that has been used for years, whose source code has been viewed by thousands of programmers, will have subtle bugs lurking in it. Security is a process, not a product. Specifically, it is the process of identifying bugs and patching them before your adversary identifies them and exploits them. Since you can’t be assured that this will happen, it’s also the process of discovering when your adversary has found a vulnerability before you and exploited it, rooting the adversary out of your system and repairing the damage they did.

When Sony-BMG covertly infected hundreds of thousands of computers with a digital lock designed to prevent CD ripping, it had to hide its lock from anti-virus software, which correctly identified it as a program that had been installed without the owner’s knowledge and that ran against the owner’s wishes. It did this by changing its victims’ operating systems to render them blind to any file that started with a special, secret string of letters: “$sys$”. As soon as this was discovered, other malware writers took advantage of it: when their programs landed on computers that Sony had compromised, they could hide under Sony’s cloak, shielded from anti-virus programs.

A car is a high-speed, heavy object with the power to kill its users and the people around it. A compromise in the software that allows an attacker to take over the brakes, accelerator and steering (such as last summer’s exploit against Chrysler’s Jeeps, which triggered a 1.4m-vehicle recall) is a nightmare scenario. The only thing worse would be such an exploit against a car designed to have no user-override – designed, in fact, to treat any attempt from the vehicle’s user to redirect its programming as a selfish attempt to avoid the Trolley Problem’s cold equations.

Whatever problems we will have with self-driving cars, they will be worsened by designing them to treat their passengers as adversaries.

That has profound implications beyond the hypothetical silliness of the Trolley Problem. The world of networked equipment is already governed by a patchwork of “lawful interception” rules that require some sort of back door so that the police can monitor devices. These back doors have been the source of grave problems in computer security: the 2011 attack by the Chinese government on the Gmail accounts of suspected dissident activists was executed by exploiting lawful-interception facilities, as was the NSA’s wiretapping of the Greek government around the time of the 2004 Athens Olympics.

Despite these problems, law enforcement wants more back doors. The new crypto wars are being fought in the UK through Theresa May’s “Snooper’s Charter”, which would force companies to weaken the security of their products so that their users can be placed under surveillance.

It’s likely that we’ll get calls for a lawful interception capability in self-driving cars: the power for the police to send a signal to your car to force it to pull over. This will have all the problems of the Trolley Problem and more: an in-built capability to drive a car in a way that its passengers object to is a gift to any crook, murderer or rapist who can successfully impersonate a law enforcement officer to the vehicle – not to mention the use of such a facility by the police of governments we view as illegitimate – say, Bashar al-Assad’s secret police, or the self-appointed police officers in Isis-controlled territories.

That’s the thorny Trolley Problem, and it gets thornier: the major attraction of autonomous vehicles for city planners is the possibility that they’ll reduce the number of cars on the road, by changing the norm from private ownership to a kind of driverless Uber. Uber can even be seen as a dry-run for autonomous, ever-circling, point-to-point fleet vehicles in which humans stand in for the robots to come – just as globalism and competition paved the way for exploitative overseas labour arrangements that in turn led to greater automation and the elimination of workers from many industrial processes.

If Uber is a morally ambiguous proposition now, while it’s in the business of exploiting its human workforce, that ambiguity will not vanish when the workers go. Your relationship to a car you ride in but do not own makes all of the problems above even harder. You won’t have the right to change (or even monitor, or certify) the software in an Autonom-uber. It will be designed to let a third party (the fleet’s owner) override it. It may have a user override (Tube trains have passenger-operated emergency brakes), possibly mandated by the insurer, but you can just as easily see an insurer prohibiting such a thing altogether.

Forget trolleys: the destiny of self-driving cars will turn on labour relationships, surveillance capabilities, and the distribution of capital wealth.
