Should the government have the right to trawl through your thoughts and memories? That seems like a question for a "Minority Report" or "Matrix" future, but legal precedent is being set today. This is what is really at stake in an emerging tussle between Washington DC and Silicon Valley.
The Internets are all abuzz with Apple's refusal to hack an iPhone belonging to an accused terrorist. The FBI has served a court order on Apple, based on the All Writs Act of 1789, requiring Apple to break the lock that limits the number of times a passcode can be tried. Since law enforcement has been unable to crack the security of iOS on its own, it wants Apple to write special software to do the job. Here is Wired's summary. This NYT story has additional good background. The short version: should law enforcement and intelligence agencies be able to compel corporations to hack devices owned by citizens and entrusted with their sensitive information?
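It is worth making concrete why that retry lock is the whole ballgame. The sketch below is illustrative arithmetic only: the guess rate is an assumption for the example, not Apple's actual firmware parameters, and iOS additionally imposes escalating delays and an optional wipe after ten failures. Remove those protections and a short numeric passcode falls to brute force almost immediately:

```python
# Illustrative arithmetic: why the retry limit is what stands between a
# short numeric passcode and a successful brute-force attack.
# The guess rate below is an assumption for this sketch, not a measured
# or documented figure for any real iPhone.

def brute_force_hours(digits: int, seconds_per_guess: float) -> float:
    """Worst-case hours to try every numeric passcode of a given length."""
    keyspace = 10 ** digits            # e.g. 10,000 combinations for 4 digits
    return keyspace * seconds_per_guess / 3600

# Assume ~1 guess per 80 ms once software-enforced delays are stripped away.
print(f"4-digit passcode: {brute_force_hours(4, 0.08):.1f} hours worst case")
print(f"6-digit passcode: {brute_force_hours(6, 0.08):.1f} hours worst case")
```

With the lock intact, an attacker gets a handful of tries before long delays (or a wipe) kick in; without it, even a six-digit passcode survives only about a day of automated guessing. That asymmetry is the entire value of the software the FBI wants written.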
Apple CEO Tim Cook published a letter saying no, thank you, because weakening the security of iPhones would be bad for his customers and "has implications far beyond the legal case at hand". Read Cook's letter; it is thoughtful. The FBI says it is just about this one phone and "isn't about trying to set a precedent," in the words of FBI Director James Comey. But this language is neither accurate nor wise — and it is important to say so.
Once the software is written, the U.S. government can hardly argue it will never be used again, nor that it will never be stolen off government servers. And since the point of the hack is to be able to push it onto a phone without consent (which is itself a backdoor that needs closing), this software would allow breaking the locks on any susceptible iPhone, anywhere. Many commentators have observed that any effort to hack iOS this once would facilitate repetitions, and any general weakening of smartphone security could easily be exploited by governments or groups less concerned about due process, privacy, or human rights. (And you do have to wonder whether Tim Cook's position here is influenced by his experience as a gay man, a demographic that has been persecuted, if not actually prosecuted, merely for thought and intent by the same organization now sitting on the other side of the table. He knows a thing or two about privacy.) U.S. Senator Ron Wyden has a nice take on these issues. Yet while these are critically important concerns for modern life, they are shortsighted. There is much more at stake here than just one phone, or even the fate of one particular company. The bigger, longer term issue is whether governments should have access to electronic devices that we rely on in daily life, particularly when those devices are becoming extensions of our bodies and brains. Indeed, these devices will soon be integrated into our bodies — and into our brains.
Hacking electronically-networked brains sounds like science fiction. That is largely because there has been so much science fiction produced about neural interfaces, Matrices, and the like. We are used to thinking of such technology as years, or maybe decades, off. But these devices are already a reality, and will only become more sophisticated and prevalent over the coming decades. Policy, as usual, is way behind.
My concern, as usual, is less about the hubbub in the press today and instead about where this all leads in ten years. The security strategy and policy we implement today should be designed for a future in which neural interfaces are commonplace. Unfortunately, today's politicians and law enforcement are happy to set legal precedent that will create massive insecurity in just a few years. We can be sure that any precedent of access to personal electronic devices adopted today, particularly any precedent in which a major corporation is forced to write new software to hack a device, will still be cited decades hence, when technology that connects hardware to our wetware is certain to be common. After all, the FBI is now proposing that a law from 1789 applies perfectly well in 2016, allowing a judge to "conscript Apple into government service", and many of our political representatives appear delighted to concur. A brief tour of current technology and security flaws sets the stage for how bad it is likely to get.
As I suggested a couple of years ago, hospital networks and medical devices are examples of existing critical vulnerabilities. Just in the last week hackers took control of computers and devices in a Los Angeles hospital, and only a few days later received a ransom to restore access and functionality. We will be seeing more of this. The targets are soft, and when attacked they have little choice but to pay when patients' health and lives are on the line. What are hospitals going to do when they are suddenly locked out of all the ventilators or morphine pumps in the ICU? Yes, yes, they should harden their security. But they won't be fully successful, and additional ransom events will inevitably happen. More patients will be exposed to more such flaws as they begin to rely more on medical devices to maintain their health. Now consider where this trend is headed: what sorts of security problems will we create by implanting those medical devices into our bodies?
Already on the market are cochlear implants that are essentially Ethernet connections to the brain, although they are not physically configured that way today. An external circuit converts sound into signals that directly stimulate the auditory nerves. But who holds the password for the hardware? What other sorts of signals can be piped into the auditory nerve? This sort of security concern, in which networked electronics implanted in our bodies create security holes, has actually been with us for more than a decade. When serving as Vice President, Dick Cheney had the wireless networking on his fully-implanted heart defibrillator disabled because it was perceived as a threat. The device contained a test mode that could be exploited to fully discharge the battery into the surrounding tissue. This might be called a fatal flaw. And it will only get worse.
DARPA has already limited the strength of a recently developed, fully articulated bionic arm to "human normal" precisely because the organization is worried about hacking. These prosthetics are networked in order to tune their function and provide diagnostic information. Hacking is inevitable, by users interested in modifications and by miscreants interested in mischief.
Not content to replace damaged limbs, within the last few months DARPA has announced a program to develop what the staff sometimes calls a "cortical modem". DARPA is quite serious about developing a device that will provide direct connections between the internet and the brain. The pieces are coming together quickly. Several years ago a patient in Sweden received a prosthesis grafted to the bone in his arm and controlled by local neural signals. Last summer I saw Gregoire Courtine show video of a monkey implanted with a microfabricated neural bridge that spanned a severed spinal cord; flip a switch on and the monkey could walk, flip it off and the monkey was lame. Just this month came news of an implanted cortical electrode array used to directly control a robot arm. Now, imagine you have something like this implanted in your spine or head, so that you can walk or use an arm, and you find that the manufacturer was careless about security. Oops. You'll have just woken up — unpleasantly — in a William Gibson novel. And you won't be alone. Given the massive medical need, followed closely by the demand for augmentation, we can expect rapid proliferation of these devices and accompanying rapid proliferation of security flaws, even if today they are one-offs. But that is the point; as Gibson has famously observed, "The future is already here — it's just not evenly distributed yet."
When — when — cortical modems become an evenly distributed human augmentation, they will inevitably come with memory and computational power that exceeds the wetware they are attached to. (Otherwise, what would be the point?) They will expand the capacity of all who receive them. They will be used as any technology is, for good and ill. Which means they will be targets of interest by law enforcement and intelligence agencies. Judges will be grappling with this for decades: where does the device stop and the human begin? ("Not guilty by reason of hacking, your honor." "I heard voices in my head.") And these devices will also come with security flaws that will expose the human brain to direct influence from attackers. Some of those flaws will be accidents, bugs, zero-days. But how will we feel about back doors built in to allow governments to pursue criminal or intelligence investigations, back doors that lead directly into our brains? I am profoundly unimpressed by suggestions that any government could responsibly use or look after keys to any such back door.
There are other incredibly interesting questions here, though they all lead to the same place. For example, would neural augmentation count as a medical device? If so, what does the testing look like? If not, who will be responsible for guaranteeing safety and security? And I have to wonder, given the historical leakiness of backdoors, if governments insist on access to these devices, who is going to want to accept the liability inherent in protecting access to customers' brains? What insurance or reinsurance company would issue a policy indemnifying a cortical modem with a known, built-in security flaw? Undoubtedly an insurance policy can be written that exempts governments from responsibility for the consequences of using a backdoor, but how can a government or company guarantee that no one else will exploit the backdoor? Obviously, they can do no such thing. Neural interfaces will have to be protected by maximum security, otherwise manufacturers will never subject themselves to the consequent product liability.
Which brings us back to today, and the precedent set by Apple in refusing to make it easy for the FBI to hack an iPhone. If all this talk of backdoors and golden keys by law enforcement and politicians moves forward to become precedent by default, or is written into law, we risk building security holes into even more devices. Eventually, we will become subject to those security holes in increasingly uncomfortable, personal ways. That is why it is important to support Tim Cook as he defends your brain.