Can you trust the personal Internet of Things?
Some of us are really excited about a world of human-implantable Internet of Things (IoT). I’m not keen on it. You see, a few years back, in the TV series Homeland, the US Vice President was assassinated by a terrorist who hacked into his heart pacemaker.
Could that really happen? Yes.
Fatal security problems
In 2017, MedSec, a medical technology security company, found that Abbott Laboratories’ St Jude Medical defibrillators and pacemakers could be remotely attacked by hackers. At about the same time, Johnson & Johnson admitted one of its insulin pumps had a security vulnerability that could be exploited to overdose diabetics with insulin. Since then, these Implantable Medical Devices (IMDs) have been patched. But who knows how many other potentially fatal security problems may lie hidden within medical devices?
Actually, Karen M. Sandler, executive director of the Software Freedom Conservancy, has a good idea of how many: Too many. As she explained, “All software has bugs and all software is vulnerable.” We know that. But did you know that, according to the Software Engineering Institute, there’s roughly one bug for every 100 lines of software? And did you know that the pacemaker in your chest runs about 70,000 lines of code? Scary, isn’t it?
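Put those two figures together and the scale of the problem becomes concrete. Here’s a minimal back-of-the-envelope sketch of that arithmetic (the defect rate and line count are the rough estimates cited above, not measured values for any particular device):

```python
# Rough estimate based on the figures cited in this article:
# ~1 bug per 100 lines of code (Software Engineering Institute estimate)
# ~70,000 lines of code in a typical pacemaker

BUGS_PER_LINE = 1 / 100    # estimated defect density
PACEMAKER_LOC = 70_000     # estimated lines of code in the device

expected_bugs = PACEMAKER_LOC * BUGS_PER_LINE
print(f"Estimated latent bugs: {expected_bugs:.0f}")
```

That works out to an estimated 700 latent bugs in a device screwed into someone’s chest. Even if the real defect rate were ten times lower, that would still leave dozens of bugs no outsider is allowed to look for.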
But, as Sandler pointed out, “free and open software tends to be better and safer over time.” Unfortunately, all IMD software is proprietary.
What does it run?
Sandler, aka the cyborg lawyer, is close to this problem. You see, she has an enlarged heart from a condition called hypertrophic cardiomyopathy. This means she could suddenly die at any moment. But, thanks to a pacemaker/defibrillator, she should be OK. When she first saw one, her question to her doctor, who had implanted thousands of these devices, was: “What does it run?”
The doctor, of course, didn’t have a clue. He wasn’t even sure it had software in it. Next, the company representative came in, and he didn’t know either. But he assured her that “these devices are very, very safe and fully tested.” To make a long story short, she found that medical professionals hadn’t even thought about software issues and that IMD vendors wouldn’t talk about their software.
Don’t think anyone is checking up on IMD software outside the vendors. They’re not. The Food and Drug Administration (FDA) doesn’t review IMD source code, nor does it keep a repository of source code. You have to trust your device vendor, which Sandler compared to having a cat guard a fish store.
A black mystery box
Sandler’s OK with having a device in her body — after all, it’s keeping her alive. But she’s not comfortable “with the idea of having proprietary software literally screwed into” her heart.
Think about it. How would you feel about having a black mystery box in you? I know I’d hate it.
As Sandler explained, these medical “devices are the worst of both worlds. They have closed and proprietary software on them that no one can review, and at the same time, they are broadcasting remotely without any real security.”
Sandler noted that you can’t turn off most IMD defibrillator wireless functionality. The same is true of most personal IoT devices.
Sandler explained that it’s important to have a “right to not broadcast or be connected.” She said, “One of the main points is that we cannot really consent to something we have no viable alternative to.” This is a real worry, because a network connection with unknown security leaves your device far more vulnerable to attack.
She wants to have the opportunity to examine the code and its algorithms, but with the proprietary software used in her body, she doesn’t have it. And neither does anyone else. Also, as she pointed out, with “IoT software which talks to everything else, often unnecessarily, we are introducing even more vulnerabilities.”
Sandler first went public with her search for IMD source code and safety at the linux.conf.au conference in 2012. Ever since, people have asked her, “Hey, did you ever get your source code?” The answer: No, she hasn’t.
So, for me at least: I’ll get an IMD with proprietary software if I must. But volunteer to have an implantable device with no idea what’s going on in its software, one that could be attacked wirelessly? No thanks.