THERE WAS ONCE a time when people distinguished between cyberspace, the digital world of computers and hackers, and the flesh-and-blood reality known as meatspace. Anyone overwhelmed by the hackable perils of cyberspace could unplug and retreat to the reliable, analog world of physical objects.
But today, cheap, radio-connected computers have invaded meatspace. They’re now embedded in everything from our toys to our cars to our bodies. And this year has made clearer than ever before that this Internet of Things introduces all the vulnerabilities of the digital world into our real world.
Security researchers exposed holes in everything from Wi-Fi-enabled Barbie dolls to two-ton Jeep Cherokees. For now, those demonstrations have yet to manifest in real-world malicious hacks, says security entrepreneur Chris Rouland. But Rouland, who once ran the controversial government hacking contractor firm Endgame, has bet his next company, an Internet-of-Things-focused security startup called Bastille, on the risks of hackable digital objects. And he argues that public understanding of those risks is on the rise. “2015 has been the pivotal year when we saw awareness and vulnerability discoveries published about ‘things’,” Rouland says. He’s added a new slogan to his PowerPoint presentations: “Cyber Barbie is now part of the kill chain.”
Here are a few of the hacks that made 2015 the year of insecure internet things:
Security researchers Charlie Miller and Chris Valasek forever altered the automobile industry’s notion of “vehicle safety” in July when they demonstrated for WIRED that they could remotely hack a 2014 Jeep Cherokee to disable its transmission and brakes. Their work led Fiat Chrysler to issue an unprecedented recall for 1.4 million vehicles, mailing out USB drives with a patch for the vulnerable infotainment systems and blocking the attack on the Sprint network that connected its cars and trucks.
That Jeep attack turned out to be only the first in a series of car hacks that rattled the auto industry through the summer. At the DefCon hacker conference in August, Marc Rogers, principal security researcher for CloudFlare, and Kevin Mahaffey, co-founder and CTO of mobile security firm Lookout, revealed a suite of vulnerabilities they found in the Tesla Model S that would have allowed someone to connect their laptop to the car’s network cable behind the driver’s-side dashboard, start the $100,000 vehicle with a software command, and drive off with it—or plant a remote-access Trojan on the car’s internal network to later cut the engine while someone was driving. Other vulnerabilities they found could theoretically have been exploited remotely without physical access to the car, though they didn’t test these. Tesla patched most of the flaws in an over-the-air update delivered directly to vehicles.
Also at DefCon this year, security researcher Samy Kamkar showed off a book-sized device he’d created called OwnStar, which could be planted on a GM vehicle to intercept communications from a driver’s OnStar smartphone app and give the hacker the ability to geolocate the car, unlock it at will, and even turn on its engine. Kamkar soon found that similar tricks worked for BMW and Mercedes-Benz apps, too. Just days later, researchers at the University of California, San Diego showed that they could remotely exploit a small dongle that insurance companies ask users to plug into their dashboards to monitor their car’s speed and acceleration. Through that tiny gadget’s radio, they were able to send commands to a Corvette that disabled its brakes.
All of those high-profile hacks were meant to send a message not only to the automobile industry, but to the consumers and regulators who hold them accountable. “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers,” Miller told WIRED after the Jeep hack. “This might be the kind of software bug most likely to kill someone.”
Hacked cars aren’t the only devices in the Internet of Things that are capable of killing, of course. Critical medical equipment and devices also have software and architecture vulnerabilities that would let malicious actors hijack and control them, with potentially deadly consequences. Just ask the cardiologist for Dick Cheney who, fearing that an attacker could deliver a fatal shock to the former vice president through his pacemaker, disabled the device’s Wi-Fi capability during his time in office. Students at the University of Alabama showed why Cheney’s cardiologist had cause for concern this year when they hacked the pacemaker implanted in an iStan—a robotic dummy patient used to train medical students—and theoretically killed it. “[W]e could speed the heart rate up; we could slow it down,” Mike Jacobs, director of the university’s simulation program told Motherboard. “If it had a defibrillator, which most do, we could have shocked it repeatedly.”
The students based their test on research done in the past by others, but they provided a nice proof-of-concept to show the real-world effect such an attack could have.
Drug infusion pumps—which dole out morphine, chemotherapy, antibiotics, and other drugs to patients—were also in the spotlight this year. Security researcher Billy Rios took a special interest in them after he had a stint in the hospital for emergency surgery. After taking a close look at the ones that were used in his hospital, Rios found serious vulnerabilities in them that would allow a hacker to surreptitiously and remotely change the dose of drugs administered to patients. The pump maker patched some of the vulnerabilities but insisted others weren’t a problem.
The Food and Drug Administration, which oversees the safety approval process for medical equipment, has taken note of the problems found in all of these devices and others and is beginning to take steps to remedy them. The federal agency began working this year with a California doctor to find a way to fix security problems found in insulin pumps specifically. But the remedies they devise for these pumps could serve as a model for securing other medical devices as well.
Unfortunately, many of the problems with medical devices can’t be fixed with a simple software patch; they require the systems to be re-architected. That takes time, which means it could be years before hospitals and patients see more secure devices.
For any given consumer product, there seemed to be at least one company this year that eagerly added Wi-Fi to it. Securing that Wi-Fi, on the other hand, seemed to be a more distant priority.
When Mattel added Wi-Fi connectivity to its Hello Barbie to enable what it described as real-time artificially intelligent conversations, it left the doll’s connection to the Hello Barbie smartphone app open to spoofing and interception of all the audio the doll records. A Samsung “smart fridge,” designed to sync over Wi-Fi with the user’s Google Calendar, failed to validate SSL certificates, leaving users’ Gmail credentials open to theft. Even baby monitors, despite the creepy risk of hackers spying on kids, remain worryingly insecure: A study from the security firm Rapid7 found that all nine of the monitors it tested were relatively easy to hack.
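The fridge’s flaw—skipping certificate validation—is among the most common mistakes in connected devices, and it’s worth seeing how small the difference is in code. The sketch below is hypothetical illustration using Python’s standard `ssl` module, not Samsung’s actual firmware: the “insecure” configuration accepts any certificate a man-in-the-middle presents, while the secure default verifies the certificate chain and hostname.

```python
import ssl

# Insecure configuration: disables certificate and hostname checks,
# so an attacker on the same Wi-Fi network can present a self-signed
# certificate and silently read credentials in transit.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False       # must be disabled before CERT_NONE
insecure.verify_mode = ssl.CERT_NONE  # accept any certificate

# Secure default: verifies the certificate chain against trusted root
# CAs and checks that the certificate matches the server's hostname.
secure = ssl.create_default_context()

print(secure.verify_mode == ssl.CERT_REQUIRED)  # certificate is required
print(secure.check_hostname)                    # hostname is checked
```

In other words, the safe behavior is the library’s default; the vulnerable device had to go out of its way (or copy boilerplate that did) to turn verification off.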
Not even guns have been spared from the risks of hacking. Married hacker couple Runa Sandvik and Michael Auger in July showed WIRED that they could take control of a Wi-Fi-enabled TrackingPoint sniper rifle. Sandvik and Auger exploited the rifle’s insecure Wi-Fi to change variables in the gun’s self-aiming scope system, allowing them to disable the rifle, make it miss its target, or even make it hit a target of their choosing instead of the intended one. “There’s a message here for TrackingPoint and other companies,” Sandvik told WIRED at the time. “When you put technology on items that haven’t had it before, you run into security challenges you haven’t thought about before.” That rule certainly applies to any consumer-focused company thinking of connecting its products to the Internet of Things. But for those whose products can kill—whether a gun, a medical implant, or a car—let’s hope the lesson is taken more seriously in 2016.