The Whips of War: Hoopty Hacks in our Driverless Future

By Mark

“Nine-tenths of our crimes an’ calamities are made possible by th’ automobile. It has unleashed all th’ pent-up criminal tendencies o’ th’ ages.” -Kin Hubbard[i]

So we’ve talked about a few of the weird, wild, easy, and totally predictable ways cars can be physically weaponized. Pretty much anybody can slap a battering ram or some armor on a wheeled or tracked vehicle. This isn’t a problem that’s going to be solved by banning stuff because A) it’s ridiculously easy to do, and B) most of the materials can’t be banned.

In lofty theoretic terms, this is all just consistent with an increasing devolution of capacity. The march of technology means more and more people have greater capacity to do stuff, including destructive stuff. Throwing some extra metal on a perfectly good stock automobile, or pickup, or cargo truck, or bulldozer, or whatever… is not the sort of thing you can stop by putting more restrictions on metal fabrication or automobiles. The real question is how invasive do we want our intelligence and law enforcement communities to become in order to stop a scattered number of attacks so rudimentary that literally damn near anyone could do them? A world where nobody can pull off the simplest and most decentralized attack is a world so heavily surveilled we might not recognize it.

But things get weirder when we consider the security implications of all these newfangled systems meant to improve the safety and efficiency of our beloved automobiles.

There would certainly be some major benefits to self-driving cars. Proponents say safety would improve (the vast majority of accidents are, of course, a result of human error), they'll increase everyone's productivity, reduce traffic congestion, enable autonomous delivery vehicles (which I'm sure the Teamsters will have absolutely no problem with…), save energy, go faster than humans can safely drive, etc. If you ask the true believers, it's gonna be all puppies and sunshine once we finally get humans out of the driving business, and the engineering issues are already practically solved!

Of course, there could also be some drawbacks. It’s not clear how driverless vehicles are going to pan out for traffic and parking congestion. On one hand, traffic flow should be much more efficient. On the other hand, we don’t know what happens when half the vehicles on the road are empty but driving anyway. For example, one of the great benefits of a self-driving car is that it could drop you off at your destination and then go park itself, or just head home and come back to pick you up later. Lots of people will want to send their car to circle the block while they do errands, especially if the nearest autonomous-vehicle-hibernation-facility is far enough away that the human would have to wait for pickup. This might save lots of urban parking space, but could actually cause autonomous vehicles to take more trips than human drivers already take, thereby increasing congestion. Of course, the primary response to this criticism is that most people won’t actually own the cars anymore at that point, they’ll just call shared vehicle services. However, without some social engineering to force carpooling, the number of trips probably wouldn’t decrease, and forced ride-sharing might undermine much of the convenience, freedom, and privacy that owning a car on an individual basis provides people today. It’s a real question for our expanding and urbanizing global population. Anyway, the all-knowing (human) experts are on it, and we’ll see what they come up with.

We can be pretty confident about one major prediction: if the engineering and programming are good, all evidence is that driverless cars would indeed be safer than human operators. However, let's bear in mind that "average" human performance is a race to the bottom, dragged down by the population of teenagers and seniors who often don't drive too good, not to mention that half of everyone else is actively texting, sexting, cosmetic-ing, sexing, eating, watching TV, drunk, or road-ragin' at any given time. In America, most people never have to retake the driving exam after the age of 16. Hell, a bunch of states don't even require driver's ed.

Basically, lots of people already act like the car drives itself, and the main road safety strategy of the last few decades has been to treat driver performance as a lost cause and improve engineering instead. It’s not like we expect people to take driving seriously. If we did, maybe we wouldn’t be in such a hurry to see the safety benefits of getting humans out of the formula.


Ever have the feeling all this self-driving stuff wouldn’t seem so necessary if people actually drove their cars? Me neither. (Source)

So yeah, statistically they'd be safer. However, they wouldn't be perfect, and this introduces its own questions that you've probably read about somewhere else but I'll briefly rehash. First, when a self-driving vehicle crashes, how will police and insurers assign responsibility? Traffic is an extremely complex system. Some experts are skeptical about introducing all-driverless systems anytime soon because city driving (which is most of the driving people actually do) is "a hundred or thousand times more complicated than driving on the interstate." Engineers are smart enough to program vehicles that commit far fewer of the routine screw-ups humans do (especially in relatively low-intensity highway driving), but these vehicles are still designed and programmed by humans, and only operate within parameters the programmers thought of. A couple of autonomous vehicle crashes have been the vehicle's fault, and a few have been blamed on human error in the other vehicle, such as a human-piloted vehicle running a red light. However, a very defensive human driver might actually anticipate another vehicle's human error better than an autonomous program would. Hence, the best way to ensure maximum safety is to mandate that all vehicles be self-driving (with no human operator override, since the human's instincts in an emergency might be wrong), and they must be networked, constantly GPS-tracked, and operating basically as a real-time constantly-communicating self-collective. Fahrvergnügen, folks!

There are also some weird ethical implications. Without driver control, it’s up to the vehicle to decide how to engage in emergency maneuvers. This brings us a post-industrial reboot of the trolley problem. What should your car do if it’s got a nanosecond to choose between killing you or killing somebody else? If you grew up driving in North American deer country your instincts ought to be pretty straightforward: swerving is for suckers, and better Bambi than me. Now, for the moment let’s bypass the obvious issue that a car willing to swerve off road and kill its passenger instead of striking an obstruction had better be able to differentiate between a human being, an animal big enough to destroy the car, an animal small enough that the car should just hit it, a human child the size of a small animal, a rock that’ll disable the car if run over, and a damned plastic shopping bag. Ok, bypassed.

Assuming none of these practical issues apply, there’s still the ethical question of who the car should kill: the passenger or the person/people in the road. Most people (76% in one survey) say, hypothetically, that if the car is faced with a choice between killing one person in the car and 10 people outside the car, then regrettably, it’s just that particular passenger’s time to go. However, ask the same people whether they’d personally want a car that would sacrifice them for the greater good, and they reverse themselves faster than a self-parking Volvo. Mercedes apparently figured this would be a problem, and has already started putting together systems that would prioritize passengers over pedestrians.

Now, lest you think we're total Luddites over here at MakingCrimes: not true, we swear. It's just that there are ethical, practical, marketability, and security differences between systems that help humans drive better, versus systems that supplant the human entirely. Thing is, we'd barely begun to reap the benefits of the former before some technologists began urging us all to unquestioningly accept the latter.

For example, antilock braking modulates braking outputs to prevent traction loss much more effectively than any human operator. Electronic stability control applies braking to each individual wheel to maximize traction. Lane departure, backup, and collision warning systems alert drivers to problems with more time to take action. These “Driver Assistance Systems” are great. According to the Insurance Institute for Highway Safety, stability control alone would prevent at least 10,000 fatal accidents in the US per year if all vehicles were equipped with it. Driver assistance systems are an improvement. They make driving safer, but they don’t really become a fundamentally different thing until the technology is making the major executive decisions. So, instead of antilock braking systems to help the human maintain control, we’ve got automatic braking systems that override human control. Instead of lane departure warning systems, it’s lane assist systems that step in to keep the car in the intended lane. (Or, at least, whatever the software thinks is the intended lane…)

Even before we go full driverless, these systems are laying the groundwork, and they differ from their forerunners in a crucial way: they automate major vehicle functions. Now the car isn’t just designed to help you drive better, it’s designed to drive better than you. But to do this, it needs control. And for the computer to physically control the vehicle, it needs to have its own physical input between your physical inputs and the car’s physical outputs. It needs drive-by-wire, and for systems that automatically maneuver the car, the computer’s input needs to be able to override human input.

This happens through the "controller area network," or CAN bus. The CAN bus networks the vehicle's electronic control units (ECUs), the embedded computers that physically actuate its major components. In most modern vehicles, the CAN bus already controls some physical systems that substantially impact vehicle performance or behavior, like the transmission, fuel system, braking, and throttle. If your car has a fancy self-parking or "lane assist" feature, that means the steering can probably be controlled through the CAN bus too. And of course, a CAN bus can be hacked.
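Part of why the CAN bus is so hackable is the protocol itself: frames carry an arbitration ID and a small payload, but no sender authentication whatsoever, so any node on the bus can claim to be any other node. Here's a minimal sketch of that property (the IDs are made up for illustration; real ID maps vary by manufacturer):

```python
# Minimal sketch of why CAN is easy to spoof: a classic CAN frame is just
# an arbitration ID plus up to 8 data bytes, with no sender authentication.
# Any node can transmit any ID, and the lowest ID wins bus arbitration.
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int   # 11-bit standard ID (0x000-0x7FF)
    data: bytes           # 0-8 payload bytes

    def __post_init__(self):
        if not 0 <= self.arbitration_id <= 0x7FF:
            raise ValueError("standard CAN IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN payloads are at most 8 bytes")

def arbitrate(frames):
    """When several nodes transmit at once, the lowest ID wins the bus."""
    return min(frames, key=lambda f: f.arbitration_id)

# A legitimate ECU and an attacker both transmit with the same (hypothetical)
# throttle-controller ID; the bus has no way to tell them apart.
legit = CanFrame(0x0C0, bytes([0x10, 0x00]))
spoof = CanFrame(0x0C0, bytes([0xFF, 0xFF]))   # forged "full throttle"
urgent = CanFrame(0x010, bytes([0x01]))        # lower ID preempts both

assert arbitrate([legit, spoof, urgent]).arbitration_id == 0x010
```

The point of the sketch: the protocol's only notion of priority is the ID number, and nothing stops a compromised infotainment unit from transmitting frames with a drive-critical ID.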

Car hacking was already happening years before cars were able to control themselves. OnStar and other GPS, remote-assistance, keyless entry and ignition, and Bluetooth systems can be hacked to physically unlock the vehicle or remotely surveil the driver, but that doesn't necessarily involve physical control over driving functions. Nowadays, hacking the CAN bus can actually enable physical hacks: if you can get into the CAN bus, you can probably disable the car in motion, control the brakes, or (in some cases) the steering. All kinds of stuff. The FBI has already expressed some concerns. As cars become more internet-connected (which we know pretty much every single self-driving car will be…), and as they become more interfaced with smartphones and smartphone apps, the attack surface may only broaden. There are cars that can be hacked by simply calling their built-in cell phone connections, jumping on their Bluetooth, or tricking the driver into playing a "song of death" on a compact disc, which somehow hacks the internet-facing stereo system to hook up with the hacker's computer. The FBI says we should protect ourselves by keeping an eye out for recalls, ensuring our software is up to date, and using discretion when connecting 3rd party devices or apps to our vehicles. Fine advice, but it's exactly what people won't be doing. Amazingly, 62% of American consumers believe internet-connected cars will be hacked, but 42% (60% of millennials) still want cars to be more connected anyway. Apparently, "…a mere 13% said they would not use an app if it increased the risk of their car getting hacked."


More screens = better always! (Source)

If we want to be sure the vehicle software will be updated (which will be necessary to protect many internet-connected vehicles, and absolutely essential for fully-autonomous vehicles in the future), the best way is to have the car perform its own automatic updating through a wireless connection. Great, except if there's any interface between the CAN bus and the vehicle's internet-connected "In-Vehicle Infotainment (IVI)" bullshit, or to connected 3rd party smart phone apps, the vehicle could be remotely hackable. Having the car auto-update the software is necessary to maintain safety and security, but also might be an attack vector even without the added risks of interfacing with tons of 3rd party applications.

Carmakers are increasingly tuned in to the problem, and a straightforward solution is to firewall or air-gap the CAN bus (aka, the "drivey" systems) from the infotainment (aka, the bullshit) systems, so never the twain shall meet. On older cars developed after they had CAN buses, but before cars had Bluetooth or internet connections, that's basically how it always worked. These cars can still be hacked, but you need physical access through the vehicle's on-board diagnostics (OBD) port. Doesn't mean you can't do some harm though: car thieves have been gaining physical access and boosting cars by spoofing or stealing cryptographic RFID keyless entry fobs for years.[ii] If you can physically unlock the car, then you can probably plug in through the OBD port to alter the programming. Some clever white hat hackers have even designed a car-hacking, I mean, auditing tool that hooks into the CAN bus and analyzes traffic to see what systems are vulnerable to CAN hacks. The tool requires physical access, but stuff like this can also presumably tell a knowledgeable person about vulnerabilities between the CAN and internet-facing sub-systems or connected apps.
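The passive half of an auditing tool like that is conceptually simple: record which arbitration IDs show up during normal driving, then flag traffic that doesn't fit the baseline, since injected frames often use IDs (like diagnostic requests) that never appear in a clean capture. A toy sketch with synthetic data:

```python
# Sketch of the passive half of a CAN auditing tool: build a baseline of
# arbitration IDs seen during normal driving, then flag frames whose ID
# never appeared in that baseline (a common sign of injected traffic).
from collections import Counter

def build_baseline(frames):
    """frames: iterable of (arbitration_id, data) tuples from a clean capture."""
    return Counter(fid for fid, _ in frames)

def flag_anomalies(frames, baseline):
    """Return frames whose ID was never seen in the clean capture."""
    return [(fid, data) for fid, data in frames if fid not in baseline]

# Synthetic "clean" capture; the IDs are illustrative, not real mappings.
clean_capture = [(0x0C0, b"\x10"), (0x1A0, b"\x00\x7f"), (0x0C0, b"\x11")]
baseline = build_baseline(clean_capture)

# Live traffic containing an OBD-II-style query (0x7DF) never seen before.
live = [(0x0C0, b"\x12"), (0x7DF, b"\x02\x01\x00")]
assert flag_anomalies(live, baseline) == [(0x7DF, b"\x02\x01\x00")]
```

A real tool would also profile message rates and payload ranges per ID, since an attacker can just reuse a known-good ID, but the principle is the same.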

Now, the best way to design against remote vehicle hacking is to separate the CAN bus from any kind of external network, including the damned internet. Carmakers are increasingly trying to do this, and it really shouldn’t be too hard to compartmentalize functions because there is not much reason the thing that streams Peppa Pig videos for the kids needs to be tied into the same system that determines whether the vehicle will stop gently or Thelma-and-Louise the family unit into the nearest gulch. On the other hand, the compartmentalized design that can protect against remote vehicle hacks is also the design governments will be increasingly tempted to mandate against. Why? Back doors, of course!
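In practice, full physical separation usually gives way to a gateway ECU sitting between the two networks, forwarding only a short whitelist of harmless, read-only diagnostic IDs and dropping everything else. A sketch of that policy (the IDs are illustrative, not any manufacturer's actual map):

```python
# Sketch of the "compartmentalized" design: a gateway between the
# infotainment network and the drive-critical CAN bus that forwards only
# a whitelist of read-only diagnostic IDs and drops everything else.
READ_ONLY_IDS = {0x7DF, 0x7E8}   # e.g. an OBD-style query/response pair

def gateway_filter(frame_id: int, from_infotainment: bool) -> bool:
    """Return True if the frame may cross onto the drive-critical bus."""
    if not from_infotainment:
        return True                      # CAN-side traffic flows normally
    return frame_id in READ_ONLY_IDS     # everything else is dropped

assert gateway_filter(0x7DF, from_infotainment=True)        # diagnostics pass
assert not gateway_filter(0x0C0, from_infotainment=True)    # throttle blocked
```

Note the asymmetry: the whole point is that nothing originating on the entertainment side can reach braking, throttle, or steering. A mandated police-disable backdoor is, by definition, a hole punched through exactly this filter.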

As Cory Doctorow points out, a world of mandated surveillance back doors is a world of 3rd party vehicle hacks and overrides, and a world of car manufacturer “DRM” is a world of purposefully “jailbroken” vehicles intentionally hacked to do stuff outside their original programming. It may be a while until this really comes to fruition, but it ain’t just paranoia-porn. OnStar already has remote-disabling capability. Usually it’s used to disable a stolen vehicle, but once in a while law enforcement kindly asks the good people at OnStar to use it for other purposes too. The EU is quietly discussing standards to mandate backdoors in all new vehicles that would enable remote disabling by police. These systems don’t currently hack the brakes or steering, but they are able to remotely turn off components in the fuel system, temporarily “bricking” the car so you’ll be nice and easy to catch up with.

Problem is, these are CAN bus hacks. If police want to be able to remotely disable any vehicle, that means a hackable interface between external networks and the CAN bus must be integral to the design of every car. It’s just like everything else these days. Police want to be able to hack your car without somehow leaving the door open for private hackers, but that’s probably impossible. The only way to guarantee that private hackers can’t hack your car is to design it so the police can’t hack it either.

Besides, even if the design is pretty secure, private hackers actually develop a lot of the hacking tools police use, and these tools have a way of leaking out into the wild even if they're not supposed to. At this point, it's hard to believe that governments will be able to monopolize vehicle backdoors without any abuses or leaks to malicious actors. If we're talking about vulnerabilities to nation-state cyber attack, it gets worse. Let's say there's a national, or EU-wide, or global mandate that all cars have the same kind of back door so the police can disable any car on demand. You think that back door is staying a secret? If the Chinese, or Russians, or North Koreans, or whoever, can find a way to hack this system, they can stall every car in NYC simultaneously during rush hour. What about logic bombs? If a software vendor insinuates malicious code into the programming of a major manufacturer's vehicles, or perhaps into the hardware used in many brands of vehicle, millions of cars can be pre-programmed to go screwy at the same time regardless of whether they're hackable through wireless networks.

There's smaller-scale stuff too, with plenty of security implications of its own. Malicious surveillance is an obvious one that's already been done. Stalling or crashing a car with the target in it is also obvious. Blackmail and ransomware will probably be a thing: too bad you didn't update your in-vehicle Netflix buddy. But tell you what, for a reasonable price in cyber-shekels we can get your truck going again… Or, gee looks like you've been going some interesting places at some interesting times with a passenger not enrolled in your biometric systems, eh friend? Looks like the rear passenger compartment scanner logged some pretty high heart rates and an unusual increase in cabin humidity the other night. Might be hard to explain that to the wife, eh? Tell ya what, for a reasonable price in digi-doubloons you won't have to…

Assassinations might get weird when all the attacker has to do is hack into the target's vehicle and run it off a cliff or at high speed into a fixed obstacle. When we get to the point that virtually all major vehicle functions are physically controlled through the software, remote operation may revolutionize suicide bombings and vehicle ram attacks. It's a lot more convenient to run a vehicle at a target when you don't have to convince some sucker to do it for you. Worse, remote control means the possibility of coordinated remote-operated or fully-autonomous vehicle attacks. It's bad enough that there are folks out there who will run a second or third suicide truck at rescuers after the first one goes off. Remote operation might make that easier: they could be timed like clockwork and programmed to run autonomously without need for a live connection to an operator (and hence, couldn't be jammed). In an unarmored vehicle, the weak point is often going to be the driver: we're soft and not too difficult to disable. Basically, first responders would have to plan to erect vehicle barriers at every major terrorism scene due to the threat of endless follow-on autonomous vehicle attacks. Standoff weapons might have to get more serious in some locales simply because now you've got to focus on disabling (potentially hardened) autonomous attack vehicles.

This would be possible not only because vehicles could be remotely hacked, but also because they could be "jailbroken." Even if the systems in all vehicles were fortified against remote hacking, it'd probably be more difficult to prevent hacking through physical access or DIY jerry-rigging. So perhaps massive remote hacking of vehicles can be prevented, but owned, rented, and stolen vehicles might still be hacked on a smaller scale. Lots of users would probably start by figuring out how to disable the systems that police use to remotely track or disable vehicles. Or, they could hack systems to enable their vehicles to do stuff not normally permitted by the programming.

Is this for real? It could be. iPhone jailbreaker extraordinaire George Hotz recently developed a DIY self-driving car retrofit that costs about $1,000. It plugs into the CAN bus and acts like Tesla's semi-autonomous Autopilot system: when turned on it can steer to keep the car in the lane, accelerate, or brake without need for the driver's input. His company originally planned to market the system commercially, but the National Highway Traffic Safety Administration sent them a warning letter that they'd need official testing and approval before marketing it. Rather than spend the money on getting approval as a commercial product, Hotz just released the code open source. Now, if you're handy with a 3D printer, possess a OnePlus 3 Android smartphone to run the software and provide the road camera, and you're pretty serious about high-tech car tinkering, you can try and build it yourself for free. It's probably illegal to actually use it on public roads, but you can build it. Because the software is open source, you can tinker with its behavior too.

But let’s say designers protect against every possible vehicle hack or DIY set up. We’ve still got the security implications of humans manipulating patterns in autonomous vehicle behavior. If all vehicles automatically brake and remain at full stop when faced with a road obstruction, a couple jackasses might be able to completely shut down the 405 for hours just by standing in the road.[iii] (Or, how about just throwing a few bags of warm meat out of your self-driving car in heavy traffic?) How will all these autonomous vehicles be networked to yield to the cops and firefighters that need to get through the gridlock and deal with obstructions? What happens when protesters or random douchebags realize they can make cars crash into a wall or drive off a bridge, or whatever, just by jumping in front of them or spoofing them to make them think a bunch of humans have jumped in front of them?

And sure, we can make software that avoids obstructions, but can we make the software recognize hostile threats and act accordingly without human input? Will kidnap and ransom jobs get easier when attackers realize they can stop any car by simply standing in the road? How about when kidnappers realize they can hack vehicles to have their victims delivered right to them? The term "drive-by" may take on a different connotation when the target's car dutifully halts to avoid hitting the attacker calmly standing there with a shotgun or incendiary device. What about the old Colombian how-do-ya-do (aka, 2 guys + 1 motorbike + 1 submachine gun)? Either way, something tells me the good safety engineering folks will get together with the omni-benevolent folks in the legal department to include a "lock in" feature like they've got at Disney World, so passengers can't exit the vehicle until it's at full stop. Hell, if the car doesn't detect that it's being attacked, the assailant can run off and the car might even proceed to its destination. It's possible nobody would even know an attack happened until the robo-taxi shows up with a dead guy in it.

Cops, diplomatic security, and executive security professionals get training on emergency maneuvering, evasive driving, tactical driving, and counter ambush techniques. Something tells me these are the last folks who are going to accept a computer override. That’s all well and good for the sheepdogs, but what about we defenseless sheep? Are we going to get stuck with autonomous programs that make us easier targets? Are we assuming a driverless future will be crime free? The more paranoid and/or less sheep-like among us may try to hack our own cars just to have some kind of control in threatening situations, and the more wolf-like among us will hack cars to create threatening situations.

So what’s it all mean? In an omni-networked post-industrial world, people can hack cars remotely, make their own open source autonomous vehicle kits, turn vehicles into self-guiding weapons, and manipulate the standardized behaviors of systems meant to keep us safe. The police can try to stay ahead by requiring backdoors, but that may only make the attack surface broader without stopping the people who can hack and DIY themselves some truly “autonomous” vehicles. The experts are smart, but they won’t think of everything. Buckle up.


[i] Writing as his character, "th' Hon. Ex.-Editur Cale Fluhart." Quoted in The American Humorist: Conscience of the Twentieth Century (1964) by Norris W. Yates, p. 107.

[ii] Robert Vamosi, When Gadgets Betray Us (New York: Basic Books, 2011), chapter 1.

[iii] Sub-Question: How long before anyone notices the difference?
