Autonomous Cars Can Be 'Hacked' Without Altering Computer Code
8 August 2017 - motor1
Researchers found minor changes to road signs can have major consequences.
If you're one of the many motorists a bit leery about self-driving cars and how quickly the technology has progressed, you'll love this story. A report published by researchers from the University of Michigan, University of Washington, Stony Brook University, and the University of California says it's surprisingly easy to trick the electronic brains used by autonomous autos into thinking road signs say something different than they actually do. In one example, a few rectangular stickers placed in clever spots on a stop sign can make some autonomous systems read it as a 45 mph speed limit sign. Yeah, that's not good.
The crux of the report is about Deep Neural Networks – basically the programming used in a wide range of devices and systems where some kind of interpretation is required. Such systems are a cornerstone of autonomous cars, which increasingly use a variety of sensors, cameras, and even lasers to "see" the environment. Based on that input, these systems determine everything from the distance to the vehicle ahead, to whether that funny shape on the side of the road is a tree or an actual person.
At the end of the day, it all comes down to programming and algorithms – things we mere writers and purveyors of driving stories confess to not fully understanding. The researchers on this study, however, do understand such things. They took a closer look at how these systems work, and in doing so they were able to figure out how to fool them with strategically placed stickers on road signs. Specifically, they tweaked right turn signs in such a way that systems thought they were stop signs or added lane signs. The really scary part, however, is that with just a few rectangles stuck on a stop sign in certain spots, they fooled the system into thinking it was a 45 mph speed limit sign.
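For readers curious about the mechanics, here is a conceptual sketch of why small, targeted changes can flip a classifier's answer. This is NOT the researchers' actual method or model – it's an invented toy: a linear "sign classifier" over a flattened image, attacked in the style of the well-known fast gradient sign method, where every pixel is nudged slightly against the gradient of the classification score.

```python
import numpy as np

# Toy example (assumption: a made-up linear classifier, not the study's network).
rng = np.random.default_rng(42)

n_pixels = 64                  # a toy 8x8 "road sign" image, flattened
w = rng.normal(size=n_pixels)  # classifier weights: score > 0 means "stop sign"
x = 0.5 * np.sign(w)           # an image the classifier confidently calls "stop"

def predict(image):
    """Label the image from the sign of the linear score w . image."""
    return "stop" if w @ image > 0 else "speed limit 45"

# Adversary: shift each pixel a small amount against the score's gradient.
# For a linear model, the gradient of (w @ x) with respect to x is just w,
# so the worst-case small perturbation is -epsilon * sign(w). Each per-pixel
# change is modest -- like a few stickers -- but the effect adds up.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # -> stop
print(predict(x_adv))  # -> speed limit 45
```

The takeaway mirrors the report: because the classifier sums tiny contributions from many pixels, an attacker who knows (or approximates) the model can distribute small physical changes across the sign so the contributions all push toward the wrong label.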
In fairness to computers, human drivers certainly make all kinds of mistakes behind the wheel. We won't follow this thread any further, because it's getting dangerously close to becoming a philosophical debate over who faces consequences for the mistakes of a machine. But with automakers like Audi taking big strides in the self-driving realm, we suspect such a conversation will be happening very soon – especially if such self-driving systems can be so easily fooled.