Posted on April 2, 2019 at 10:36 AM
A cybersecurity research team from China has been able to trick Tesla's Autopilot AI using nothing but small stickers.
Keen Labs, the famed Chinese cybersecurity research group, has written a white paper that delves into security flaws found in Tesla's cars. The trick that has been catching the most headlines is one in which they used small, white stickers to simulate a lane. With this trick, they forced the AI's lane detection to steer the car into a different lane than the one it was supposed to be in.
Among the other exploits they managed to find are remote steering control, remote activation of the windshield wipers, and further lane-detection flaws. While many of these may sound scary, closer analysis paints a rather more benign picture.
Little stickers causing problems
The researchers first tried to trick the car (a Tesla Model S) by altering lane markings directly, applying a sizable number of patches to the test line in order to blur it. However, this looked far too conspicuous: in a real-life scenario, the blurred lines and large number of patches would immediately be a red flag to other people, and the alteration would be far too difficult to carry out in the first place.
They then hit on the idea of using small stickers to simulate a lane. This fake-lane strategy came about when someone on the team noticed something strange: the autopilot would detect a lane when just three relatively small, inconspicuous squares were placed on the road.
They proceeded by placing these small, innocuous squares at an intersection they had built specifically for that purpose. They believed they could trick the car's system into treating the patches as an extension of the right lane. They were proven correct: the car followed the fake markings into the real left lane.
“Our experiments proved that this architecture has security risks and reverse-lane recognition is one of the necessary functions for autonomous driving in non-closed roads,” the researchers wrote in the white paper. They argue that the scenario they built should never have unfolded: the vehicle should recognize that the “fake lane” points into a reverse lane, and there should be safeguards against this to help avoid potentially deadly accidents.
Gamepad steering and windscreen hacks not as bad as feared
The team then demonstrated the ability to remotely commandeer the steering. The Keen researchers broke through the security layers of the car's internal network and, once inside, used a gamepad to control the car.
Though this may sound like a nightmare worst-case scenario to many people who are skeptical of autonomous driving, the exploit came with many caveats. The largest was that the attack did not work once the car was switched from autopilot to manual control. In autopilot mode, however, the exploit worked, to quote the paper, “without limitations”.
Tesla responded to an inquiry from Forbes, saying that they had found and fixed the steering wheel exploit before Keen Labs discovered it. They say it has been patched out and that there is no way it can be used against drivers of their cars.
The remote control of the windscreen wipers was a minor problem that even Keen said would be extremely difficult to pull off in a real-world scenario.
Tesla further noted that all the other “hacks” were physically limited and that drivers should always be ready to take back control of their cars using the steering wheel and brakes.