LAS VEGAS—Three years ago at the Black Hat conference, Charlie Miller and Chris Valasek detailed flaws in Chrysler vehicles that led to a recall of 1.4 million cars. The pair have since shifted their focus from offense to defense, detailing ways to help secure autonomous vehicles at the Black Hat USA 2018 event on Aug. 9.
Miller and Valasek now both work for Cruise, GM's autonomous ride-sharing vehicle service, helping engineers build secure systems. Speaking at Black Hat, the researchers said their goal is to get information out in the open for the betterment of the automobile industry.
“We want everyone to be secure,” Miller said.
There are different levels of automation when it comes to autonomous vehicles. Level 1 is a car with some driver-assistance features, where the driver remains responsible for all vehicle actions. Level 2 covers cars that can control steering and speed, such as Tesla's Autopilot feature.
Miller and Valasek said that at Level 3, the vehicle controls all elements of driving and the driver has no obligation other than taking over in the event of unknown conditions. Level 4 automation is where the two researchers have focused their efforts: securing a car that is entirely automated, with no need for a driver. GM's Cruise ride-sharing service is an example of a Level 4 autonomous vehicle, and Waymo and Uber are also developing ride-sharing vehicles with Level 4 automation.
Hardware
Autonomous vehicles are based on existing production cars with added sensor and computing packages. The researchers noted that these cars are very much data centers on wheels, complete with large amounts of computing power.
Miller said there are millions of lines of code in an autonomous vehicle, but from a security perspective the concern is the inputs and outputs: making sure vehicle systems stay in a known good state and cannot be tampered with.
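To make that idea concrete, the following minimal Python sketch is entirely hypothetical, with field names and limits invented for illustration; it shows one way a control system could reject inputs that would push it out of a known good state.

# Hypothetical sketch: validate an inbound actuator command so the control
# system only ever acts on values inside a known good envelope. Field names,
# sources and limits are illustrative, not from any real vehicle platform.

from dataclasses import dataclass


@dataclass(frozen=True)
class SteeringCommand:
    source_id: str      # which module sent the command
    angle_deg: float    # requested steering angle
    rate_deg_s: float   # requested rate of change

# Allowlist of modules permitted to command steering at all.
TRUSTED_SOURCES = {"motion_planner"}

# Known good envelope for the actuator.
MAX_ANGLE_DEG = 35.0
MAX_RATE_DEG_S = 20.0


def validate(cmd: SteeringCommand) -> bool:
    """Reject anything outside the allowlisted source and physical limits."""
    if cmd.source_id not in TRUSTED_SOURCES:
        return False
    if abs(cmd.angle_deg) > MAX_ANGLE_DEG:
        return False
    if abs(cmd.rate_deg_s) > MAX_RATE_DEG_S:
        return False
    return True


if __name__ == "__main__":
    good = SteeringCommand("motion_planner", 5.0, 2.0)
    bad = SteeringCommand("infotainment", 90.0, 50.0)
    print(validate(good))  # True
    print(validate(bad))   # False: untrusted source and out-of-range values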
“It would be a shame if a customer’s info leaks, but it would be a tragedy if steering failed,” Valasek said.
Risks
One risk that media reports have highlighted is the potential to trick an autonomous vehicle's sensors, such as painting over a stop sign so the car fails to recognize it and doesn't stop. According to the two researchers, those types of sensor-tricking attacks won't actually work against Level 4 autonomous vehicles.
Miller explained that autonomous vehicles have multiple types of sensors as well as GPS. The cars also benefit from highly accurate mapping done by the vendors, so they have better location information than they would get by relying on GPS or a single sensor alone.
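The following hypothetical Python sketch illustrates that redundancy idea in its simplest form: two independent position estimates, such as GPS and lidar localization against a prebuilt map, are cross-checked rather than trusted individually. The numbers, tolerance and function names are invented for the example.

# Hypothetical sketch of sensor redundancy: cross-check independent position
# estimates and flag disagreement instead of trusting any single input.

import math


def distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Planar distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def positions_consistent(gps_xy: tuple[float, float],
                         lidar_map_xy: tuple[float, float],
                         tolerance_m: float = 2.0) -> bool:
    """True if the two independent estimates agree within tolerance."""
    return distance_m(gps_xy, lidar_map_xy) <= tolerance_m


if __name__ == "__main__":
    gps = (120.4, 55.1)
    lidar = (120.9, 55.3)   # localized against the prebuilt high-accuracy map
    if positions_consistent(gps, lidar):
        print("Estimates agree; continue")
    else:
        print("Sensor disagreement: fall back to a safe stop")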
Looking Forward
The two researchers want autonomous car makers to focus on the core elements of security that are already well known in enterprise data centers.
The researchers suggested that car makers continuously work to reduce the attack surface by removing any code or connections that are not needed. The use of encryption for data at rest and in motion is another core recommendation from Miller and Valasek. They also advocate using hardware security modules (HSMs) to store encryption keys.
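As a rough illustration of the encryption recommendation, the sketch below encrypts a logged vehicle record with AES-GCM from the open-source cryptography Python package, which is an assumption for the example rather than anything the researchers named. In a real design the key would be generated and held inside an HSM rather than in application memory, and the record format is invented.

# Minimal sketch of encrypting vehicle data at rest, assuming the
# `cryptography` package. The key is kept in a variable purely to keep the
# example self-contained; an HSM would normally create and never export it.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # stand-in for an HSM-held key
aead = AESGCM(key)

record = b'{"speed_mps": 11.2, "heading_deg": 87.4}'
nonce = os.urandom(12)                      # unique per message
ciphertext = aead.encrypt(nonce, record, b"trip-log")

# Only something holding (or able to ask the HSM to use) the key can read it.
assert aead.decrypt(nonce, ciphertext, b"trip-log") == record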
All updates should be signed, as should all communications in the vehicle, to validate authenticity. The researchers also suggest that vendors maintain a clear separation of systems in the car so that less trusted devices cannot talk to more trusted ones. The two researchers have no illusions about making a car unhackable, but they do want to help make cars as secure as possible.
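The signing recommendation can be sketched in a similarly simplified way: a vendor signs a firmware image (or an in-vehicle message) with a private key, and the vehicle verifies it against the matching public key before accepting it. The example below again assumes the cryptography package and uses Ed25519, with key handling simplified for illustration.

# Hedged sketch of signed updates: install an image only if its signature
# verifies against a trusted public key. In practice the private key stays
# with the vendor's signing service and the vehicle ships only the public key.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

firmware_image = b"\x7fELF...update payload..."
signature = signing_key.sign(firmware_image)


def accept_update(image: bytes, sig: bytes) -> bool:
    """Accept the update only if the signature checks out."""
    try:
        verify_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False


print(accept_update(firmware_image, signature))         # True
print(accept_update(firmware_image + b"!", signature))  # False: tampered image

Anything that fails verification is simply rejected, which is the property the researchers want enforced across both software updates and in-vehicle communications.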
“We just want to make hacking an autonomous car so hard, that [cyber-attackers] will just want to go and hack something else,” Miller said.
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.