August 12, 2021 By David Bisson 2 min read

There’s something spooky going on. New research from the Ubiquitous System Security Lab, Zhejiang University Security and Privacy Research Group and the University of Michigan found that ‘poltergeist’ (PG) attacks can fool autonomous vehicles in a way that hasn’t been seen before. Take a look at what the researchers found about how this attack works.

Vehicles with a self-driving feature rely on computer-based object detection, which classifies what the cameras see and decides what is an obstacle and what is a normal road condition. Using those decisions, autonomous vehicles act on their own. Poltergeist attackers tamper with those classification results.
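To make that detect-classify-decide loop concrete, here is a minimal, hypothetical sketch. The labels, confidence values and threshold are invented for illustration and don’t reflect any real perception stack; the point is only that the planner trusts the classifier’s output.

```python
# Hypothetical detection output: (label, confidence) pairs from a perception model.
detections = [("car", 0.94), ("lane_marking", 0.88), ("plastic_bag", 0.41)]

OBSTACLES = {"car", "person", "truck", "bus"}

def plan(detections, threshold=0.5):
    # Brake only for confident detections of obstacle classes;
    # everything else is treated as a normal road condition.
    for label, conf in detections:
        if conf >= threshold and label in OBSTACLES:
            return "brake"
    return "proceed"

plan(detections)  # → "brake"
```

Because the decision rests entirely on the classification results, an attacker who can hide, create or relabel detections effectively steers the planner.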

Bombarding Self-Driving Cars With Acoustic Signals

To be specific, the poltergeist attack affects the stabilization of images detected by a vehicle. In their paper, the researchers noted this isn’t the same as past studies in which people showed the security risks of self-driving cars by targeting the main image sensors, such as the complementary metal-oxide semiconductor (CMOS) sensor. Instead, they singled out inertial sensors, which provide the image stabilizer with motion feedback it can use to reduce blur.
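The stabilizer’s role can be pictured as a counter-shift driven by the inertial readings. The sketch below is a deliberately simplified, hypothetical model (real stabilizers operate optically or on richer sensor fusion): it just shifts the frame opposite to whatever motion the gyroscope reports.

```python
import numpy as np

def stabilize(frame: np.ndarray, gyro_shift: tuple) -> np.ndarray:
    """Counter-shift the frame by the motion the inertial sensor reported.

    `gyro_shift` is the (rows, cols) displacement the gyroscope attributes
    to camera shake; the stabilizer shifts the image the opposite way.
    """
    dy, dx = gyro_shift
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

frame = np.arange(25).reshape(5, 5)
steady = stabilize(frame, gyro_shift=(1, 0))  # camera shook one row down
```

The stabilizer has no way to verify the gyroscope’s report, which is exactly the trust relationship the poltergeist attack abuses.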

The researchers designed their PG attack to target those inertial sensors with resonant acoustic signals. In doing so, they found that someone could gain control of the stabilizer. From there, the attacker could then perform one of the following three types of attacks:

  • Hiding Attacks: A threat actor could make a detected object, such as the rear of a car, disappear.
  • Creating Attacks: Someone could fool the computer detection systems into detecting an object that isn’t really there.
  • Altering Attacks: An attacker could cause the computer detection systems to classify one object as another.
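All three outcomes stem from the same mechanism: the resonant acoustic signal makes the inertial sensor report motion that never happened, so the stabilizer “compensates” and shifts or blurs the frame the detector sees. The toy simulation below (invented values, building on the simplified counter-shift model of stabilization) shows a single bright “object” pixel being displaced by an injected gyro reading.

```python
import numpy as np

def stabilize(frame, gyro_shift):
    # Stabilizer trusts the gyro and counter-shifts the frame.
    dy, dx = gyro_shift
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

frame = np.zeros((5, 5))
frame[2, 2] = 1.0                 # one bright "object" pixel

true_motion = (0, 0)              # the camera is actually steady
injected = (0, 3)                 # resonance makes the gyro report motion

clean = stabilize(frame, true_motion)    # object stays at (2, 2)
attacked = stabilize(frame, injected)    # object displaced to (2, 4)
```

A detector looking at `attacked` sees the object somewhere it isn’t, which is the seed of the hiding, creating and altering effects described above.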

In testing those attacks, the researchers saw a 100% success rate for hiding attacks against people, cars, trucks, buses, traffic lights and stop signs. The success of the other two attack types varied depending on which objects were involved and the extent to which they were targeted.

Researchers Leading Vehicle Hacking

Fooling object detection systems is just one of the types of attacks threat actors could use to prey upon self-driving vehicles. Others include using beams of light and adversarial machine learning to tamper with the vehicles’ decisions and/or performance.

Back in 2018, for instance, a hacker found that a threat actor could embed a custom piece of hardware into a self-driving vehicle. Then, they could use it to control almost any component of the car, including the brakes and speed.

In February 2020, another group of hackers made one type of autonomous vehicle speed up to 85 mph in a 35 mph zone.

Toward Better Cybersecurity in Autonomous Vehicles

The researchers working on the PG problem also offered some solutions. Vehicle makers who offer a self-driving feature should build in safeguards, such as using a microphone to detect acoustic injection attacks. They can also add adversarial training to their object detection algorithms.
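The microphone defense could work because acoustic injection drives the inertial sensor at a resonant frequency, producing a strong narrowband tone that a co-located microphone can look for. Below is a hedged sketch of that idea: the function name, frequency band and detection ratio are illustrative choices, not values from the paper.

```python
import numpy as np

def has_resonant_tone(mic_samples, sample_rate, band=(15_000, 30_000), ratio=10.0):
    """Flag a suspiciously strong narrowband tone in a high-frequency band.

    Acoustic injection drives inertial sensors at resonance, so a sharp
    spectral peak well above the noise floor is a warning sign. The band
    and ratio here are illustrative, not tuned values.
    """
    spectrum = np.abs(np.fft.rfft(mic_samples))
    freqs = np.fft.rfftfreq(len(mic_samples), d=1 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any():
        return False
    peak = spectrum[in_band].max()
    baseline = np.median(spectrum) + 1e-12
    return bool(peak > ratio * baseline)

# A 20 kHz attack tone buried in noise should trip the detector.
np.random.seed(0)
rate = 96_000
t = np.arange(rate // 10) / rate
signal = 0.1 * np.random.randn(t.size) + np.sin(2 * np.pi * 20_000 * t)
```

Running `has_resonant_tone(signal, rate)` on the synthetic attack tone returns `True`, while pure noise does not trip it; a production defense would need calibrated thresholds per sensor and vehicle.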

In addition, autonomous vehicle manufacturers should ensure that third-party providers and others along their supply chains follow security best practices. This could keep malicious actors out of suppliers’ networks, removing the chance for follow-up attacks.

Self-driving cars may seem like a sign of the future, but keeping threat actors from taking control of them is a problem researchers have been working on for years. This new type of attack is just one example of that.
