Effects of Learning-Based Action-Space Attacks on Autonomous Driving Agents
Vehicle cybernation, with its increasing use of information and communication technologies, exposes vehicles to cybersecurity threats. This extended abstract studies action-space attacks on autonomous driving agents that make decisions using either a traditional modular processing pipeline or the recently proposed end-to-end model trained via deep reinforcement learning (DRL). Action-space attacks alter the actuation signal and thus pose direct risks to the vehicle's behavior. We formulate the attack construction as a DRL problem whose input comes from an additionally deployed camera or inertial measurement unit (IMU). The attacks are designed to lurk until a safety-critical moment arises (e.g., lane changing or overtaking), with the goal of causing a side collision upon activation. Our results show that the modular processing pipeline is more resilient than the DRL-based agent, owing to the former's primary focus on trajectory following. We further investigate two robustness-enhancement methods, adversarial training through fine-tuning and progressive neural networks, and gain an essential understanding of their respective pros and cons.
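To make the attack formulation concrete, the sketch below casts the attacker as its own DRL agent in a gymnasium-style environment wrapped around the victim. All names here (the `info` keys, the IMU layout, the reward constants, and the `victim_env`/`victim_policy` placeholders) are illustrative assumptions, not the paper's implementation; the sketch only shows the structure of the MDP: the attacker observes side-channel sensor data and adds a bounded perturbation to the victim's actuation command, with reward shaped to lurk quietly and strike during a safety-critical maneuver.

```python
# Minimal sketch (hypothetical names throughout) of the action-space attack
# as a DRL problem: the attacker perturbs the victim's actuation signal.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ActionSpaceAttackEnv(gym.Env):
    """Attacker-side MDP wrapped around a victim driving agent.

    `victim_env` is any gym-style driving simulator; `victim_policy`
    maps the victim's observation to its nominal actuation command.
    Both are placeholders for whatever stack is under attack.
    """

    def __init__(self, victim_env, victim_policy, max_perturb=0.2):
        self.victim_env = victim_env
        self.victim_policy = victim_policy
        # Attacker observes IMU-like readings: accel (3) + gyro (3).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,))
        # Attacker action: bounded offsets on [steering, throttle].
        self.action_space = spaces.Box(-max_perturb, max_perturb, shape=(2,))

    def reset(self, seed=None, options=None):
        self._victim_obs, info = self.victim_env.reset(seed=seed)
        return self._imu(info), info

    def step(self, perturbation):
        nominal = self.victim_policy(self._victim_obs)
        tampered = np.clip(nominal + perturbation, -1.0, 1.0)
        self._victim_obs, _, term, trunc, info = self.victim_env.step(tampered)
        # Lurk-then-strike reward: a small stealth penalty on perturbation
        # magnitude at all times, plus a large bonus for a side collision
        # during a safety-critical maneuver (e.g., a lane change).
        reward = -0.01 * float(np.linalg.norm(perturbation))
        if info.get("lane_change_active", False):      # assumed info key
            reward += 10.0 * float(info.get("side_collision", False))
        return self._imu(info), reward, term, trunc, info

    def _imu(self, info):
        # Assumed info key exposing the extra sensor channel.
        return np.asarray(info.get("imu", np.zeros(6)), dtype=np.float32)
```

Any standard DRL algorithm can then be trained on `ActionSpaceAttackEnv` to produce the attack policy; the stealth penalty keeps perturbations small until the collision bonus becomes reachable.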
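Of the two enhancement methods, adversarial fine-tuning simply continues training the victim policy in the attacked environment, while progressive neural networks freeze the original policy and train a new column against the attack, connected laterally to the frozen one. The PyTorch sketch below (layer sizes and architecture are illustrative assumptions, not the paper's network) shows the progressive variant, whose appeal is that the clean-driving skill cannot be overwritten during robustification.

```python
# Minimal sketch of a progressive-network column with lateral connections
# from a frozen base column; sizes and depth are hypothetical.
import torch
import torch.nn as nn


class ProgressiveColumn(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64, frozen_column=None):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, act_dim)
        self.frozen = frozen_column
        if self.frozen is not None:
            for p in self.frozen.parameters():
                p.requires_grad_(False)          # keep the old policy intact
            # Lateral adapters mapping the frozen column's hidden
            # features into the new column, layer by layer.
            self.lat1 = nn.Linear(hidden, hidden)
            self.lat2 = nn.Linear(hidden, hidden)

    def forward(self, obs):
        h1 = torch.relu(self.fc1(obs))
        if self.frozen is not None:
            f1 = torch.relu(self.frozen.fc1(obs))
            h1 = h1 + self.lat1(f1)              # lateral connection, layer 1
            f2 = torch.relu(self.frozen.fc2(f1))
            h2 = torch.relu(self.fc2(h1)) + self.lat2(f2)  # layer 2
        else:
            h2 = torch.relu(self.fc2(h1))
        return torch.tanh(self.out(h2))          # bounded actuation command


# Usage: train `base` in the clean environment first, then train only the
# new column (and its lateral adapters) against the learned attack.
base = ProgressiveColumn(obs_dim=32, act_dim=2)
robust = ProgressiveColumn(obs_dim=32, act_dim=2, frozen_column=base)
```

Fine-tuning, by contrast, adapts the single existing network in place: cheaper, but it risks degrading performance in the attack-free setting, which is one axis of the pros-and-cons comparison.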