ARLA: Using Reinforcement Learning to Strengthen DNNs
Published on: 2025-05-20 17:00:29
Deep neural networks (DNNs) will be crucial to future human–machine teams aiming to modernize safety-critical systems. Yet DNNs have at least two key problems: researchers have proposed many defense schemes to counter many attack vectors, yet none has secured DNNs against adversarial examples (AEs); and this vulnerability to AEs renders the role of DNNs in safety-critical systems problematic.
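To make the notion of an adversarial example concrete, here is a brief sketch of the classic Fast Gradient Sign Method (FGSM) of Goodfellow et al. This is not ARLA's technique (ARLA takes a reinforcement learning approach, described below); it is only the textbook way to show how a small pixel perturbation can be crafted against a classifier. The function name, the PyTorch framing, the `model`, `image`, `label`, and `epsilon` parameters, and the [0, 1] pixel range are all illustrative assumptions, not details from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    This is the textbook gradient-based attack (Goodfellow et al.), shown
    only to make the idea of an AE concrete; it is not ARLA's approach.
    Assumes `image` is a batched tensor with pixel values in [0, 1] and
    `label` holds the true class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Perturbations this small are typically imperceptible to a person, yet they are often enough to flip a model's prediction, which is exactly what makes AEs so troubling for safety-critical systems.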
Enter the Adversarial Reinforcement Learning Agent (ARLA), a novel reinforcement learning-based AE attack designed to discover DNN vulnerabilities and generate AEs that exploit them.
ARLA is described in detail in Matthew Akers and Armon Barton’s Computer magazine article, “Forming Adversarial Example Attacks Against Deep Neural Networks With Reinforcement Learning.” Here, we offer a glimpse at ARLA’s approach and its capabilities.
The Reinforcement Learning Approach
ARLA is the first adversarial attack based on reinforcement learning (RL). In RL, an agent uses its sensors to observe an unknown environment, takes actions that affect that environment, and receives rewards that steer it toward better actions over time.
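As a rough, self-contained illustration of that observe, act, reward cycle, here is a toy Python sketch. It is emphatically not ARLA's actual agent, environment, or reward; `ToyEnvironment`, `run_episode`, and the random policy are hypothetical stand-ins for illustration only.

```python
import random

class ToyEnvironment:
    """A stand-in environment: the agent tries to guess a hidden integer.

    Purely illustrative; ARLA's real environment is built around a target
    DNN and the input being perturbed, as the full article describes.
    """

    def __init__(self, low=0, high=9):
        self.low, self.high = low, high
        self.target = random.randint(low, high)

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.target = random.randint(self.low, self.high)
        return 0  # the agent starts with no information

    def step(self, action):
        """Apply the agent's action; return (observation, reward, done)."""
        reward = -abs(action - self.target)  # closer guesses earn higher reward
        done = action == self.target
        return reward, reward, done  # here the observation is just the reward


def run_episode(env, policy, max_steps=20):
    """One RL episode: observe, choose an action, receive a reward, repeat."""
    observation = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(observation)                  # agent acts on what it observed
        observation, reward, done = env.step(action)  # environment responds
        total_reward += reward                        # rewards guide learning over time
        if done:
            break
    return total_reward


if __name__ == "__main__":
    env = ToyEnvironment()
    random_policy = lambda obs: random.randint(0, 9)  # untrained placeholder policy
    print("Episode return:", run_episode(env, random_policy))
```

In ARLA's setting, one would expect the observations, actions, and rewards to be defined around the target DNN, with rewards encouraging perturbations that lead to misclassification; the full article spells out the actual formulation.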
Read the full article.