Adversarial joint attacks on legged robots
by Takuto Otomo, Hiroshi Kera, Kazuhiko Kawamoto (2022)
Abstract
We address adversarial attacks on the actuators at the joints of legged
robots trained by deep reinforcement learning. The vulnerability to the joint
attacks can significantly impact the safety and robustness of legged robots. In
this study, we demonstrate that the adversarial perturbations to the torque
control signals of the actuators can significantly reduce the rewards and cause
walking instability in robots. To find the adversarial torque perturbations, we
develop black-box adversarial attacks, where the adversary cannot access the
neural networks trained by deep reinforcement learning. The black-box attack
can be applied to legged robots regardless of the architecture and algorithms
of deep reinforcement learning. We employ three search methods for the
black-box adversarial attacks: random search, differential evolution, and
numerical gradient descent. In experiments with the quadruped robot
Ant-v2 and the bipedal robot Humanoid-v2 in OpenAI Gym environments, we find
that differential evolution can efficiently find the strongest torque
perturbations among the three methods. In addition, we observe that the
quadruped robot Ant-v2 is vulnerable to the adversarial perturbations, whereas
the bipedal robot Humanoid-v2 is robust to the perturbations. Consequently, the
joint attacks can be used for proactive diagnosis of robot walking instability.
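
As a rough illustration of the search procedure described above, the sketch below uses SciPy's differential evolution to look for an additive torque perturbation that minimizes the walking reward of a trained policy. It is a minimal sketch, not the authors' exact setup: it assumes a fixed per-joint perturbation applied at every step, the classic Gym step API, and a placeholder policy `policy_fn`; the bound `EPS` and the rollout length are illustrative values.

    # Minimal sketch of a black-box joint attack via differential evolution.
    # Assumptions (not from the paper): a fixed additive perturbation `delta`
    # added to the torque command at every step, classic Gym API (gym<=0.21
    # with mujoco-py for Ant-v2), and a stub policy standing in for the
    # trained victim network, which the attacker treats as a black box.
    import numpy as np
    import gym
    from scipy.optimize import differential_evolution

    ENV_ID = "Ant-v2"      # quadruped robot used in the paper's experiments
    EPS = 0.5              # hypothetical per-joint torque perturbation bound
    EPISODE_STEPS = 200    # hypothetical truncated rollout length

    env = gym.make(ENV_ID)
    act_dim = env.action_space.shape[0]

    def policy_fn(obs):
        """Placeholder for the trained walking policy (black box here)."""
        return np.zeros(act_dim)  # replace with the victim policy's action

    def episode_return(delta):
        """Roll out one episode with torques shifted by `delta`."""
        env.seed(0)                # fix the seed so the objective is stable
        obs = env.reset()
        total = 0.0
        for _ in range(EPISODE_STEPS):
            action = policy_fn(obs) + delta   # additive joint attack
            action = np.clip(action, env.action_space.low,
                             env.action_space.high)
            obs, reward, done, _ = env.step(action)
            total += reward
            if done:
                break
        return total               # DE minimizes this, i.e. worst-case reward

    # Search the bounded perturbation space for the strongest attack.
    bounds = [(-EPS, EPS)] * act_dim
    result = differential_evolution(episode_return, bounds, maxiter=20, seed=0)
    print("worst-case return:", result.fun)
    print("adversarial torque perturbation:", result.x)

The same rollout objective can be handed to a random-search or numerical-gradient loop for comparison; only the outer optimizer changes, which is what lets the attack stay agnostic to the policy's architecture and training algorithm.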
Archived Files and Locations
application/pdf, 923.5 kB (2205.10098v1)
arxiv.org (repository) | web.archive.org (webarchive)