Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
by
Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, and Lior Rokach
2021
Abstract
In recent years, machine learning algorithms, and more specifically deep
learning algorithms, have been widely used in many fields, including cyber
security. However, machine learning systems are vulnerable to adversarial
attacks, and this limits their application, especially in
non-stationary, adversarial environments, such as the cyber security domain,
where actual adversaries (e.g., malware developers) exist. This paper
comprehensively summarizes the latest research on adversarial attacks against
security solutions based on machine learning techniques and illuminates the
risks they pose. First, the adversarial attack methods are characterized based
on their stage of occurrence, and the attacker's goals and capabilities. Then,
we categorize the applications of adversarial attack and defense methods in the
cyber security domain. Finally, we highlight some characteristics identified in
recent research and discuss the impact of recent advancements in other
adversarial learning domains on future research directions in the cyber
security domain. This paper is the first to discuss the unique challenges of
implementing end-to-end adversarial attacks in the cyber security domain, map
them in a unified taxonomy, and use the taxonomy to highlight future research
directions.
arXiv: 2007.02407v2