On the Evasion Attack Detector
Li Huayui, Vasily Kostyumov, Oleg Pilipenko, Dmitry Namiot
The paper deals with the problem of detecting adversarial attacks on machine
learning models. Such attacks are understood as deliberate (targeted) changes
to the data at one of the stages of the machine learning pipeline, designed
either to disrupt the operation of the machine learning system or, conversely,
to achieve a result desired by the attacker. Adversarial attacks pose a serious
threat to machine learning systems because, in their presence, the results and
quality of the system can no longer be guaranteed. Such guarantees are
mandatory, for example, for the use of machine learning (artificial
intelligence) systems in critical areas such as avionics, autonomous driving,
and special applications. The article considers one possible detector for
so-called evasion attacks.
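
To make the threat model concrete, the sketch below shows a classic evasion attack, the Fast Gradient Sign Method (FGSM), implemented in PyTorch. It is an illustrative assumption only: the model, the perturbation budget eps, and the pixel range are hypothetical, and this is not the attack or detector studied in the paper.

```python
# Illustrative sketch of an evasion attack (FGSM); not the paper's own setup.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.03) -> torch.Tensor:
    """Perturb inputs x so the model is pushed toward misclassifying them,
    with the change bounded by eps in the L-infinity norm (assumed budget)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss the most.
    x_adv = x_adv + eps * x_adv.grad.sign()
    # Keep the perturbed inputs in the assumed valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

In an evasion setting the attacker applies such a perturbation at inference time, after the model has been trained; this is precisely the stage that an evasion attack detector of the kind discussed in the article has to monitor.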