Presentation Abstract / Keywords
Presentation Title
2020-11-25 11:10
GAN based feature-level supportive method for improved adversarial attacks on face recognition ○Zhengwei Yin(USTC/Hosei Univ.)・Kaoru Uchida(Hosei Univ.) BioX2020-35
Abstract
With the rapid development of deep neural networks (DNNs), DNN-based face recognition technologies have achieved great success and are widely used in applications that require high accuracy and robustness. However, deep neural networks are known to be vulnerable to adversarial attacks, performed using images to which well-designed perturbations have been added. To enhance the security of DNN-based face recognition, we need to explore the mechanisms of the related technologies more deeply. In this paper, we propose a feature-level supportive method, BiasGAN, to improve the performance of universal adversarial attack methods. We insert this image-to-image translation preprocessor before conducting adversarial example generation. BiasGAN searches the potential face space and can generate images with biased face features, making the generated face images easier to perturb efficiently. Experimental results show that this approach significantly improves both the fooling ratio and the average perturbation size at different perturbation levels.
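The pipeline the abstract describes, translating the face image first and only then applying a universal perturbation, can be sketched as below. This is a minimal illustration, not the authors' implementation: `biasgan_translate` stands in for the trained BiasGAN generator (here it simply returns its input), and the "universal perturbation" is random noise standing in for a precomputed one; only the ordering of the two steps and the perturbation budget are taken from the abstract.

```python
import numpy as np

def biasgan_translate(image):
    # Hypothetical placeholder for the BiasGAN image-to-image translator:
    # a trained generator would map the face toward biased face features
    # that are easier to perturb. Here it is the identity function.
    return image

def apply_universal_perturbation(image, perturbation, eps=0.04):
    # Add a (precomputed) universal perturbation, clipped to the budget
    # eps, and keep pixel values in the valid [0, 1] range.
    delta = np.clip(perturbation, -eps, eps)
    return np.clip(image + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
face = rng.random((64, 64, 3))                 # stand-in face image in [0, 1]
uap = rng.normal(scale=0.1, size=face.shape)   # stand-in universal perturbation

# Pipeline order from the abstract: translate first, then perturb.
adv = apply_universal_perturbation(biasgan_translate(face), uap, eps=0.04)
print(float(np.abs(adv - face).max()))  # max pixel change stays within eps
```

Because the translator runs before perturbation, the perturbation budget applies to the translated image; in the paper this is what lets the same universal perturbation fool the recognizer more often at a given (or smaller) perturbation size.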
Keywords
Deep neural network / Generative adversarial network / Face recognition / Adversarial attack
Bibliographic Information
IEICE Tech. Rep., vol. 120, no. 247, BioX2020-35, pp. 1-6, Nov. 2020.
Report Number
BioX2020-35 |
Issue Date
2020-11-18 (BioX) |
ISSN
Online edition: ISSN 2432-6380 |
Copyright
Copyright of the papers published in the Technical Reports belongs to IEICE. (License numbers: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)
PDF Download
BioX2020-35 |
|