Presentation Abstract / Keywords
Presentation title
2021-03-05 16:25
Towards Adversarial Robustness of Learning in the Frequency Domain ○Subhajit Chaudhury・Toshihiko Yamasaki (UTokyo) PRMU2020-100
Abstract
(Ja)
(Not yet registered)
(En)
Research on adversarial attacks studies the effect of noise on the robustness of Convolutional Neural Networks (CNNs). Such work has shown that CNNs can be fooled simply by adding small, imperceptible noise in the RGB color space that cannot be detected by humans. In this paper, we study the effect of adversarial attacks in the frequency domain and show that frequency-domain transformations weaken such attacks. We argue that training CNNs in the frequency domain disentangles the frequencies corresponding to semantic and adversarial features. Owing to this property, CNNs trained in the frequency domain can selectively de-emphasize the adversarial features, yielding robust performance in the presence of adversarial noise. We performed experiments on multiple datasets and show that CNNs trained on Discrete Cosine Transform (DCT) inputs are significantly more robust to many varieties of adversarial noise than standard CNNs trained on RGB/grayscale inputs. Based on this result, we encourage the research community to explore frequency-domain learning as a potential novel area for improving neural network robustness to test-time noise.
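The abstract's central idea is to feed CNNs DCT-transformed inputs rather than raw RGB/grayscale pixels. A minimal sketch of such a preprocessing step is shown below using SciPy's 1-D DCT applied along both image axes; the paper does not specify its exact block size or normalization, so the full-image orthonormal type-II DCT here is an illustrative assumption, not the authors' exact pipeline.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(image):
    """2-D type-II DCT (orthonormal), computed by applying the
    1-D DCT along rows and then along columns."""
    return dct(dct(image, type=2, norm='ortho', axis=0),
               type=2, norm='ortho', axis=1)

# Example: map a grayscale image into the frequency domain
# before it is passed to the CNN as input.
img = np.random.rand(32, 32).astype(np.float32)
freq = dct2(img)
print(freq.shape)  # (32, 32)
```

Because the transform is orthonormal, it is invertible and preserves image energy; the network simply sees the same information in a basis where, per the abstract's argument, semantic and adversarial frequency components are more easily separated.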
Keywords
(Ja)
(Not yet registered)
(En)
Adversarial Attacks / Discrete Cosine Transforms / Defense against Adversarial Attacks
Bibliographic information
IEICE Technical Report, vol. 120, no. 409, PRMU2020-100, pp. 176-180, March 2021.
Report number
PRMU2020-100 |
Issue date
2021-02-25 (PRMU) |
ISSN |
Online edition: ISSN 2432-6380 |
Copyright
The copyright of papers published in the IEICE Technical Report belongs to IEICE. (Permission numbers: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)