Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
SIP |
2022-08-26 14:26 |
Okinawa |
Nobumoto Ohama Memorial Hall (Ishigaki Island) (Primary: On-site, Secondary: Online) |
Generation method of Adversarial Examples using XAI Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) |
(To be available after the conference date)
|
NC, IBISML, IPSJ-BIO, IPSJ-MPS
2022-06-27 15:30 |
Okinawa |
(Primary: On-site, Secondary: Online) |
Evaluating and Enhancing Reliabilities of AI-Powered Tools -- Adversarial Robustness -- Jingfeng Zhang (RIKEN-AIP) NC2022-4 IBISML2022-4
When we deploy models trained by standard training (ST), they work well on natural test data. However, those models cann...
NC2022-4 IBISML2022-4 pp.20-46 |
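The fragility of standard-trained models that this tutorial starts from can be illustrated with a minimal FGSM sketch in PyTorch; the model and data batch are placeholders, and FGSM is one representative attack, not necessarily the one used in the talk.

```python
# Minimal FGSM sketch: craft an adversarial example within an
# L-infinity ball of radius eps around the input (placeholders throughout).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Return an adversarial example within eps of x (L-infinity)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```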
IA, ICSS |
2022-06-24 10:25 |
Nagasaki |
Univ. of Nagasaki (Primary: On-site, Secondary: Online) |
Application of Adversarial Examples to Physical ECG Signals Taiga Ono (Waseda Univ.), Takeshi Sugawara (UEC), Jun Sakuma (Tsukuba Univ./RIKEN), Tatsuya Mori (Waseda Univ./RIKEN/NICT) IA2022-11 ICSS2022-11 |
This work aims to assess the reality and feasibility of applying adversarial examples to attack cardiac diagnosis system...
IA2022-11 ICSS2022-11 pp.61-66 |
CAS, SIP, VLD, MSS |
2022-06-16 14:40 |
Aomori |
Hachinohe Institute of Technology (Primary: On-site, Secondary: Online) |
Adversarial Robustness of Secret Key-Based Defenses against AutoAttack Miki Tanaka, April Pyone MaungMaung (Tokyo Metro Univ.), Isao Echizen (NII), Hitoshi Kiya (Tokyo Metro Univ.) CAS2022-7 VLD2022-7 SIP2022-38 MSS2022-7 |
Deep neural network (DNN) models are well known to easily misclassify input images with smal...
CAS2022-7 VLD2022-7 SIP2022-38 MSS2022-7 pp.34-39 |
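The secret key-based defenses evaluated here build on block-wise image transformation with a key. A minimal sketch of one such transform, keyed block-wise pixel shuffling, is below; the block size and permutation scheme are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of a keyed block-wise pixel shuffle (illustrative only).
import numpy as np

def blockwise_shuffle(img, key, block=4):
    """Permute pixels inside each block x block tile with a keyed permutation.
    img: (H, W, C) array with H and W divisible by `block`."""
    rng = np.random.default_rng(key)       # the secret key seeds the permutation
    perm = rng.permutation(block * block)  # same permutation for every tile
    h, w, c = img.shape
    out = img.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i + block, j:j + block].reshape(-1, c)
            out[i:i + block, j:j + block] = tile[perm].reshape(block, block, c)
    return out
```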
IT, EMM |
2022-05-17 15:05 |
Gifu |
Gifu University (Primary: On-site, Secondary: Online) |
Generating patch-wise adversarial examples for avoidance of face recognition system and verification of its robustness Hiroto Takiwaki, Minoru Kuribayashi, Nobuo Funabiki (Okayama univ.) IT2022-5 EMM2022-5 |
Advances in machine learning technologies such as Convolutional Neural Networks (CNN) have made it possible to identify ...
IT2022-5 EMM2022-5 pp.23-28 |
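A minimal sketch of the patch-wise idea: optimize a small patch pasted onto the input so that the classifier's loss rises. The patch location, size, and optimizer below are illustrative assumptions; real attacks on face recognition systems are more elaborate.

```python
# Sketch: train an adversarial patch in a fixed image corner (placeholders).
import torch
import torch.nn.functional as F

def train_patch(model, x, y, size=32, steps=100, lr=0.05):
    """Optimize a patch pasted onto a batch (x, y) to induce misclassification."""
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x_adv = x.clone()
        x_adv[:, :, :size, :size] = patch.clamp(0, 1)  # paste the patch
        loss = -F.cross_entropy(model(x_adv), y)       # push away from true label
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```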
IT, EMM |
2022-05-17 15:30 |
Gifu |
Gifu University (Primary: On-site, Secondary: Online) |
A study of adversarial example detection using the correlation between adversarial noise and JPEG compression-derived distortion Kenta Tsunomori, Yuma Yamasaki, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Isao Echizen (NII) IT2022-6 EMM2022-6 |
Adversarial examples cause image classifiers to misclassify. Higashi et al. proposed a method to detect adversari...
IT2022-6 EMM2022-6 pp.29-34 |
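A simplified stand-in for this detection idea: re-compress the input as JPEG and flag it when the classifier's output shifts sharply. The paper's actual test is correlation-based; the quality factor and threshold below are arbitrary placeholders.

```python
# Sketch: JPEG round-trip detection of adversarial examples (simplified).
import io
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def jpeg_roundtrip(x, quality=75):
    """Re-encode a (3, H, W) tensor in [0, 1] as JPEG and decode it back."""
    buf = io.BytesIO()
    to_pil_image(x).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return to_tensor(Image.open(buf))

def looks_adversarial(model, x, threshold=0.5):
    """Flag x when JPEG re-compression moves the softmax output sharply."""
    p = F.softmax(model(x.unsqueeze(0)), dim=1)
    p_jpeg = F.softmax(model(jpeg_roundtrip(x).unsqueeze(0)), dim=1)
    return (p - p_jpeg).abs().sum().item() > threshold
```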
PRMU, IPSJ-CVIM |
2022-03-10 17:30 |
Online |
Online |
Adversarial Training: A Survey Hiroki Adachi, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi (Chubu Univ.) PRMU2021-73 |
Adversarial training (AT) is a training method that aims to obtain a robust model for defending against the adversarial attack b...
PRMU2021-73 pp.78-90 |
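The AT formulation this survey covers is commonly instantiated as Madry-style PGD training: minimize over model parameters the expected worst-case loss over perturbations with ‖δ‖∞ ≤ ε. A minimal sketch follows; the hyperparameters and model are placeholders.

```python
# Minimal PGD adversarial training loop (Madry-style min-max sketch).
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner maximization: L-infinity PGD starting from a random point."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: train on the worst-case inputs found by PGD."""
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```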
EMM |
2022-03-07 15:25 |
Online |
(Primary: Online, Secondary: On-site)
[Poster Presentation] A Proposal for Emotion-Expressive Editor: EmoEditor by Font Changing Yuuki Shimamura, Michiharu Niimi (KIT) EMM2021-100
Text media is one of the important means of communication on computers. For example, email, LINE, and Twitter use it frequent...
EMM2021-100 pp.46-51 |
EMM |
2022-03-07 17:00 |
Online |
(Primary: Online, Secondary: On-site)
Extension of robust image classification system with Adversarial Example Detectors Miki Tanaka, Takayuki Osakabe, Hitoshi Kiya (Tokyo Metro. Univ.) EMM2021-105
In image classification with deep learning, there is a risk that an attacker can intentionally manipulate the prediction...
EMM2021-105 pp.76-80 |
NLP, MICT, MBE, NC (Joint)
2022-01-23 11:45 |
Online |
Online |
Adversarial Training with Knowledge Distillation considering Intermediate Feature Representation in CNNs Hikaru Higuchi (The Univ. of Electro-Communications), Satoshi Suzuki (former NTT), Hayaru Shouno (The Univ. of Electro-Communications) NC2021-44 |
Adversarial examples are one of the vulnerability attacks against convolutional neural networks (CNNs). The adversarial exampl...
NC2021-44 pp.59-64 |
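A sketch of the general recipe: the adversarial-training loss is augmented with a term pulling the student's intermediate features toward a teacher's features on clean inputs. The `.features` accessor, layer choice, and weighting below are assumptions, not the paper's exact design.

```python
# Sketch: adversarial training loss with intermediate-feature distillation.
import torch
import torch.nn.functional as F

def at_distill_loss(student, teacher, x_clean, x_adv, y, lam=1.0):
    """Cross-entropy on adversarial inputs plus a feature-matching term.
    Both networks are assumed to expose `.features(x)` for an intermediate layer."""
    logits = student(x_adv)
    with torch.no_grad():
        f_teacher = teacher.features(x_clean)  # clean intermediate representation
    f_student = student.features(x_adv)        # representation under attack
    return F.cross_entropy(logits, y) + lam * F.mse_loss(f_student, f_teacher)
```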
IBISML |
2022-01-18 14:00 |
Online |
Online |
Robustness to Adversarial Examples by Mixtures of L1 Regularization Models Hironobu Takenouchi, Junichi Takeuchi (Kyushu Univ.) IBISML2021-26
We propose a method of adversarial training using L1 regularization for image classification. It is known that L1 regulari...
IBISML2021-26 pp.61-66 |
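A minimal sketch of an L1-regularized model mixture with scikit-learn. The construction below, averaging models fit at several regularization strengths, is an illustrative reading of the title, not the paper's exact method.

```python
# Sketch: average the outputs of several L1-penalized linear models.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_l1_mixture(X, y, cs=(0.01, 0.1, 1.0)):
    """Fit one L1-penalized model per regularization strength C."""
    return [LogisticRegression(penalty="l1", C=c, solver="saga", max_iter=5000).fit(X, y)
            for c in cs]

def predict_mixture(models, X):
    """Average the class-probability outputs of the mixture components."""
    return np.mean([m.predict_proba(X) for m in models], axis=0).argmax(axis=1)
```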
MIKA (3rd) |
2021-10-28 10:30 |
Okinawa |
(Primary: On-site, Secondary: Online) |
[Poster Presentation] Examination of Majority Decision Method for Network Intrusion Detection System Using Deep Learning Koko Nishiura, Yuju Ogawa, Tomotaka Kimura, Jun Cheng (Doshisha Univ.)
In recent years, the importance of NIDS (Network Intrusion Detection Systems), which detect unauthorized access, has be...
|
RCS |
2021-10-22 15:00 |
Online |
Online |
[Poster Presentation] Display-Camera Visible Light Communications Using Monocular Depth Estimation and Adversarial Example Hiraku Okada, ChangSeok Lee (Nagoya Univ.), Tadahiro Wada (Shizuoka Univ.), Kentaro Kobayashi (Meijo Univ.), Chedlia Ben Naila, Masaaki Katayama (Nagoya Univ.) RCS2021-140
In display-camera visible light communications, a display shows visual information on which data information is superimp...
RCS2021-140 pp.120-121 |
PRMU |
2021-10-09 09:30 |
Online |
Online |
Explaining Adversarial Examples by the Embedding Structure of Data Manifold Hajime Tasaki, Yuji Kaneko, Jinhui Chao (Chuo Univ.) PRMU2021-19 |
It is widely known that adversarial examples cause misclassification in classifiers using deep learning. In spite of nume...
PRMU2021-19 pp.17-21 |
SIS, ITE-BCT |
2021-10-07 14:25 |
Online |
Online |
Block-wise Transformation with Secret Key for Adversary Robust Defence of SVM model Ryota Iijima, MaungMaung AprilPyone, Hitoshi Kiya (TMU) SIS2021-13 |
In this paper, we propose a method for implementing support vector machine (SVM) models that are robust against adversar...
SIS2021-13 pp.17-22 |
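A sketch of the overall pipeline: train and query the SVM only on inputs transformed with a secret key, so keyless queries fall outside the learned distribution. The keyed permutation below is a simplified stand-in for the paper's block-wise transformation.

```python
# Sketch: SVM trained on key-permuted inputs (simplified key-based defense).
import numpy as np
from sklearn.svm import SVC

def keyed_permute(X, key):
    """Apply one key-derived permutation to every flattened input vector."""
    perm = np.random.default_rng(key).permutation(X.shape[1])
    return X[:, perm]

def fit_keyed_svm(X_train, y_train, key=42):
    model = SVC().fit(keyed_permute(X_train, key), y_train)
    return model  # at test time, apply keyed_permute with the same key first
```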
CS |
2021-07-16 10:25 |
Online |
Online |
Countermeasures against Adversarial Examples using Majority Decision Discriminators for Deep learning-Based Phishing Detection Methods Yuji Ogawa, Tomotaka Kimura, Jun Cheng (Doshisha Univ.) CS2021-33 |
In recent years, the number of phishing attacks has been increasing, and the detection of phishing URLs using deep learn...
CS2021-33 pp.78-79 |
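A minimal sketch of the majority-decision idea, so that fooling a single discriminator does not flip the final verdict; the models are placeholder scikit-learn-style classifiers.

```python
# Sketch: majority vote over independently trained discriminators.
def majority_decision(models, x):
    """Return 1 (phishing) iff more than half of the discriminators vote 1."""
    votes = sum(int(m.predict([x])[0]) for m in models)
    return int(votes > len(models) / 2)
```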
SP, IPSJ-SLP, IPSJ-MUS |
2021-06-18 15:00 |
Online |
Online |
Protection method with audio processing against Audio Adversarial Example Taisei Yamamoto, Yuya Tarutani, Yukinobu Fukusima, Tokumi Yokohira (Okayama Univ) SP2021-4 |
Machine learning technology has improved the accuracy of voice recognition, and demand for voice recognition...
SP2021-4 pp.19-24 |
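One common audio preprocessing defense is to squeeze the perturbation with a down/up-sampling round trip before recognition. A sketch with torchaudio follows; the sampling rates, and whether this matches the paper's processing, are assumptions.

```python
# Sketch: destroy high-frequency adversarial content by resampling.
import torchaudio.functional as AF

def squeeze_audio(waveform, sr=16000, low_sr=8000):
    """Resample down and back up, discarding high-frequency content."""
    down = AF.resample(waveform, orig_freq=sr, new_freq=low_sr)
    return AF.resample(down, orig_freq=low_sr, new_freq=sr)
```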
EMM, IT |
2021-05-21 13:10 |
Online |
Online |
A Study of Detecting Adversarial Examples Using Sensitivities to Multiple Auto Encoders Yuma Yamasaki, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Huy Hong Nguyen, Isao Echizen (NII) IT2021-11 EMM2021-11 |
By removing the small perturbations contained in adversarial examples, the image classification result returns to the cor...
IT2021-11 EMM2021-11 pp.60-65 |
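A simplified version of the sensitivity test: reconstruct the input with several autoencoders and flag it when the classifier's output shifts unusually strongly. The autoencoders and the threshold are placeholders.

```python
# Sketch: detection via output sensitivity to autoencoder reconstructions.
import torch.nn.functional as F

def detect_by_ae_sensitivity(model, autoencoders, x, threshold=1.0):
    """Flag x (a batched input) when reconstructions move the softmax output."""
    p = F.softmax(model(x), dim=1)
    shift = sum((p - F.softmax(model(ae(x)), dim=1)).abs().sum()
                for ae in autoencoders)
    return shift.item() > threshold
```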
EMM |
2021-03-04 14:15 |
Online |
Online |
[Poster Presentation] Detection of Adversarial Examples in CNN Image Classifiers Using Features Extracted with Multiple Strengths of Filter Akinori Higashi, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Huy Hong Nguyen, Isao Echizen (NII) EMM2020-70
Deep learning has been used as a new method for machine learning, and its performance has been significantly improved. A...
EMM2020-70 pp.19-24 |
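A sketch of the feature-extraction step: run the classifier on copies of the input filtered at several strengths and use the output shifts as detector features. Gaussian blur stands in for the paper's filters, which is an assumption.

```python
# Sketch: detector features from responses to filters of multiple strengths.
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def filter_response_features(model, x, sigmas=(0.5, 1.0, 2.0)):
    """Concatenate output shifts under Gaussian blurs of increasing strength."""
    p = F.softmax(model(x), dim=1)
    shifts = [p - F.softmax(model(GaussianBlur(5, s)(x)), dim=1) for s in sigmas]
    return torch.cat(shifts, dim=1)  # fed to a small detector classifier
```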
ICSS, IPSJ-SPT |
2021-03-02 13:40 |
Online |
Online |
Research on the vulnerability of homoglyph attacks to online machine translation system Takeshi Sakamoto, Tatsuya Mori (Waseda Univ.) ICSS2020-50
It is widely known that systems empowered by neural network algorithms are vulnerable to an intrinsic attack ...
ICSS2020-50 pp.144-149 |
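A homoglyph attack is easy to reproduce: visually confusable Unicode characters replace ASCII ones, leaving the text unchanged to a human reader while changing what the model sees. A tiny illustrative sketch, with a deliberately small mapping:

```python
# Sketch: substitute Latin letters with Cyrillic look-alikes.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def homoglyph_perturb(text):
    """Swap each mapped character for its look-alike; the string looks the
    same to a human but tokenizes differently for an NLP model."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

print(homoglyph_perturb("apple"))  # renders like 'apple' but differs in code points
```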