Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
ICSS, IPSJ-SPT |
2024-03-22 14:55 |
Okinawa |
OIST (Primary: On-site, Secondary: Online) |
Adversarial Examples with Missing Perturbation Using Laser Irradiation Daisuke Kosuge, Hayato Watanabe, Taiga Manabe, Yoshihisa Takayama, Toshihiro Ohigashi (Tokai Univ.) ICSS2023-97 |
In recent years, neural networks have made remarkable progress in the field of image processing and other areas, and the... |
ICSS2023-97 pp.201-207 |
CAS, CS |
2024-03-15 14:20 |
Okinawa |
|
Recoloring aware Countermeasure against Adversarial Examples Chisei Ishida, Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) CAS2023-134 CS2023-127 |
Adversarial Examples (AEs), which cause artificial intelligence (AI) to make false predictions through embedded slight perturb... |
CAS2023-134 CS2023-127 pp.128-133 |
RCC, ISEC, IT, WBS |
2024-03-13 - 2024-03-14 |
Osaka |
Osaka Univ. (Suita Campus) |
Performance Evaluation of Visible Light Communication System Using Imaginary Images based Image Classifier Masataka Naito, Tadahiro Wada, Kaiji Mukumoto (Shizuoka Univ.), Hiraku Okada (Nagoya Univ.) IT2023-78 ISEC2023-77 WBS2023-66 RCC2023-60 |
For visible light communication systems that utilize machine learning-based image classifiers for information embedding... |
IT2023-78 ISEC2023-77 WBS2023-66 RCC2023-60 pp.20-25 |
NC, MBE (Joint) |
2024-03-12 13:30 |
Tokyo |
The Univ. of Tokyo (Primary: On-site, Secondary: Online) |
Diffusion-Based Immediate Adversarial Purification Yuito Narisawa, Motonobu Hattori (Yamanashi Univ.) NC2023-56 |
Neural networks have achieved high performance in image classification, but there is a problem known as Adversarial Exam... |
NC2023-56 pp.75-80 |
PRMU, IBISML, IPSJ-CVIM |
2024-03-04 09:12 |
Hiroshima |
Hiroshima Univ. Higashi-Hiroshima campus (Primary: On-site, Secondary: Online) |
Creating Adversarial Examples to Deceive Both Humans and Machine Learning Models Ko Fujimori (Waseda Univ.), Toshiki Shibahara (NTT), Daiki Chiba (NTT Security), Mitsuaki Akiyama (NTT), Masato Uchida (Waseda Univ.) PRMU2023-65 |
One of the vulnerability attacks against neural networks is the generation of Adversarial Examples (AE), which induce mi... |
PRMU2023-65 pp.82-87 |
PRMU, IBISML, IPSJ-CVIM |
2024-03-04 09:36 |
Hiroshima |
Hiroshima Univ. Higashi-Hiroshima campus (Primary: On-site, Secondary: Online) |
Disabling Adversarial Examples through Color Information Processing Ryo Soeda, Masato Uchida (Waseda Univ.) PRMU2023-67 |
Image classification using neural networks is expected to have a wide range of applications, including automated driving... |
PRMU2023-67 pp.94-99 |
SeMI, IPSJ-UBI, IPSJ-MBL |
2024-02-29 15:10 |
Fukuoka |
|
Evaluation Experiment of Display Camera Visible Light Communication Using Adversarial Examples on a Monocular Depth Estimation Model Changseok Lee, Hiraku Okada (Nagoya Univ.), Tadahiro Wada (Shizuoka Univ.), Chedlia Ben Naila, Masaaki Katayama (Nagoya Univ.) SeMI2023-75 |
Hidden display-camera visible light communication is a method of embedding data in visual information such as images and... |
SeMI2023-75 pp.25-30 |
VLD, HWS, ICD |
2024-03-02 09:20 |
Okinawa |
(Primary: On-site, Secondary: Online) |
Countermeasure on AI Hardware against Adversarial Examples Kosuke Hamaguchi, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) VLD2023-134 HWS2023-94 ICD2023-123 |
The demand for edge AI, in which artificial intelligence (AI) is directly embedded in devices, is increasing, and the se... |
VLD2023-134 HWS2023-94 ICD2023-123 pp.184-189 |
ITS, IE, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2024-02-19 10:45 |
Hokkaido |
Hokkaido Univ. |
Brightness Adjustment based Countermeasure against Adversarial Examples Takumi Tojo, Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) ITS2023-47 IE2023-36 |
Recently, image classification using deep learning AI has been used for in-vehicle AI, and its accuracy and response spe... |
ITS2023-47 IE2023-36 pp.7-12 |
ITS, IE, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2024-02-19 11:00 |
Hokkaido |
Hokkaido Univ. |
Improving Adversarial Robustness in Continual Learning Koki Mukai, Soichiro Kumano (UTokyo), Nicolas Michel (UGE/CNRS/LIGM), Ling Xiao, Toshihiko Yamasaki (UTokyo) ITS2023-48 IE2023-37 |
The goal of continual learning is to prevent catastrophic forgetting. However, few studies have simultaneously considere... |
ITS2023-48 IE2023-37 pp.13-18 |
SIP, IT, RCS |
2024-01-19 13:30 |
Miyagi |
(Primary: On-site, Secondary: Online) |
[Invited Talk] Problem of Adversarial Attacks on CNN-based Image Classifiers and Countermeasures Minoru Kuribayashi (Tohoku Univ.) IT2023-67 SIP2023-100 RCS2023-242 |
It is well-known that discriminative models based on deep learning techniques may cause misclassification if adversarial... |
IT2023-67 SIP2023-100 RCS2023-242 p.204 |
EMM |
2024-01-17 10:55 |
Miyagi |
Tohoku Univ. (Primary: On-site, Secondary: Online) |
Detecting Adversarial Examples using Filtering Operation Based on JPEG-Compression-Derived Distortion Kenta Tsunomori (Okayama Univ.), Minoru Kuribayashi (Tohoku Univ.), Nobuo Funabiki (Okayama Univ.) EMM2023-87 |
Image classifiers based on convolutional neural networks can be misled into misclassification by adversarial perturbations. In t... |
EMM2023-87 pp.38-43 |
MIKA (3rd) |
2023-10-11 14:30 |
Okinawa |
Okinawa Jichikaikan (Primary: On-site, Secondary: Online) |
[Poster Presentation] Detecting Poisoning Attacks Using Adversarial Examples in Deep Phishing Detection Koko Nishiura, Tomotaka Kimura, Jun Cheng (Doshisha Univ.) |
In recent years, the convenience of online services has greatly improved, but the number of phishing scams has skyrocket... |
|
AI |
2023-09-12 15:35 |
Hokkaido |
|
Variational Autoencoder Oriented Protection for Intellectual Property Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) AI2023-31 |
In recent years, generative AI, which generates images based on instructions in natural language, has developed rapidly ... |
AI2023-31 pp.180-186 |
MICT, WBS, RCC, SAT (Joint) [detail] |
2023-05-26 10:50 |
Tokyo |
TOKYO BIG SIGHT (Primary: On-site, Secondary: Online) |
Simulation Evaluation of Hidden Screen-Camera Visible Light Communications Using Adversarial Examples on Depth Estimation Model Changseok Lee, Hiraku Okada (Nagoya Univ.), Tadahiro Wada (Shizuoka Univ.), Chedlia Ben Naila, Masaaki Katayama (Nagoya Univ.) WBS2023-6 RCC2023-6 |
Screen-camera communication has been proven to be cost-efficient and intuitive for common users. Furthermore, hidd... |
WBS2023-6 RCC2023-6 pp.29-34 |
RCC, ISEC, IT, WBS |
2023-03-14 09:50 |
Yamaguchi |
(Primary: On-site, Secondary: Online) |
A Proposal of Visible Light Communication System using Image Classifier based on Imaginary Images Masataka Naito, Tadahiro Wada, Kaiji Mukumoto (Shizuoka Univ.), Hiraku Okada (Nagoya Univ.) IT2022-70 ISEC2022-49 WBS2022-67 RCC2022-67 |
We have proposed a new method of information embedding in visible light communication by using an image classifier based... |
IT2022-70 ISEC2022-49 WBS2022-67 RCC2022-67 pp.13-18 |
PRMU, IBISML, IPSJ-CVIM [detail] |
2023-03-02 11:40 |
Hokkaido |
Future University Hakodate (Primary: On-site, Secondary: Online) |
Novel Adversarial Attacks Based on Embedding Geometry of Data Manifolds Masahiro Morita, Hajime Tasaki, Jinhui Chao (Chuo Univ.) PRMU2022-84 IBISML2022-91 |
It has been shown recently that adversarial examples inducing misclassification by deep neural networks exist in the ort... |
PRMU2022-84 IBISML2022-91 pp.140-145 |
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2023-02-22 09:45 |
Hokkaido |
Hokkaido Univ. |
Probabilistic Approach towards Theoretical Understanding for Adversarial Training Soichiro Kumano (UTokyo), Hiroshi Kera (Chiba Univ.), Toshihiko Yamasaki (UTokyo) ITS2022-59 IE2022-76 |
In this paper, we provide the first theoretical analysis of the training dynamics of adversarial training of deep neural... |
ITS2022-59 IE2022-76 pp.95-100 |
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2023-02-22 10:15 |
Hokkaido |
Hokkaido Univ. |
Generation Method of Targeted Adversarial Examples using Gradient Information for the Target Class of the Image Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) ITS2022-61 IE2022-78 |
With the advancement of AI technology, vulnerabilities of AI systems have been pointed out. Adversarial examples (AE), whi... |
ITS2022-61 IE2022-78 pp.107-111 |
EMM |
2023-01-26 09:55 |
Miyagi |
Tohoku Univ. (Primary: On-site, Secondary: Online) |
On the Transferability of Adversarial Examples between Isotropic Network and CNN models Miki Tanaka (Tokyo Metropolitan Univ.), Isao Echizen (NII), Hitoshi Kiya (Tokyo Metropolitan Univ.) EMM2022-62 |
Deep neural networks are well known to be vulnerable to adversarial examples (AEs). In addition, AEs generated for a sou... |
EMM2022-62 pp.7-12 |