IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
 Results 1 - 20 of 49
Committee Date Time Place Paper Title / Authors Abstract Paper #
ICSS, IPSJ-SPT 2024-03-22
14:55
Okinawa OIST
(Primary: On-site, Secondary: Online)
Adversarial Examples with Missing Perturbation Using Laser Irradiation
Daisuke Kosuge, Hayato Watanabe, Taiga Manabe, Yoshihisa Takayama, Toshihiro Ohigashi (Tokai Univ.) ICSS2023-97
In recent years, neural networks have made remarkable progress in the field of image processing and other areas, and the... [more] ICSS2023-97
pp.201-207
RCC, ISEC, IT, WBS 2024-03-14
10:20
Osaka Osaka Univ. (Suita Campus)
Improving classification accuracy of imaged malware through data expansion
Kaoru Yokobori, Hiroki Tanioka, Masahiko Sano, Kenji Matsuura, Tetsushi Ueta (Tokushima Univ.) IT2023-115 ISEC2023-114 WBS2023-103 RCC2023-97
Although malware-based attacks have existed for years, malware infections increased in 2019 and 2020. One of the reaso... [more]
IT2023-115 ISEC2023-114 WBS2023-103 RCC2023-97
pp.259-264
PRMU, IBISML, IPSJ-CVIM 2024-03-04
09:12
Hiroshima Hiroshima Univ. Higashi-Hiroshima campus
(Primary: On-site, Secondary: Online)
Creating Adversarial Examples to Deceive Both Humans and Machine Learning Models
Ko Fujimori (Waseda Univ.), Toshiki Shibahara (NTT), Daiki Chiba (NTT Security), Mitsuaki Akiyama (NTT), Masato Uchida (Waseda Univ.) PRMU2023-65
One of the vulnerability attacks against neural networks is the generation of Adversarial Examples (AE), which induce mi... [more] PRMU2023-65
pp.82-87
PRMU, IBISML, IPSJ-CVIM 2024-03-04
09:36
Hiroshima Hiroshima Univ. Higashi-Hiroshima campus
(Primary: On-site, Secondary: Online)
Disabling Adversarial Examples through Color Information Processing
Ryo Soeda, Masato Uchida (Waseda Univ.) PRMU2023-67
Image classification using neural networks is expected to have a wide range of applications, including automated driving... [more] PRMU2023-67
pp.94-99
SIP, SP, EA, IPSJ-SLP [detail] 2024-03-01
09:30
Okinawa
(Primary: On-site, Secondary: Online)
Black-Box Adversarial Attack for Math Formula Recognition Model
Haruto Namura, Masatomo Yoshida (Doshisha Univ.), Nicola Adami (UNIBS), Masahiro Okuda (Doshisha Univ.) EA2023-110 SIP2023-157 SP2023-92
Remarkable advances in deep learning have greatly improved the accuracy of image analysis. The progress of deep learning... [more] EA2023-110 SIP2023-157 SP2023-92
pp.289-293
VLD, HWS, ICD 2024-03-02
09:20
Okinawa
(Primary: On-site, Secondary: Online)
Countermeasure on AI Hardware against Adversarial Examples
Kosuke Hamaguchi, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) VLD2023-134 HWS2023-94 ICD2023-123
The demand for edge AI, in which artificial intelligence (AI) is directly embedded in devices, is increasing, and the se... [more] VLD2023-134 HWS2023-94 ICD2023-123
pp.184-189
ITS, IE, ITE-MMS, ITE-ME, ITE-AIT [detail] 2024-02-19
10:45
Hokkaido Hokkaido Univ.
Brightness Adjustment based Countermeasure against Adversarial Examples
Takumi Tojo, Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) ITS2023-47 IE2023-36
Recently, image classification using deep learning AI has been used for in-vehicle AI, and its accuracy and response spe... [more] ITS2023-47 IE2023-36
pp.7-12
ITS, IE, ITE-MMS, ITE-ME, ITE-AIT [detail] 2024-02-19
11:00
Hokkaido Hokkaido Univ.
Improving Adversarial Robustness in Continual Learning
Koki Mukai, Soichiro Kumano (UTokyo), Nicolas Michel (UGE/CNRS/LIGM), Ling Xiao, Toshihiko Yamasaki (UTokyo) ITS2023-48 IE2023-37
The goal of continual learning is to prevent catastrophic forgetting. However, few studies have simultaneously considere... [more] ITS2023-48 IE2023-37
pp.13-18
SIP, IT, RCS 2024-01-19
13:30
Miyagi
(Primary: On-site, Secondary: Online)
[Invited Talk] Problem of Adversarial Attacks on CNN-based Image Classifiers and Countermeasures
Minoru Kuribayashi (Tohoku Univ.) IT2023-67 SIP2023-100 RCS2023-242
It is well-known that discriminative models based on deep learning techniques may cause misclassification if adversarial... [more] IT2023-67 SIP2023-100 RCS2023-242
p.204
MIKA (3rd) 2023-10-11
14:30
Okinawa Okinawa Jichikaikan
(Primary: On-site, Secondary: Online)
[Poster Presentation] Detecting Poisoning Attacks Using Adversarial Examples in Deep Phishing Detection
Koko Nishiura, Tomotaka Kimura, Jun Cheng (Doshisha Univ.)
In recent years, the convenience of online services has greatly improved, but the number of phishing scams has skyrocket... [more]
NS, IN, CS, NV (Joint) 2023-09-08
09:00
Miyagi Tohoku University
(Primary: On-site, Secondary: Online)
Demonstrating Data Poisoning Attacks on Machine Learning Models with Multi-Sensor Inputs
Shyam Maisuria, Yuichi Ohsita, Masayuki Murata (Osaka Univ.) IN2023-31
Data poisoning attacks pose a significant threat to the integrity and reliability of machine learning models. These atta... [more] IN2023-31
pp.8-13
ICSS, IPSJ-SPT 2023-03-13
14:20
Okinawa Okinawaken Seinenkaikan
(Primary: On-site, Secondary: Online)
Dynamic Analysis of Adversarial Attacks
Kentaro Goto (JPNIC), Masato Uchida (Waseda Univ.) ICSS2022-52
In this study, we propose a method for identifying the characteristics of attack methods by operating them as “samples” ... [more] ICSS2022-52
pp.25-30
PRMU, IBISML, IPSJ-CVIM [detail] 2023-03-02
11:40
Hokkaido Future University Hakodate
(Primary: On-site, Secondary: Online)
Novel Adversarial Attacks Based on Embedding Geometry of Data Manifolds
Masahiro Morita, Hajime Tasaki, Jinhui Chao (Chuo Univ.) PRMU2022-84 IBISML2022-91
It has been shown recently that adversarial examples inducing misclassification by deep neural networks exist in the ort... [more] PRMU2022-84 IBISML2022-91
pp.140-145
SIS 2023-03-02
13:30
Chiba Chiba Institute of Technology
(Primary: On-site, Secondary: Online)
An image watermarking method using adversarial perturbations
Sei Takano, Mitsuji Muneyasu, Soh Yoshida (Kansai Univ.) SIS2022-43
The performance of convolutional neural networks (CNNs) has been dramatically improved in recent years, and they have at... [more] SIS2022-43
pp.15-20
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT [detail] 2023-02-22
10:15
Hokkaido Hokkaido Univ.
Generation Method of Targeted Adversarial Examples using Gradient Information for the Target Class of the Image
Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) ITS2022-61 IE2022-78
With the advancement of AI technology, the vulnerability of AI system is pointed out. The adversarial examples (AE), whi... [more] ITS2022-61 IE2022-78
pp.107-111
HWS, ICD 2022-10-25
13:50
Shiga
(Primary: On-site, Secondary: Online)
Fundamental Study of Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface
Tatsuya Oyama, Kota Yoshida, Shunsuke Okura, Takeshi Fujino (Ritsumeikan Univ.) HWS2022-36 ICD2022-28
Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been prop... [more] HWS2022-36 ICD2022-28
pp.35-40
SIP 2022-08-26
14:26
Okinawa Nobumoto Ohama Memorial Hall (Ishigaki Island)
(Primary: On-site, Secondary: Online)
Generation method of Adversarial Examples using XAI
Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) SIP2022-72
With the advancement of AI technology, AI can be applied to various fields. Therefore the accountability for the decisio... [more] SIP2022-72
pp.115-120
NC, IBISML, IPSJ-BIO, IPSJ-MPS [detail] 2022-06-27
15:30
Okinawa
(Primary: On-site, Secondary: Online)
Evaluating and Enhancing Reliabilities of AI-Powered Tools -- Adversarial Robustness --
Jingfeng Zhang (RIKEN-AIP) NC2022-4 IBISML2022-4
When we deploy models trained by standard training (ST), they work well on natural test data. However, those models cann... [more] NC2022-4 IBISML2022-4
pp.20-46
IA, ICSS 2022-06-24
10:25
Nagasaki Univ. of Nagasaki
(Primary: On-site, Secondary: Online)
Application of Adversarial Examples to Physical ECG Signals
Taiga Ono (Waseda Univ.), Takeshi Sugawara (UEC), Jun Sakuma (Tsukuba Univ./RIKEN), Tatsuya Mori (Waseda Univ./RIKEN/NICT) IA2022-11 ICSS2022-11
This work aims to assess the reality and feasibility of applying adversarial examples to attack cardiac diagnosis system... [more] IA2022-11 ICSS2022-11
pp.61-66
CAS, SIP, VLD, MSS 2022-06-16
14:40
Aomori Hachinohe Institute of Technology
(Primary: On-site, Secondary: Online)
Adversarial Robustness of Secret Key-Based Defenses against AutoAttack
Miki Tanaka, April Pyone MaungMaung (Tokyo Metro Univ.), Isao Echizen (NII), Hitoshi Kiya (Tokyo Metro Univ.) CAS2022-7 VLD2022-7 SIP2022-38 MSS2022-7
Deep neural network (DNN) models are well-known to easily misclassify prediction results by using input images with smal... [more] CAS2022-7 VLD2022-7 SIP2022-38 MSS2022-7
pp.34-39
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan