IEICE Technical Committee Submission System
Tech. Rep. Archives
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 21 - 40 of 71
Committee | Date | Time | Place | Paper Title / Authors | Abstract | Paper #
PRMU, IBISML, IPSJ-CVIM 2023-03-02
11:40
Hokkaido Future University Hakodate
(Primary: On-site, Secondary: Online)
Novel Adversarial Attacks Based on Embedding Geometry of Data Manifolds
Masahiro Morita, Hajime Tasaki, Jinhui Chao (Chuo Univ.) PRMU2022-84 IBISML2022-91
It has been shown recently that adversarial examples inducing misclassification by deep neural networks exist in the ort...
pp.140-145
SIS 2023-03-02
13:30
Chiba Chiba Institute of Technology
(Primary: On-site, Secondary: Online)
An image watermarking method using adversarial perturbations
Sei Takano, Mitsuji Muneyasu, Soh Yoshida (Kansai Univ.) SIS2022-43
The performance of convolutional neural networks (CNNs) has been dramatically improved in recent years, and they have at...
pp.15-20
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT 2023-02-22
09:45
Hokkaido Hokkaido Univ.
Probabilistic Approach towards Theoretical Understanding for Adversarial Training
Soichiro Kumano (UTokyo), Hiroshi Kera (Chiba Univ.), Toshihiko Yamasaki (UTokyo) ITS2022-59 IE2022-76
In this paper, we provide the first theoretical analysis of the training dynamics of adversarial training of deep neural...
pp.95-100
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT 2023-02-22
10:15
Hokkaido Hokkaido Univ.
Generation Method of Targeted Adversarial Examples using Gradient Information for the Target Class of the Image
Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) ITS2022-61 IE2022-78
With the advancement of AI technology, the vulnerability of AI systems has been pointed out. Adversarial examples (AE), whi...
pp.107-111
EMM 2023-01-26
09:55
Miyagi Tohoku Univ.
(Primary: On-site, Secondary: Online)
On the Transferability of Adversarial Examples between Isotropic Network and CNN models
Miki Tanaka (Tokyo Metropolitan Univ.), Isao Echizen (NII), Hitoshi Kiya (Tokyo Metropolitan Univ.) EMM2022-62
Deep neural networks are well known to be vulnerable to adversarial examples (AEs). In addition, AEs generated for a sou...
pp.7-12
SIS 2022-12-05
15:10
Osaka
(Primary: On-site, Secondary: Online)
Application of Adversarial Training in Detection of Calcification Regions from Dental Panoramic Radiographs
Sei Takano, Mitsuji Muneyasu, Soh Yoshida, Akira Asano (Kansai Univ.), Keiichi Uchida (Matsumoto Dental Univ. Hospital) SIS2022-28
Calcification regions that are a sign of vascular diseases may be observed on dental panoramic radiographs. The finding ...
pp.26-31
VLD, DC, RECONF, ICD, IPSJ-SLDM 2022-11-30
16:40
Kumamoto  
(Primary: On-site, Secondary: Online)
Evaluation of Model Quantization Method on Vitis-AI for Mitigating Adversarial Examples
Yuta Fukuda, Kota Yoshida, Takeshi Fujino (Ritsumeikan Univ.) VLD2022-51 ICD2022-68 DC2022-67 RECONF2022-74
Adversarial examples (AEs) are security threats in deep neural networks (DNNs). One of the countermeasures is adversaria...
pp.182-187
HWS, ICD 2022-10-25
13:50
Shiga
(Primary: On-site, Secondary: Online)
Fundamental Study of Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface
Tatsuya Oyama, Kota Yoshida, Shunsuke Okura, Takeshi Fujino (Ritsumeikan Univ.) HWS2022-36 ICD2022-28
Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been prop...
pp.35-40
SIP 2022-08-26
14:26
Okinawa Nobumoto Ohama Memorial Hall (Ishigaki Island)
(Primary: On-site, Secondary: Online)
Generation method of Adversarial Examples using XAI
Ryo Kumagai, Shu Takemoto, Yusuke Nozaki, Masaya Yoshikawa (Meijo Univ.) SIP2022-72
With the advancement of AI technology, AI can be applied to various fields. Therefore, the accountability for the decisio...
pp.115-120
NC, IBISML, IPSJ-BIO, IPSJ-MPS 2022-06-27
15:30
Okinawa
(Primary: On-site, Secondary: Online)
Evaluating and Enhancing Reliabilities of AI-Powered Tools -- Adversarial Robustness --
Jingfeng Zhang (RIKEN-AIP) NC2022-4 IBISML2022-4
When we deploy models trained by standard training (ST), they work well on natural test data. However, those models cann...
pp.20-46
IA, ICSS 2022-06-24
10:25
Nagasaki Univ. of Nagasaki
(Primary: On-site, Secondary: Online)
Application of Adversarial Examples to Physical ECG Signals
Taiga Ono (Waseda Univ.), Takeshi Sugawara (UEC), Jun Sakuma (Tsukuba Univ./RIKEN), Tatsuya Mori (Waseda Univ./RIKEN/NICT) IA2022-11 ICSS2022-11
This work aims to assess the reality and feasibility of applying adversarial examples to attack cardiac diagnosis system...
pp.61-66
CAS, SIP, VLD, MSS 2022-06-16
14:40
Aomori Hachinohe Institute of Technology
(Primary: On-site, Secondary: Online)
Adversarial Robustness of Secret Key-Based Defenses against AutoAttack
Miki Tanaka, April Pyone MaungMaung (Tokyo Metro Univ.), Isao Echizen (NII), Hitoshi Kiya (Tokyo Metro Univ.) CAS2022-7 VLD2022-7 SIP2022-38 MSS2022-7
Deep neural network (DNN) models are well-known to easily misclassify prediction results by using input images with smal...
pp.34-39
IT, EMM 2022-05-17
15:05
Gifu Gifu University
(Primary: On-site, Secondary: Online)
Generating patch-wise adversarial examples for avoidance of face recognition system and verification of its robustness
Hiroto Takiwaki, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.) IT2022-5 EMM2022-5
Advances in machine learning technologies such as Convolutional Neural Networks (CNN) have made it possible to identify ...
pp.23-28
IT, EMM 2022-05-17
15:30
Gifu Gifu University
(Primary: On-site, Secondary: Online)
A study of adversarial example detection using the correlation between adversarial noise and JPEG compression-derived distortion
Kenta Tsunomori, Yuma Yamasaki, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Isao Echizen (NII) IT2022-6 EMM2022-6
Adversarial examples cause misclassification of image classifiers. Higashi et al. proposed a method to detect adversari...
pp.29-34
PRMU, IPSJ-CVIM 2022-03-10
17:30
Online
Adversarial Training: A Survey
Hiroki Adachi, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi (Chubu Univ.) PRMU2021-73
Adversarial training (AT) is a training method that aims to obtain a robust model for defending against the adversarial attack b...
pp.78-90
EMM 2022-03-07
15:25
Online (Primary: Online, Secondary: On-site)
[Poster Presentation] A Proposal for Emotion-Expressive Editor: EmoEditor by Font Changing
Yuuki Shimamura, Michiharu Niimi (KIT) EMM2021-100
Text media is one of the important means of communication on computers. For example, email, LINE, and Twitter use it frequent...
pp.46-51
EMM 2022-03-07
17:00
Online (Primary: Online, Secondary: On-site)
Extension of robust image classification system with Adversarial Example Detectors
Miki Tanaka, Takayuki Osakabe, Hitoshi Kiya (Tokyo Metro. Univ.) EMM2021-105
In image classification with deep learning, there is a risk that an attacker can intentionally manipulate the prediction...
pp.76-80
NLP, MICT, MBE, NC
(Joint)
2022-01-23
11:45
Online
Adversarial Training with Knowledge Distillation considering Intermediate Feature Representation in CNNs
Hikaru Higuchi (The Univ. of Electro-Communications), Satoshi Suzuki (former NTT), Hayaru Shouno (The Univ. of Electro-Communications) NC2021-44
Adversarial examples are one of the vulnerability attacks on convolutional neural networks (CNNs). The adversarial exampl...
pp.59-64
IBISML 2022-01-18
14:00
Online
Robustness to Adversarial Examples by Mixtures of L1 Regularization Models
Hironobu Takenouchi, Junichi Takeuchi (Kyushu Univ.) IBISML2021-26
We propose a method of adversarial training using L1 regularization for image classification. It is known that L1 regulari...
pp.61-66
MIKA
(3rd)
2021-10-28
10:30
Okinawa
(Primary: On-site, Secondary: Online)
[Poster Presentation] Examination of Majority Decision Method for Network Intrusion Detection System Using Deep Learning
Koko Nishiura, Yuju Ogawa, Tomotaka Kimura, Jun Cheng (Doshisha Univ.)
In recent years, the importance of NIDS (Network Intrusion Detection Systems), which detects unauthorized access, has be...
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan