IEICE Technical Committee Submission System
Tech. Rep. Archives

All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 1 - 20 of 141
Committee / Date / Time / Place / Paper Title / Authors / Abstract / Paper #
IE, MVE, CQ, IMQ
(Joint)
2024-03-15
15:30
Okinawa Okinawa Sangyo Shien Center
(Primary: On-site, Secondary: Online)
High Precision Anomaly Detection using PaDiM based on Pre-training with Normality Constraint of Normal Features
Hiroki Kobayashi, Manabu Hashimoto (Chukyo Univ.) IMQ2023-89 IE2023-144 MVE2023-118
In recent years, automatic visual inspection is expected with machine learning. Among them, PaDiM is attracting attentio...
pp.408-413
NC, MBE
(Joint)
2024-03-12
11:40
Tokyo The Univ. of Tokyo
(Primary: On-site, Secondary: Online)
Generalization Model of Monkey V4 Neurons based on FCN encoder
Tsubasa Saito, Taisei Hara, Ko Sakai (Univ. of Tsukuba) NC2023-54
The V4 field in the ventral visual pathway is situated as an intermediary region processing visual information crucial f...
pp.65-68
SeMI, IPSJ-UBI, IPSJ-MBL
2024-02-29
11:30
Fukuoka
Detecting Distress Variations Using Multimodal Data Obtained through Interaction with A Smart Speaker
Chingyuan Lin, Yuki Matsuda, Hirohiko Suwa, Keiichi Yasumoto (NAIST) SeMI2023-73
Mental health significantly affects people, with excessive stress potentially causing depression, low productivity, and ...
pp.13-18
NC, MBE
(Joint)
2023-10-27
13:55
Miyagi Tohoku Univ.
(Primary: On-site, Secondary: Online)
Effect on tuning properties to 1st- and 2nd-order stimuli by inactivation of internal units in Deep Convolutional Neural Network (DCNN)
Anqi Wang, Maryu Horyozaki, Takahisa M. Sanada (IPU) NC2023-26
Object recognition relies not only on luminance, which is considered 1st-order visual feature, but also on more complex ...
pp.6-11
WIT, SP, IPSJ-SLP
2023-10-14
14:20
Fukuoka Kyushu Institute of Technology
(Primary: On-site, Secondary: Online)
Proposal for Music Visualization Method Using Chironomie for Enhancing Musical Experience of the Hearing Impaired
Tatsumi Kana, Shinji Sako (NITech) SP2023-30 WIT2023-21
The aim of this study is to enable both hearing-impaired and normal-hearing to enjoy music together by visualizing it us...
pp.17-20
DE, IPSJ-DBS, IPSJ-IFAT
2023-09-21
15:50
Fukuoka Kitakyushu International Conference Center
BoxPlotQA: Visual Question Answering for Measuring Five-Number Summary and Comparison Performance with Box Plot
Yusuke Tozaki, Hisashi Miyamori (Kyoto Sangyo Univ.) DE2023-16
Recently, visual question and answer (VQA) research on document and chart images, as well as natural images, has attract...
pp.31-36
ET
2023-03-15
10:05
Tokushima Tokushima University
(Primary: On-site, Secondary: Online)
About the Comparison of Biometric Information during Learning of Visual- and Text-based Programming Language
Katsuyuki Umezawa (Shonan Inst. of Tech.), Makoto Nakazawa (Junior College of Aizu), Shigeichi Hirasawa (Waseda Univ.) ET2022-82
Beginners in learning programming learn visual-based programming languages such as Scratch, while experts use text-based...
pp.142-147
ET
2023-03-15
14:25
Tokushima Tokushima University
(Primary: On-site, Secondary: Online)
Methods for Visualizing Discussions in Comments against Web News Considering Understanding Opinion Trends and Discussion Points
Hiroki Wakabayashi (Fukushima Univ.), Hiroki Nakayama (Yamagata Univ.), Ryo Onuma (Tsuda Univ.), Hiroaki Kaminaga (Fukushima Univ.), Youzou Miyadera (Tokyo Gakugei Univ.), Shoichi Nakamura (Fukushima Univ.) ET2022-89
Opportunities for conducting discussions and expressing opinions on the Web have continuously increased. It is desirable...
pp.184-188
MI
2023-03-06
15:34
Okinawa Okinawa Seinenkaikan
(Primary: On-site, Secondary: Online)
Reproducing method of cancer annotation by local features in pathological images
Shunya Mutsuda (KIT), Sohsuke Yamada (Kanazawa Medical Univ.), Toshiki Kindo (KIT) MI2022-90
In recent years, AI-based pathological image diagnosis technology has been actively researched. Therefore, in this study...
pp.82-87
PRMU, IBISML, IPSJ-CVIM
2023-03-02
10:35
Hokkaido Future University Hakodate
(Primary: On-site, Secondary: Online)
Investigation of Appearance Inspection Method Considering the Number of Corresponding Local Patches
Katsuhisa Kitaguchi, Yohei Nishizaki, Mamoru Saito (ORIST) PRMU2022-74 IBISML2022-81
There has been a great deal of research on appearance inspection using deep learning, which learns only from normal imag...
pp.88-92
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT
2023-02-21
10:30
Hokkaido Hokkaido Univ.
A Note on Traffic Sign Recognition Based on Vision Transformer Adapter Using Visual Feature Matching
Yaozong Gan, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama (Hokkaido Univ.)
Traffic sign recognition is a real-world task that involves many constraints and complications. Traffic sign recognition...
HIP
2022-10-18
11:15
Kyoto Kyoto Terrsa
(Primary: On-site, Secondary: Online)
Does non-spatial auditory cue enhance visual search in older adults?
Shinya Harada, Ryo Teraoka, Naoki Kuroda (Kumamoto Univ.), Souta Hidaka (Rikkyo Univ.), Wataru Teramoto (Kumamoto Univ.) HIP2022-53
A non-spatial auditory cue reportedly enhances visual search performance when the auditory cue is synchronized with a ch...
pp.32-35
ET
2022-09-17
15:45
Hiroshima Hiroshima University and Online
(Primary: On-site, Secondary: Online)
Methods for Dynamically Extracting the Human-Relationships on the Web based on Analysis of Closeness Centrality
Ayaka Ichikawa (Fukushima Univ.), Ryo Onuma (Tsuda Univ.), Hiroki Nakayama (Yamagata Univ.), Hiroaki Kaminaga (Fukushima Univ.), Youzou Miyadera (Tokyo Gakugei Univ.), Shoichi Nakamura (Fukushima Univ.) ET2022-20
To successfully conduct complicated research and explorations using the Web, it is important to accurately understand hu...
pp.58-63
ET
2022-05-28
13:10
Chiba Chiba Institute of Technology / Online
(Primary: On-site, Secondary: Online)
A Proposal and Evaluation of Intermediate Content for Transition from Visual to Text-Based Languages
Katsuyuki Umezawa (Shonan Inst. of Tech.), Kouta Ishida (Arrows Systems), Makoto Nakazawa (Junior College of Aizu), Shigeichi Hirasawa (Waseda Univ.) ET2022-1
Beginners in learning programming learn visual programming languages such as Scratch, while experts use text programming...
pp.1-7
ET
2022-03-04
15:45
Online
Data visualization function in the IoT learning system adapted to the educational needs
Takaaki Kato, Mizue Kayama (Shinshu Univ.), Takashi Nagai (iot), Yusaku Kanda, Takashi Shimizu (Shinshu Univ.) ET2021-79
We have been developing educational IoT materials for use in classes where experiments with measurement activities are r...
pp.157-162
MBE, NC
(Joint)
2022-03-03
14:40
Online
A study on EEG signal features for discriminating shapes and colors of simple visual images.
Akihiro Kato, Ryota Horie (SIT) MBE2021-96
In this study, we trained an LSTM-based discriminator proposed by Spampinato et al. to discriminate shapes and colors of...
pp.39-42
EMM
2022-01-27
15:25
Online
A study of visual and acoustic features affecting optimal levels of whole-body vibration in multisensory content
Shota Abe, Shuichi Sakamoto (Tohoku Univ.), Zhenglie Cui (Aichi Univ. of Tech.), Yoiti Suzuki (Tohoku Bunka Gakuen Univ.), Jiro Gyoba (Shokei Gakuin Univ.) EMM2021-89
Although it is well known that whole-body vibration enhances sense of reality perceived by multimodal content, it is unc...
pp.31-36
HIP
2021-10-22
13:10
Online
A scanpath prediction model using deep learning considering the context of the gazing objects
Yuhei Ohsawa, Takeshi Kohama (Kindai Univ.) HIP2021-43
Since the human gaze is a biological signal which reflects internal states such as consciousness and attention, it is po...
pp.69-74
HIP
2021-10-22
13:35
Online
A saliency estimation model for drivers' egocentric vision movies considering self-motion velocity
Yuya Homma, Masashi Fujita, Takeshi Kohama (Kindai Univ.) HIP2021-44
In order to predict where a driver’s attention should be directed during driving, Kodama et al. have developed a salienc...
pp.75-80
MVE
2021-09-17
14:00
Online
A CNN model Using Neural Style Features for Predicting Aesthetic Impressions Score Distribution
Yuya Ohagi (Kwansei Gakuin Univ.), Kensuke Tobitani (Univ. of Nagasaki), Iori Tani (Kobe Univ.), Sho Hashimoto (Seinan Gakuin Univ.), Noriko Nagata (Kwansei Gakuin Univ.) MVE2021-14
In this study, we propose a method for predicting the probability distribution of aesthetic impression scores considerin...
pp.33-37
Copyright and reproduction: All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan