Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
IE, MVE, CQ, IMQ (Joint) |
2024-03-13 10:00 |
Okinawa |
Okinawa Sangyo Shien Center (Primary: On-site, Secondary: Online) |
Estimation of Perceived Difficulty when reading VR-based Educational Comics Using Gaze and Facial Movement, Heart Rate, and Electroencephalography Hiroyuki Ishizuka (NAIST), Kenya Sakamoto, Shizuka Shirai, Jason Orlosky (OU), Yutaro Hirao, Monica Perusquia-Hernandez (NAIST), Naoya Isoyama (OWU), Hideaki Uchiyama, Kiyoshi Kiyokawa (NAIST) IMQ2023-13 IE2023-68 MVE2023-42 |
This study explores significant biometric features that can be used to estimate subjective difficulty while reading educ... |
IMQ2023-13 IE2023-68 MVE2023-42 pp.1-6 |
AI |
2024-03-01 15:20 |
Aichi |
Room0221, Bldg.2-C, Nagoya Institute of Technology |
On Using Existing Facial Expression Recognition Model for Student Behavior Tracking Yuna Kaneko, Masato Kikuchi, Tadachika Ozono (NIT) AI2023-43 |
It is challenging for teachers to understand students' reactions during online lectures. Estimating student behaviors by... |
AI2023-43 pp.37-40 |
HCGSYMPO (2nd) |
2023-12-11 - 2023-12-13 |
Fukuoka |
Asia-Pacific Import Mart (Kitakyushu) (Primary: On-site, Secondary: Online) |
Estimation of effective global dimension and local dimension of expression space Daigo Mihira, Jinhui Chao (Chuo Univ.) |
The classification of facial expressions by linguistic categories, which is currently widely used in facial expression r... |
|
PRMU, IPSJ-CVIM, IPSJ-DCC, IPSJ-CGVI |
2023-11-16 16:50 |
Tottori |
(Primary: On-site, Secondary: Online) |
Understanding level estimation using similarities between users' understanding expression patterns Yuki Kitagishi, Naohiro Tawara, Atsunori Ogawa, Taichi Asami (NTT), Tomoko Yonezawa (Kansai Univ.) PRMU2023-26 |
We define three-degree understanding levels of low/neutral/high as an audience member looks like they are understanding ... |
PRMU2023-26 pp.56-61 |
PRMU, IPSJ-CVIM, IPSJ-DCC, IPSJ-CGVI |
2023-11-17 09:20 |
Tottori |
(Primary: On-site, Secondary: Online) |
Research on automatic different sign language conversion and CG presentation based on sign language video analysis Masamitsu Miyata, Masamitsu Nishi, Shinya Fukumoto, Masayuki Kashima, Mutsumi Watanabe (Kagoshima Univ.) PRMU2023-31 |
Sign languages vary from country to country, making it difficult for signers from different countries to communicate wit... |
PRMU2023-31 pp.86-91 |
HCS, CNR |
2023-11-05 15:05 |
Tokyo |
Kogakuin University (Primary: On-site, Secondary: Online) |
Investigation of Effective Features for Estimating Agreement or Disagreement by Facial Images in Online Conferences Hiroki Saito, Kyotaro Sato, Junji Yamato (Kogakuin Univ.) CNR2023-15 HCS2023-77 |
In online conferences, it is more difficult to capture detailed facial expression changes than in offline conferences. I... |
CNR2023-15 HCS2023-77 pp.45-49 |
HCS |
2023-01-22 14:55 |
Kyoto |
Kyoto Institute of Technology (Primary: On-site, Secondary: Online) |
Facial expression and recognition related to deliciousness during eating Kae Mukai, Kensuke Nakazato, Katsumi Watanabe (Waseda Univ.) HCS2022-72 |
We investigated the facial expression and facial recognition related to deliciousness during eating. The results reveale... |
HCS2022-72 pp.98-101 |
HCS |
2022-10-27 14:20 |
Online |
Online |
Do mood states contribute to facial expression perception? Qi Fan (Waseda Univ.), Kyoko Ito (Kyoto Tachibana Univ./Osaka Univ.), Fusako Koshikawa (Waseda Univ.) HCS2022-51 |
This study intends to investigate how mood states affect facial expression perception. The study examined how depression... |
HCS2022-51 pp.18-23 |
MVE, VRSJ-SIG-MR, IPSJ-EC, HI-SIG-DeMO, VRSJ-SIG-CS |
2022-10-07 09:20 |
Hokkaido |
(Primary: On-site, Secondary: Online) |
Detection of human boredom from video Yuki Tachikawa, Atsushi Nakazawa (Kyoto Univ.) MVE2022-27 |
Prediction of individual internal state is an essential element to realize future affective interactive systems. Neverth... |
MVE2022-27 pp.52-56 |
HCS |
2022-01-28 16:20 |
Online |
Online |
Laterality effect in genuine smile perception Riko Nakashima (Waseda Univ.), Tomoko Isomura (Nagoya Univ.), Tatsunori Ishii, Katsumi Watanabe (Waseda Univ.) HCS2021-59 |
(To be available after the conference date) |
HCS2021-59 pp.95-100 |
HCGSYMPO (2nd) |
2021-12-15 - 2021-12-17 |
Online |
Online |
Toward the ordinal scale based facial expression evaluation Kei Shimonishi, Kazuaki Kondo, Junyao Zhang, Yuichi Nakamura (Kyoto Univ.) |
Traditional frameworks of facial expression recognition mainly focus on a clear facial expression as a classification pr... |
|
HCGSYMPO (2nd) |
2021-12-15 - 2021-12-17 |
Online |
Online |
Construction of a facial expression database of Japanese elderly people and Benchmarking by facial expression analysis system Hiroto Murakami, Naoto Yoshida (Nagoya Univ.), Tomoko Yonezawa (Kansai Univ.), Yu Enokibori, Kenji Mase (Nagoya Univ.) |
In this study, we constructed a database of facial expressions of Japanese elderly people and analyzed their features us... |
|
HIP |
2021-10-21 10:10 |
Online |
Online |
Understanding Estimation of Web-Meeting Participants Using Multiple-Understanding States by Web Camera Video Yuki Kitagishi, Hosana Kamiyama, Takeshi Mori, Taichi Asami, Naohiro Tawara (NTT), Tomoko Yonezawa (Kansai Univ.) HIP2021-30 |
In this study, we propose a new estimation method of the five-level participant's understanding in a web conference from... |
HIP2021-30 pp.1-6 |
IMQ, HIP |
2021-07-09 15:35 |
Online |
Online |
Recognition of facial expression transition caused by positive and negative visual stimuli Junyao Zhang, Kei Shimonishi, Kazuaki Kondo, Kanako Obata, Yuichi Nakamura (Kyoto Univ.) IMQ2021-4 HIP2021-19 |
This work aims to provide a novel objective measurement of facial expression, which can be used for dementia care, rehab... |
IMQ2021-4 HIP2021-19 pp.11-16 |
HCS, HIP, HI-SIGCOASTER |
2021-05-25 12:50 |
Online |
Online |
Development of a Job Interview Training System with Multi-modal Behavior Analysis Nao Takeuchi, Tomoko Koda (OIT) HCS2021-10 HIP2021-10 |
This paper introduces a job interview training system that recognizes the nonverbal behaviors of the interviewee, namely... |
HCS2021-10 HIP2021-10 pp.50-54 |
HCS |
2021-01-24 11:40 |
Online |
Online |
Recognizing detailed smile intensity changes for QOL estimation Taichi Nakamura, Kazuaki Kondo, Yuichi Nakamura (Kyoto Univ.), Shinichi Satou (NII) HCS2020-59 |
This paper introduces a novel method for recognizing smile toward the use of QOL (Quality Of Life) estimation. Facial ... |
HCS2020-59 pp.39-44 |
NLP |
2020-05-15 11:25 |
Online |
Online |
Facial Expression Recognition by a Neural Network Inspired from Processing between the Visual Cortex and Amygdala Daiki Yoshihara, Toshikazu Samura (Yamaguchi Univ.) NLP2020-2 |
Facial expressions are important to communication. The visual cortex and amygdala are involved in the recognition of fac... |
NLP2020-2 pp.7-10 |
HIP, HCS, HI-SIGCOASTER |
2020-05-14 14:20 |
Online |
Online |
A New Algorithm for Local Isometric Maps in Riemann Spaces Masashi Shinto, Chao Jinhui (Chuo Univ.) HCS2020-5 HIP2020-5 |
A psychophysical space is a space of physical stimuli in which the JND (Just-Noticeable-Difference) discrimination thres... |
HCS2020-5 HIP2020-5 pp.21-24 |
IE, IMQ, MVE, CQ (Joint) |
2020-03-06 15:20 |
Fukuoka |
Kyushu Institute of Technology (Cancelled but technical report was issued) |
Facial Expression Recognition with Application of Augmented Reality Technology on Amusement of Human Body Model in Virtual Reality with Converted Appearance of Oneself Operated by Whole Body Motion Yang Bowen, Ikoma Norikazu (NIT) CQ2019-157 |
As an amusement that can be played in response to human motion in real time, the face of tester is beautified from image... |
CQ2019-157 pp.119-123 |
HIP, ASJ-H |
2020-02-16 11:15 |
Okinawa |
IT Souzou-kan Bldg. (Naha) |
Estimation of the communication mood using facial expressions during conversation. Hiroyuki Umemura, Tomomi Fujimura (AIST) HIP2019-90 |
In the present study, we measured facial expressions during conversations had by two participants and investigated the r... |
HIP2019-90 pp.67-71 |