Committee |
Date Time |
Place |
Paper Title / Authors |
Abstract |
Paper # |
HCGSYMPO (2nd) |
2023-12-11 - 2023-12-13 |
Fukuoka |
Asia-Pacific Import Mart (Kitakyushu) (Primary: On-site, Secondary: Online) |
Estimation of effective global dimension and local dimension of expression space Daigo Mihira, Jinhui Chao (Chuo Univ.) |
The classification of facial expressions by linguistic categories, which is currently widely used in facial expression r... |
|
EMCJ, MICT (Joint) |
2023-03-17 14:15 |
Tokyo |
Kikai-Shinko-Kaikan Bldg (Primary: On-site, Secondary: Online) |
An Effect of Data Augmentation using 3D Models in Machine Lipreading on Recognition Accuracy Kazuma Kimura, Kenko Ota (NIT) MICT2022-59 |
In this study, we investigate the use of a three-dimensional model of a speaker's face as a data augmentation method for... |
MICT2022-59 pp.17-21 |
PRMU, IBISML, IPSJ-CVIM [detail] |
2023-03-02 11:05 |
Hokkaido |
Future University Hakodate (Primary: On-site, Secondary: Online) |
A Study of Word Lip-Reading using Meta Learning Michinari Kodama, Takeshi Saitoh (Kyutech) PRMU2022-77 IBISML2022-84 |
Lip-reading technology, which estimates utterance content using only visual information, is a kind of supervised learnin... |
PRMU2022-77 IBISML2022-84 pp.102-106 |
PRMU, IPSJ-CVIM |
2022-05-13 10:45 |
Aichi |
Toyota Technological Institute |
Efficient DNN model for word lip-reading Daiki Arakane, Takeshi Saitoh (Kyutech) PRMU2022-4 |
This paper studies various deep learning models for lip-reading technology, including one of supervised learning of the ... |
PRMU2022-4 pp.18-23 |
MICT, EMCJ (Joint) |
2022-03-04 09:45 |
Online |
Online |
A Study on Silent Word Recognition Based on Deep Learning Using Facial 3D Model Ryuji Wada, Kenko Ota (NIT) MICT2021-103 |
The aim of this study is to propose a method to realize silent word recognition removing the constraint on face orientat... |
MICT2021-103 pp.13-18 |
MICT, EMCJ (Joint) |
2022-03-04 10:05 |
Online |
Online |
A study on silent word recognition using various sensors Masaya Kusamoto, Kenko Ota (NIT) MICT2021-104 |
The aim of this study is to clarify the effectiveness of silent word recognition using multiple sensors. When a visible ... |
MICT2021-104 pp.19-24 |
SP |
2019-01-27 11:30 |
Ishikawa |
Kanazawa-Harmonie |
Multimodal Data Augmentation for Visual Speech Recognition using Deep Canonical Correlation Analysis Masaki Shimonishi, Satoshi Tamura, Satoru Hayamizu (Gifu University) SP2018-60 |
This paper proposes a new data augmentation strategy for deep learning, in which feature vectors in one modality can be... |
SP2018-60 pp.41-45 |
WIT, SP |
2018-10-28 09:50 |
Fukuoka |
Kyushu Institute of Technology (Kitakyushu) |
Experimental report on elderly speech disorder person for new substitute speech using lip reading Takeshi Saitoh, Michiko Kubokawa (Kyutech), Misato Hirai, Yasuyuki Noyama (Okayama Saiseikai General Hospital) SP2018-42 WIT2018-30 |
Writing and sign language are known as substitute methods of verbal communication. The laryngectomized person uses three... |
SP2018-42 WIT2018-30 pp.51-55 |
PRMU, BioX |
2018-03-19 10:50 |
Tokyo |
|
SSSD: Japanese Speech Scene Database by Smart Device for Visual Speech Recognition Takeshi Saitoh, Michiko Kubokawa (Kyushu Inst. of Tech.) BioX2017-63 PRMU2017-199 |
Speech scenes in conventional databases available for lip reading or visual speech recognition (VSR) were recorded with a ... |
BioX2017-63 PRMU2017-199 pp.163-168 |
WIT, SP |
2017-10-20 09:30 |
Fukuoka |
Tobata Library of Kyutech (Kitakyushu) |
[Invited Talk]
Research Trends in Silent Speech Recognition
-- Focusing on Lip Reading -- Takeshi Saitoh (Kyutech) SP2017-48 WIT2017-44 |
Silent speech recognition is a technique for understanding speech content without using audio information. There are var... |
SP2017-48 WIT2017-44 pp.77-81 |
IMQ |
2017-10-06 13:30 |
Hyogo |
Kobe University |
Development of new lip movement analyzer and examination of the evaluation parameter of lip movement Yuki Kurosawa, Miyuki Suganuma, Shinya Mochiduki, Yuko Hoshino, Mitsuho Yamada (Tokai Univ.) IMQ2017-13 |
We have been working on utterance training by lip-movement and utterance recognition without using voice data. These stu... |
IMQ2017-13 pp.1-4 |
IMQ |
2017-10-06 13:55 |
Hyogo |
Kobe University |
Evaluation of fatigue focused on eye movement and lip movement Miyuki Suganuma, Yuki Kurosawa, Shinya Mochiduki, Yuko Hoshino, Mitsuho Yamada (Tokai Univ.) IMQ2017-14 |
We have analyzed the concentration of drivers by the change of eye movement during gazing point while driving and s... |
IMQ2017-14 pp.5-8 |
SIS |
2017-06-01 15:20 |
Oita |
Housen-Sou (Beppu) |
A method of lip recognition using a convolutional neural network Kazuya Okano (Kyutech), Hideaki Orii (Fukuoka Univ.), Hideaki Kawano (Kyutech) SIS2017-10 |
In recent years, lip reading has been used in various fields such as communication with deaf person, understanding the c... |
SIS2017-10 pp.51-54 |
PRMU, IPSJ-CVIM, MVE [detail] |
2017-01-19 15:55 |
Kyoto |
|
Angle independent lip reading using symmetrical 3D-AAM of facial images Takuya Watanabe (TUT), Kouichi Katsurada (TUS), Yasushi Kanazawa (TUT) PRMU2016-134 MVE2016-25 |
Lip reading is a technique to recognize spoken words from only visual images of a face. There have been proposed variou... |
PRMU2016-134 MVE2016-25 pp.135-140 |
WIT |
2016-03-05 09:30 |
Ibaraki |
Tsukuba Univ. of Tech. (Tsukuba) |
Hearing Aid with Lip Reading
-- Speech Enhancement using Vowel Estimation -- Yuzuru Iinuma, Tetsuya Matsumoto (Nagoya Univ.), Yoshinori Takeuchi (Daido Univ.), Hiroaki Kudo, Noboru Ohnishi (Nagoya Univ.) WIT2015-98 |
Under highly noisy environments such as construction sites and cocktail parties, it is difficult for not only humans but... |
WIT2015-98 pp.53-58 |
IMQ |
2015-11-27 16:55 |
Kyoto |
|
How to evaluate the English pronunciation learning by lip movement Miyuki Suganuma, Tomoki Yamamura, Yuko Hoshino, Mitsuho Yamada (Tokai Univ.) IMQ2015-26 |
In a previous study, we investigated utterance recognition by focusing on lip movements, which is one aspect of multimod... |
IMQ2015-26 pp.31-36 |
SP |
2015-10-16 11:15 |
Hyogo |
Kobe Univ. |
Multi-modal speech recognition using deep bottleneck features Satoshi Tamura (Gifu Univ), Hiroshi Ninomiya (Nagoya Univ), Norihide Kitaoka (Tokushima Univ), Shin Osuga (Aisin Seiki), Yurie Iribe (Aichi Prefectural Univ), Kazuya Takeda (Nagoya Univ), Satoru Hayamizu (Gifu Univ) SP2015-69 |
In this paper, we propose a novel multi-modal speech recognition method which uses speech and lip images, employing Deep... |
SP2015-69 pp.57-62 |
LOIS |
2015-03-06 16:30 |
Okinawa |
|
A Study of Multi-Modal Speech Visualization for Deaf and Hard of Hearing People Support Yusuke Toba, Hiroyasu Horiuchi, Shinsuke Matsumoto, Sachio Saiki, Masahide Nakamura (Kobe Univ.), Tomohito Uchino, Tomohiro Yokoyama, Yasuhiro Takebayashi (School for the Deaf, University of Tsukuba) LOIS2014-94 |
Although deaf and hard of hearing (D/HH) people have various communication ways such as sign language, conversation by w... |
LOIS2014-94 pp.191-196 |
EMM |
2014-05-16 11:15 |
Tokyo |
|
[Tutorial Lecture]
Lip reading Takeshi Saitoh (Kyutech) EMM2014-6 |
Lip reading is a technique of understanding speech content by visually acquiring the movements of the lips. This technol... |
EMM2014-6 pp.29-34 |
PRMU |
2014-03-14 10:45 |
Tokyo |
|
A study on multi-modal speech recognition using depth images Naoya Ukai, Satoshi Tamura, Satoru Hayamizu (Gifu Univ.) PRMU2013-198 |
This paper presents a novel framework which uses depth information of human face and mouth movements as yet another moda... |
PRMU2013-198 pp.179-184 |