Committee |
Date Time |
Place |
Paper Title / Authors |
Abstract |
Paper # |
WIT, HI-SIGACI |
2024-12-05 11:40 |
Tokyo |
AIST Tokyo Waterfront |
Development of Eyeglass-type Switch for ALS Patients Using Eye Movement (Report 2) Ryuto Tamai, Takeshi Saitoh (Kyutech), Kazuyuki Itoh (NRCPD), Zhang Haibo (Kyutech) WIT2024-19 |
Our research group is working on developing a communication support device for ALS patients, specifically a glasses-type... |
WIT2024-19 pp.48-53 |
WIT, SP, IPSJ-SLP |
2023-10-14 13:05 |
Fukuoka |
Kyushu Institute of Technology (Primary: On-site, Secondary: Online) |
Non-contact eye-switch using eye movement based on pupil center detection Ploywow Nhuthep, Ryuto Tamai, Takeshi Saitoh (Kyutech), Kazuyuki Itoh (NRCPD) SP2023-27 WIT2023-18 |
Our aim is to develop an eye switch that uses eye movements as a means of supporting communication in ALS patients. In p... |
SP2023-27 WIT2023-18 pp.1-5 |
WIT, IPSJ-AAC |
2023-03-23 09:40 |
Online |
Online |
Mouth shape recognition for speech scene of patients with intractable neurological diseases Yuki Gondo, Yuya Nakamura, Takeshi Saitoh (Kyutech), Kazuyuki Itoh (NRCPD) WIT2022-24 |
We are working on mouth shape recognition, which is the basic technology for the development of mouth-shape character me... |
WIT2022-24 pp.27-31 |
PRMU, IBISML, IPSJ-CVIM |
2023-03-02 10:55 |
Hokkaido |
Future University Hakodate (Primary: On-site, Secondary: Online) |
Report on the 3rd Lip-Reading Challenge Takeshi Saitoh (Kyutech), Yuto Goto, Hiroyuki Nagano, Akihiro Kato, Masaki Nose (RICOH), Naoki Hiramoto (Mercoin), Tomohiro Hattori, Shiiya Aoyama, Yusuke Katoh, Ryuta Toshima, Takumi Nagawaki, Satoshi Tamura (Gifu University), Daiki Arakane (Kyutech) PRMU2022-76 IBISML2022-83 |
Machine lip-reading is a technology that uses a computer to estimate the utterance content using only visual infor... |
PRMU2022-76 IBISML2022-83 pp.97-101 |
PRMU, IBISML, IPSJ-CVIM |
2023-03-02 11:05 |
Hokkaido |
Future University Hakodate (Primary: On-site, Secondary: Online) |
A Study of Word Lip-Reading using Meta Learning Michinari Kodama, Takeshi Saitoh (Kyutech) PRMU2022-77 IBISML2022-84 |
Lip-reading technology, which estimates utterance content using only visual information, is a kind of supervised learnin... |
PRMU2022-77 IBISML2022-84 pp.102-106 |
PRMU, IPSJ-CVIM |
2022-05-13 10:45 |
Aichi |
Toyota Technological Institute |
Efficient DNN model for word lip-reading Daiki Arakane, Takeshi Saitoh (Kyutech) PRMU2022-4 |
This paper studies various deep learning models for lip-reading technology, including one of supervised learning of the ... |
PRMU2022-4 pp.18-23 |
WIT, HI-SIGACI |
2021-12-08 15:20 |
Online |
Online |
Report of Japanese lip reading ability test Takeshi Saitoh (Kyutech), Hajime Tachiiri (Ehime Univ.) WIT2021-34 |
In this paper, to measure Japanese lip-reading ability, we created four Japanese lip-reading ability tests of single sou... |
WIT2021-34 pp.13-17 |
SP, WIT, IPSJ-SLP, ASJ-H |
2021-10-19 10:35 |
Online |
Online |
Development of eyeglass-type switch for ALS patients using eye movement based on pupil center detection Kazuki Sakamoto, Takeshi Saitoh (Kyushu Inst. of Tech.), Kazuyuki Itoh (NRCPD) SP2021-27 WIT2021-20 |
This research develops an eyeglass-type switch which can be connected to a communication device using the eye movement f... |
SP2021-27 WIT2021-20 pp.18-23 |
WIT, SP, IPSJ-SLP |
2020-10-22 15:50 |
Online |
Online |
3D-CNN-based mouth shape recognition for patient with intractable neurological diseases Yuya Nakamura, Takeshi Saitoh (Kyutech), Kazuyuki Itoh (NRCPD) SP2020-13 WIT2020-14 |
|
SP2020-13 WIT2020-14 pp.27-32 |
NLC |
2020-02-17 09:55 |
Tokyo |
Seikei University |
Facilitator Identification Using Multimodal Information in Multi-party Conversation Kouki Honda, Tukasa Shiota, Kazutaka Shimada, Takeshi Saitoh (Kyutech) NLC2019-41 |
Predicting roles of participants in a conversation is one of the most important tasks for conversation understanding. In... |
NLC2019-41 pp.27-32 |
NLC |
2020-02-17 10:20 |
Tokyo |
Seikei University |
Analysing emergency care team leaders' eye gaze for understanding non-verbal behaviours in emergency care interaction Keiko Tsuchiya, Akira Taneichi (YCU), Kyota Nakamura (YCU Medical Centre), Takuma Sakai (YKH), Takeru Abe (YCU Medical Centre), Takeshi Saitoh (Kyutech) NLC2019-42 |
In emergency care interaction, a team leader collaborates with his members to safely perform medical procedures. This pr... |
NLC2019-42 pp.33-36 |
WIT, SP |
2019-10-27 10:50 |
Kagoshima |
Daiichi Institute of Technology |
Mouth shape recognition for patient with intractable neurological diseases Yuya Nakamura, Takeshi Saitoh (Kyutech), Kazuyuki Itoh (NRCPD) SP2019-33 WIT2019-32 |
Patients with intractable neuropathy may use mouth-shape character-based communication as an alternative to communicat... |
SP2019-33 WIT2019-32 pp.93-98 |
WIT, SP |
2018-10-28 09:50 |
Fukuoka |
Kyushu Institute of Technology (Kitakyushu) |
Experimental report on elderly speech disorder person for new substitute speech using lip reading Takeshi Saitoh, Michiko Kubokawa (Kyutech), Misato Hirai, Yasuyuki Noyama (Okayama Saiseikai General Hospital) SP2018-42 WIT2018-30 |
Writing and sign language are known as substitute methods of verbal communication. The laryngectomized person uses three... |
SP2018-42 WIT2018-30 pp.51-55 |
PRMU, BioX |
2018-03-19 10:50 |
Tokyo |
|
SSSD: Japanese Speech Scene Database by Smart Device for Visual Speech Recognition Takeshi Saitoh, Michiko Kubokawa (Kyushu Inst. of Tech.) BioX2017-63 PRMU2017-199 |
Speech scenes of conventional databases available for lip reading or visual speech recognition (VSR) were recorded with a ... |
BioX2017-63 PRMU2017-199 pp.163-168 |
WIT, SP |
2017-10-20 09:30 |
Fukuoka |
Tobata Library of Kyutech (Kitakyushu) |
[Invited Talk] Research Trends in Silent Speech Recognition -- Focusing on Lip Reading -- Takeshi Saitoh (Kyutech) SP2017-48 WIT2017-44 |
Silent speech recognition is a technique for understanding speech content without using audio information. There are var... |
SP2017-48 WIT2017-44 pp.77-81 |
WIT, SP |
2017-10-20 10:20 |
Fukuoka |
Tobata Library of Kyutech (Kitakyushu) |
Effective skeletons for sign language recognition Tomoya Kodama, Takeshi Saitoh (Kyutech) SP2017-49 WIT2017-45 |
Since the affordable motion sensor Microsoft Kinect was marketed, many sign language recognition methods using Kinect... |
SP2017-49 WIT2017-45 pp.83-87 |
WIT |
2016-10-16 14:55 |
Saga |
Karatsu Royal Hotel (Saga pref.) |
Sign language recognition by convolutional neural network with concatenated sequence image of depth image Keisuke Hashimura, Takeshi Saitoh (Kyutech) WIT2016-36 |
We proposed a concatenated frame image (CFI), in which sampled frame images from the scene are concatenated like a two-d... |
WIT2016-36 pp.17-22 |
PRMU, CNR |
2016-02-21 09:30 |
Fukuoka |
|
Fish Image Recognition using Convolutional Neural Network Kentaro Wakisaka, Takeshi Saitoh (Kyutech) PRMU2015-132 CNR2015-33 |
We are studying the development of an image-based fish identification system. Most traditional studies use the geometric ... |
PRMU2015-132 CNR2015-33 pp.1-5 |
PRMU, CNR |
2016-02-22 11:30 |
Fukuoka |
|
[Poster Presentation] Study on Facial Feature Point Detection using Constrained Local Model Kenta Hara, Takeshi Saitoh (Kyutech) PRMU2015-154 CNR2015-55 |
(To be available after the conference date) |
PRMU2015-154 CNR2015-55 pp.121-122 |
PRMU, CNR |
2016-02-22 11:30 |
Fukuoka |
|
[Poster Presentation] Gaze analysis on reading sign language scene Takanori Sukemune, Masataka Shibuya, Kenji Kawada, Takeshi Saitoh (Kyutech) PRMU2015-158 CNR2015-59 |
(To be available after the conference date) |
PRMU2015-158 CNR2015-59 pp.129-130 |