Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
SIP, SP, EA, IPSJ-SLP | 2024-02-29 10:30 | Okinawa (Primary: On-site, Secondary: Online)
Multi-task learning with age information model for highly accurate elderly speech recognition. Shine Takumi, Kinouchi Takahiro, Wakabayashi Yukoh, Kitaoka Norihide (TUT) EA2023-64 SIP2023-111 SP2023-46 |
Speech recognition for the elderly is less accurate, especially with smart speakers, due to aging-rel... [more]
EA2023-64 SIP2023-111 SP2023-46 pp.19-24 |
SIP, SP, EA, IPSJ-SLP | 2024-02-29 15:45 | Okinawa (Primary: On-site, Secondary: Online)
|
We have developed automatic speech recognition and dialect identification techniques by using COJADS, a corpus of Japane... [more] |
|
SIP, SP, EA, IPSJ-SLP | 2024-03-01 09:30 | Okinawa (Primary: On-site, Secondary: Online)
Constructing and Evaluating a Batch Voice Input System for Electronic Medical Records Using Large Language Models Ryo Maejima, Norihide Kitaoka (TUT) EA2023-99 SIP2023-146 SP2023-81 |
This study aims to develop an electronic medical record with a voice input interface that lets users input several items... [more] |
EA2023-99 SIP2023-146 SP2023-81 pp.226-231 |
SIP, SP, EA, IPSJ-SLP | 2024-03-01 09:30 | Okinawa (Primary: On-site, Secondary: Online)
Domain adaptation of speech recognition model based on multilingual SSL model with only nonparallel corpus. Takahiro Kinouchi (TUT), Atsunori Ogawa (NTT), Yukoh Wakabayashi (TUT), Kengo Ohta (NITA), Norihide Kitaoka (TUT) EA2023-100 SIP2023-147 SP2023-82 |
Automatic speech recognition (ASR) models are used in various services and businesses, and each domain’s recognition acc... [more] |
EA2023-100 SIP2023-147 SP2023-82 pp.232-237 |
SIP, SP, EA, IPSJ-SLP | 2024-03-01 09:30 | Okinawa (Primary: On-site, Secondary: Online)
Evaluation of Automatic Speech Recognition for Deaf and Hard-of-Hearing People by Speaker Adaptation. Kaito Takahashi, Takahiro Kinouchi, Yukoh Wakabayashi (TUT), Kengo Ohta (NITAC), Akio Kobayashi (Yamato Univ.), Norihide Kitaoka (TUT) EA2023-102 SIP2023-149 SP2023-84 |
Communication between normal-hearing people and the deaf generally relies on sign language, written communication, and spe... [more]
EA2023-102 SIP2023-149 SP2023-84 pp.244-249 |
SIP, SP, EA, IPSJ-SLP | 2024-03-01 10:40 | Okinawa (Primary: On-site, Secondary: Online)
An Investigation into Weighting Strategies for Model Averaging in Continual Learning for Automatic Speech Recognition Kentaro Shinayama, Hiroshi Sato, Tomoharu Iwata, Takeshi Mori, Taichi Asami (NTT) EA2023-105 SIP2023-152 SP2023-87 |
In recent years, the application scope of speech recognition AI has expanded, enabling the acquisition of diverse data d... [more] |
EA2023-105 SIP2023-152 SP2023-87 pp.262-267 |
SIP, SP, EA, IPSJ-SLP | 2024-03-01 10:40 | Okinawa (Primary: On-site, Secondary: Online)
Substitution of Implicit Linguistic Information in Beam Search Decoding Using CTC-based Speech Recognition Models Tatsunari Takagi, Yukoh Wakabayashi (TUT), Atsunori Ogawa (NTT), Norihide Kitaoka (TUT) EA2023-106 SIP2023-153 SP2023-88 |
The rise of neural networks in the field of automatic speech recognition has notably improved the accuracy of speech rec... [more] |
EA2023-106 SIP2023-153 SP2023-88 pp.268-273 |
SIP, SP, EA, IPSJ-SLP | 2024-03-01 16:35 | Okinawa (Primary: On-site, Secondary: Online)
Simulation Evaluation of Speech Detection Based on Distributed Sound-to-Light Conversion Device Blinkies Satoshi Motoyama, Natsuki Ueno, Masahiro Yasuda (TMU), Yuma Kinoshita (Tokai Univ.), Nobutaka Ono (TMU) EA2023-126 SIP2023-173 SP2023-108 |
The purpose of this study is speech detection using the distributed sound-to-light conversion device Blinkies. As an ini... [more] |
EA2023-126 SIP2023-173 SP2023-108 pp.382-387 |
SIP, SP, EA, IPSJ-SLP | 2024-03-01 16:35 | Okinawa (Primary: On-site, Secondary: Online)
Evaluations of Multi-channel Blind Source Separation for Speech Recognition in Car Environments Yutsuki Takeuchi, Natsuki Ueno, Nobutaka Ono (Tokyo Metropolitan Univ.), Takashi Takazawa, Shuhei Shimanoe, Tomoki Tanemura (MIRISE Technologies) EA2023-127 SIP2023-174 SP2023-109 |
In car environments, speech recognition is difficult due to various types of noise. To address this issue, speech enhancement b... [more]
EA2023-127 SIP2023-174 SP2023-109 pp.388-393 |
SP, NLC, IPSJ-SLP, IPSJ-NL | 2023-12-03 09:30 | Tokyo, Kikai-Shinko-Kaikan Bldg. (Primary: On-site, Secondary: Online)
Enhancing Recognition of Rare Words in ASR through Error Detection and Context-Aware Error Correction Jiajun He, Zekun Yang, Tomoki Toda (Nagoya Univ.) NLC2023-16 SP2023-36 |
Automatic speech recognition (ASR) systems often suffer from errors, particularly when recognizing rare words. These err... [more] |
NLC2023-16 SP2023-36 pp.13-18 |
SP, NLC, IPSJ-SLP, IPSJ-NL | 2023-12-03 11:05 | Tokyo, Kikai-Shinko-Kaikan Bldg. (Primary: On-site, Secondary: Online)
[Poster Presentation]
Enhancing Multi-Accent Automated Speech Recognition with Accent-Activated Adapters Yuqin Lin, Longbiao Wang, Jianwu Dang (Tianjin Univ. & Univ. of Tokyo), Nobuaki Minematsu (Univ. of Tokyo) NLC2023-18 SP2023-38 |
This paper proposes the Accent-Activated adapter (AccentAct) approach to address the challenge of speech variations in m... [more] |
NLC2023-18 SP2023-38 pp.25-30 |
SP, NLC, IPSJ-SLP, IPSJ-NL | 2023-12-03 11:05 | Tokyo, Kikai-Shinko-Kaikan Bldg. (Primary: On-site, Secondary: Online)
[Poster Presentation]
Enhancing Dysarthric Speech Recognition with Auxiliary Feature Fusion Module: Exploring Articulatory-related Features from Foundation Models Yuqin Lin, Longbiao Wang, Jianwu Dang (Tianjin Univ. & Univ. of Tokyo), Nobuaki Minematsu (Univ. of Tokyo) NLC2023-19 SP2023-39 |
Addressing dysarthric speech variability in Automatic Speech Recognition (ASR) is crucial for improving human-computer i... [more] |
NLC2023-19 SP2023-39 pp.31-36 |
EMM, EA, ASJ-H | 2023-11-23 13:00 | Toyama
[Poster Presentation]
** , (**) |
As a study of speech intelligibility estimation methods using speech recognition, we simulated a subjective evaluation t... [more] |
EA2023-45 EMM2023-76 pp.93-97 |
ET | 2023-10-21 15:30 | Nagano, Shinshu University Faculty of Engineering
"Listening" Performance of Generative AI and Elementary Foreign Language Learners in Code-Switching Discourse Sunaoka Kazuko (Waseda Univ.), Qin Xu (Kyoto Univ.) ET2023-23 |
We used the Whisper model to automatically recognize and process teachers' Japanese and Chinese code-switching (CS) in a... [more] |
ET2023-23 pp.33-37 |
ET | 2023-07-14 13:10 | Hokkaido, Muroran Institute of Technology / Online (Primary: On-site, Secondary: Online)
English Pronunciation Practice Using the Speech Recognition Function Katsuyuki Umezawa (Shonan Inst. of Tech.), Makoto Nakazawa (Junior College of Aizu), Michiko Nakano, Shigeichi Hirasawa (Waseda Univ.) ET2023-9 |
AI has advanced remarkably in recent years, and speech recognition functions have become wide... [more]
ET2023-9 pp.1-6 |
SP, IPSJ-MUS, IPSJ-SLP | 2023-06-23 13:50 | Tokyo (Primary: On-site, Secondary: Online)
Speech Emotion Recognition based on Emotional Label Sequence Estimation Considering Phoneme Class Attribute Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) SP2023-9 |
Recently, many researchers have tackled speech emotion recognition (SER), which predicts emotion conveyed by speech. In ... [more] |
SP2023-9 pp.42-47 |
SP, IPSJ-MUS, IPSJ-SLP | 2023-06-23 13:50 | Tokyo (Primary: On-site, Secondary: Online)
[Poster Presentation]
Generation of colored subtitle images based on emotional information of speech utterances Fumiya Nakamura (Kobe Univ.), Ryo Aihara (Mitsubishi Electric), Ryoichi Takashima, Tetsuya Takiguchi (Kobe Univ.), Yusuke Itani (Mitsubishi Electric) SP2023-11 |
Conventional automatic subtitle generation systems based on speech recognition do not take into account paralinguistic i... [more] |
SP2023-11 pp.54-59 |
SP, IPSJ-MUS, IPSJ-SLP | 2023-06-23 13:50 | Tokyo (Primary: On-site, Secondary: Online)
Streaming End-to-End speech recognition using a CTC decoder with substituted linguistic information Tatsunari Takagi (TUT), Atsunori Ogawa (NTT), Norihide Kitaoka, Yukoh Wakabayashi (TUT) SP2023-12 |
Speech recognition technology has been employed in various fields due to the enhancement of speech recognition model acc... [more] |
SP2023-12 pp.60-64 |
SP, IPSJ-MUS, IPSJ-SLP | 2023-06-24 13:50 | Tokyo (Primary: On-site, Secondary: Online)
Domain adaptation of speech recognition models based on self-supervised learning using target domain speech Takahiro Kinouchi (TUT), Atsunori Ogawa (NTT), Yukoh Wakabayashi, Norihide Kitaoka (TUT) SP2023-19
In this study, we propose a domain adaptation method using only speech data in the target domain without using transcrib... [more] |
SP2023-19 pp.91-96 |
SP, IPSJ-MUS, IPSJ-SLP | 2023-06-24 13:50 | Tokyo (Primary: On-site, Secondary: Online)
Automatic speech recognition model simultaneously recognizes linguistic information and verbal/non-verbal phenomena Nagito Shione, Yukoh Wakabayashi, Norihide Kitaoka (TUT) SP2023-22 |
Although speech recognition technology has advanced in recent years, most systems recognize only linguistic information ... [more]
SP2023-22 pp.109-113 |