IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 1 - 20 of 71
Committee Date Time Place Paper Title / Authors Abstract Paper #
SIP, SP, EA, IPSJ-SLP 2024-03-01
09:30
Okinawa
(Primary: On-site, Secondary: Online)
Constructing and Evaluating a Batch Voice Input System for Electronic Medical Records Using Large Language Models
Ryo Maejima, Norihide Kitaoka (TUT) EA2023-99 SIP2023-146 SP2023-81
This study aims to develop an electronic medical record with a voice input interface that lets users input several items...
pp.226-231
SIP, SP, EA, IPSJ-SLP 2024-03-01
10:40
Okinawa
(Primary: On-site, Secondary: Online)
An Investigation on the Speech Recovery from EEG Signals Using Transformer
Tomoaki Mizuno (The Univ. of Electro-Communications), Takuya Kishida (Aichi Shukutoku Univ.), Natsue Yoshimura (Tokyo Tech), Toru Nakashika (The Univ. of Electro-Communications) EA2023-108 SIP2023-155 SP2023-90
Synthesizing full speech from electroencephalography (EEG) signals is a challenging task. In this paper, speech reconstru...
pp.277-282
SP, NLC, IPSJ-SLP, IPSJ-NL 2023-12-02
16:00
Tokyo Kikai-Shinko-Kaikan Bldg.
(Primary: On-site, Secondary: Online)
Development and effects of English speech training drills to improve perception and production skills seamlessly with interactive gamification
Nobuaki Minematsu, Yingxiang Gao (UTokyo), Noriko Nakanishi (KGU), Yusuke Inoue, Hiroaki Mizuno (Carriage) NLC2023-15 SP2023-35
To improve aural/oral proficiency in English, various skills have to be acquired such as 1) spoken word perception, 2) m...
pp.7-12
SP, IPSJ-SLP, EA, SIP 2023-02-28
13:00
Okinawa
(Primary: On-site, Secondary: Online)
[Invited Talk] Multiple sound spot synthesis meets multilingual speech synthesis -- Implementation is really all we need --
Takuma Okamoto (NICT) EA2022-87 SIP2022-131 SP2022-51
A multilingual multiple sound spot synthesis system is implemented as a user interface for real-time speech translation ...
pp.73-76
HCS 2023-01-22
16:00
Kyoto Kyoto Institute of Technology
(Primary: On-site, Secondary: Online)
Decoding of average ERPs during silent Japanese words by attention-based RNN with encoder-decoder
Toshimasa Yamazaki, Yuko Tokunaga, Chieko Ito (KIT) HCS2022-74
This study attempted to decode average event-related potentials (ERPs) during silent Japanese words by attention-based r...
pp.108-111
PRMU 2022-10-21
15:25
Tokyo Miraikan - The National Museum of Emerging Science and Innovation
(Primary: On-site, Secondary: Online)
Features and Deep Learning Models Suitable for Speech Source Discrimination Method in Plural Voice User Interfaces Environment
Kengo Maeda, Takahiro Yoshida (TUS) PRMU2022-27
Under the situation that plural devices equipped with a voice user interface exist in the user’s environment in the near...
pp.29-34
SP, IPSJ-SLP, IPSJ-MUS 2021-06-19
09:30
Online
[Invited Talk] Toward a Unification of Various Speech Processing Tasks Based on End-to-End Neural networks
Shinji Watanabe (CMU) SP2021-8
This presentation will introduce the recent progress of speech processing technologies based on end-to-end neural networ...
p.38
SP, IPSJ-SLP, IPSJ-MUS 2021-06-19
15:00
Online
Dynamic Display of Guidelines in Interactive Speech Synthesizer
Daiki Goto (Hokkai Gakuen Univ.), Naofumi Aoki, Keisuke Ai (Hokkaido Univ.), Kunitoshi Motoki (Hokkai Gakuen Univ.) SP2021-18
We are developing a speech synthesis system that can play sounds by interactive control, just like playing a musical ins...
pp.80-84
EA, US, SP, SIP, IPSJ-SLP 2021-03-04
16:10
Online
Estimation of imagined speech from electrocorticogram with an encoder-decoder model
Kotaro Hayashi, Shuji Komeiji (TUAT), Takumi Mitsuhashi, Yasushi Iimura, Hiroharu Suzuki, Hidenori Sugano (Juntendo Univ.), Koichi Shinoda (TokyoTech), Toshihisa Tanaka (TUAT) EA2020-87 SIP2020-118 SP2020-52
Recent advances in signal processing and machine learning technologies have made it possible to estimate and reconstruct...
pp.164-169
SC 2020-03-16
10:30
Online
Cloud Speech Recognition Process Management Method Based on Device Sensor Information
Yu Fujita, Isao Tazawa, Masaharu Ukeda (Hitachi) SC2019-38
In recent years, the use of voice User Interface (VUI) to make the service interactive using speech is widespread. When ...
pp.23-28
WIT, IPSJ-AAC 2020-03-15
15:20
Ibaraki Tsukuba University of Technology
(Cancelled but technical report was issued)
Developing a communication system for an ALS patient with his voice. -- Towards the patient's and Caretakers' QOL improvement --
Akemi Ishii Iida, Daishi Miura, Yuko Yamashita (SIT), Satoshi Watanabe (HTS Tokyo), Chen Feng, Midori Sugaya (SIT) WIT2019-66
This paper describes our ongoing work on developing a communication assistive system for an ALS patient who already had ...
pp.171-176
NC, MBE
(Joint)
2020-03-05
14:35
Tokyo University of Electro Communications
(Cancelled but technical report was issued)
Detection of covert-speech-related potentials
Sho Tsukiyama, Toshimasa Yamazaki (KIT) MBE2019-87
Recently, Brain-Computer Interfaces (BCIs) using speeches for communications have been researched by electroencephalogra...
pp.35-40
WIT, SP 2019-10-27
09:00
Kagoshima Daiichi Institute of Technology
Extraction of linguistic representation and syllable recognition from EEG signal of speech-imagery
Kentaro Fukai, Hidefumi Ohmura, Kouichi Katsurada (Tokyo Univ. of Science), Satoka Hirata, Yurie Iribe (Aichi Prefectural Univ.), Mingchua Fu, Ryo Taguchi (Nagoya Inst. of Technology), Tsuneo Nitta (Waseda Univ./Toyohashi Univ. of Technology) SP2019-28 WIT2019-27
Speech imagery recognition from Electroencephalogram (EEG) is one of the challenging technologies for non-invasive brain...
pp.63-68
WIT, SP 2019-10-27
09:20
Kagoshima Daiichi Institute of Technology
Word Recognition using word likelihood vector from speech-imagery EEG
Satoka Hirata, Yurie Iribe (Aichi Prefectural Univ.), Kentaro Fukai, Kouichi Katsurada (Tokyo Univ. of Science), Tsuneo Nitta (Waseda Univ./Toyohashi Univ. of Tech.) SP2019-29 WIT2019-28
Previous research suggests that humans manipulate the machine using their electroencephalogram called BCI (Brain Compute...
pp.69-73
EA, ASJ-H, ASJ-AA 2019-07-17
15:30
Hokkaido SAPPORO COMMUNITY PLAZA
Synthesis of Unvoiced Speech in the Interactive Speech Synthesizer "Voice Pad"
Naofumi Aoki, Keisuke Ai (Hokkaido Univ.) EA2019-20
This study has developed an interactive speech synthesizer that can enable users to generate artificial speech as playin...
pp.103-107
OCS, PN, NS
(Joint)
2019-06-21
13:20
Iwate MALIOS (Morioka)
Propose an approach to defend against the attack for smart speaker
Yuya Tarutani (Okayama Univ.), Kensuke Ueda, Yoshiaki Kato (Mitsubishi Electric) NS2019-39
Smart speakers have become widespread as voice operation interface. Users can operate the various function such as home ...
pp.23-28
EA, SIP, SP 2019-03-15
13:30
Nagasaki i+Land nagasaki (Nagasaki-shi)
[Poster Presentation] A Design of Reduced Phoneme Set Based on a Language Model
Shuji Komeiji, Toshihisa Tanaka (Tokyo Univ. of Agriculture and Tech.) EA2018-134 SIP2018-140 SP2018-96
A design of reduced phoneme set based on a language model is proposed. The reduction of the phoneme set improves discrim...
pp.205-210
EMM 2019-01-11
10:20
Miyagi Tohoku Univ.
Development of an Interactive Speech Synthesizer "Voice Pad"
Naofumi Aoki, Keisuke Ai (Hokkaido Univ.) EMM2018-87
This study has developed an interactive speech synthesizer that can enable users to generate artificial speech as playin...
pp.31-32
HCGSYMPO
(2nd)

Mie Sinfonia Technology Hibiki Hall Ise
Mood Improvement by Multiple Personality Assistant Agent in Speech Recognition Failure
Takehiro Hondo, Ippei Naganuma, Kazuki Kobayashi (Shinshu Univ.)
This paper proposes a method to create a mood in a human agent speech interaction and investigates the created mood by a...
MBE, BioX 2018-07-27
14:00
Tottori
Biometric-bit-string generation from speech information on smart-phones
Aki Harada, Yasushi Yamazaki (The Univ. of Kitakyushu), Tetsushi Ohki (Shizuoka Univ.) BioX2018-18 MBE2018-26
Because smart-phones are in wide use, biometric authentication has attracted substantial attention as a user authenticat...
pp.63-68
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan