Committee |
Date Time |
Place |
Paper Title / Authors |
Abstract |
Paper # |
SP, IPSJ-MUS, IPSJ-SLP |
2024-06-15 13:50 |
Tokyo |
(Primary: On-site, Secondary: Online) |
[Poster Presentation]
A voice synthesizer operated by fingers to control its vocal-tract area function. Amane Koriki, Masashi Ito (Tohtech) SP2024-7 |
(To be available after the conference date) |
SP2024-7 pp.33-36 |
SIP, SP, EA, IPSJ-SLP |
2024-03-01 10:40 |
Okinawa |
(Primary: On-site, Secondary: Online) |
Intermediate speaker speech synthesis between two speakers using x-vector speaker space Sota Hosoi, Takahiro Kinouchi, Yukoh Wakabayashi, Norihide Kitaoka (TUT) EA2023-103 SIP2023-150 SP2023-85 |
Recent advancements in speech synthesis technologies have enabled the synthesis of speeches of speakers not in the train... |
EA2023-103 SIP2023-150 SP2023-85 pp.250-255 |
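The entry above describes synthesizing an intermediate speaker by moving between two speakers in an x-vector embedding space. A minimal sketch of that idea is linear interpolation between two speaker embeddings followed by L2 normalization; this is a generic illustration with toy 2-D vectors, not the authors' actual pipeline (their embedding dimensionality, synthesis model, and interpolation scheme are not given in the truncated abstract):

```python
import math

def interpolate_xvectors(v_a, v_b, alpha):
    """Linearly interpolate two speaker embeddings.

    alpha = 0.0 returns speaker A's vector, alpha = 1.0 speaker B's;
    intermediate values give an 'intermediate speaker' point in the
    embedding space. The result is L2-normalized, since x-vectors are
    typically compared on the unit hypersphere.
    """
    mixed = [(1.0 - alpha) * a + alpha * b for a, b in zip(v_a, v_b)]
    norm = math.sqrt(sum(x * x for x in mixed)) or 1.0
    return [x / norm for x in mixed]

# Toy 2-D embeddings standing in for real (e.g. 512-dim) x-vectors
speaker_a = [1.0, 0.0]
speaker_b = [0.0, 1.0]
midpoint = interpolate_xvectors(speaker_a, speaker_b, 0.5)
```

In practice the interpolated vector would condition a multi-speaker synthesis model in place of a real speaker's embedding.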
SP, IPSJ-SLP, IPSJ-MUS |
2021-06-19 15:00 |
Online |
Online |
Dynamic Display of Guidelines in Interactive Speech Synthesizer Daiki Goto (Hokkai Gakuen Univ.), Naofumi Aoki, Keisuke Ai (Hokkaido Univ.), Kunitoshi Motoki (Hokkai Gakuen Univ.) SP2021-18 |
We are developing a speech synthesis system that can play sounds by interactive control, just like playing a musical ins... |
SP2021-18 pp.80-84 |
WIT, SP, IPSJ-SLP |
2020-10-22 13:00 |
Online |
Online |
[Invited Talk]
NHK's activities on Japanese end-to-end speech synthesis Kiyoshi Kurihara (NHK) SP2020-11 WIT2020-12 |
The main business of NHK (Japan Broadcasting Corporation) is the production and broadcasting of programs. Many programs ... |
SP2020-11 WIT2020-12 pp.19-20 |
EA, ASJ-H |
2020-07-21 11:00 |
Online |
Online |
Possibilities of Gamification for Learning How to Use an Interactive Speech Synthesizer "Voice Pad" Daiki Goto (Hokkai Gakuen Univ.), Naofumi Aoki, Keisuke Ai (Hokkaido Univ.), Kunitoshi Motoki (Hokkai Gakuen Univ.) EA2020-11 |
This study has developed an interactive speech synthesizer that can enable users to synthesize speech as playing musical... |
EA2020-11 pp.63-66 |
WIT, IPSJ-AAC |
2020-03-15 15:20 |
Ibaraki |
Tsukuba University of Technology (Cancelled but technical report was issued) |
Developing a communication system for an ALS patient with his voice
-- Towards the patient's and caretakers' QOL improvement -- Akemi Ishii Iida, Daishi Miura, Yuko Yamashita (SIT), Satoshi Watanabe (HTS Tokyo), Chen Feng, Midori Sugaya (SIT) WIT2019-66 |
This paper describes our ongoing work on developing a communication assistive system for an ALS patient who already had ... |
WIT2019-66 pp.171-176 |
SP, EA, SIP |
2020-03-03 09:00 |
Okinawa |
Okinawa Industry Support Center (Cancelled but technical report was issued) |
[Poster Presentation]
An Educational Study on Prosodic Symbols and Their Acoustic Realization Using Japanese End-to-end Speech Synthesis Fuki Yoshizawa (UTokyo), Tadashi Kumano (NHK), Nobuaki Minematsu (UTokyo), Kiyoshi Kurihara (NHK) EA2019-137 SIP2019-139 SP2019-86 |
In order to examine the educational effect of presenting prosodic symbols to learners of Japanese, a method was proposed... |
EA2019-137 SIP2019-139 SP2019-86 pp.207-212 |
EA, ASJ-H, ASJ-AA |
2019-07-17 15:30 |
Hokkaido |
SAPPORO COMMUNITY PLAZA |
Synthesis of Unvoiced Speech in the Interactive Speech Synthesizer "Voice Pad" Naofumi Aoki, Keisuke Ai (Hokkaido Univ.) EA2019-20 |
This study has developed an interactive speech synthesizer that can enable users to generate artificial speech as playin... |
EA2019-20 pp.103-107 |
EA, SIP, SP |
2019-03-14 13:30 |
Nagasaki |
i+Land nagasaki (Nagasaki-shi) |
[Poster Presentation]
Use and evaluation of Tacotron and context features in rakugo speech synthesis Shuhei Kato (SOKENDAI/NII), Shinji Takaki, Junichi Yamagishi (NII), Yusuke Yasuda (SOKENDAI/NII), Xin Wang (NII) EA2018-126 SIP2018-132 SP2018-88 |
We have been working on constructing rakugo (a traditional Japanese verbal entertainment) speech synthesis toward speech... |
EA2018-126 SIP2018-132 SP2018-88 pp.161-166 |
EMM |
2019-01-11 10:20 |
Miyagi |
Tohoku Univ. |
Development of an Interactive Speech Synthesizer "Voice Pad" Naofumi Aoki, Keisuke Ai (Hokkaido Univ.) EMM2018-87 |
This study has developed an interactive speech synthesizer that can enable users to generate artificial speech as playin... |
EMM2018-87 pp.31-32 |
MVE |
2017-10-19 17:30 |
Hokkaido |
Kitami Institute of Technology |
Development of an Interactive Speech Synthesizer Using Pure Data Naofumi Aoki, Keisuke Ai (Hokkaido Univ.) MVE2017-31 |
This study aims at making an interactive speech synthesizer that can enable users to produce synthesized speech as in pl... |
MVE2017-31 pp.55-58 |
EA, SP, SIP |
2016-03-29 09:00 |
Oita |
Beppu International Convention Center B-ConPlaza |
[Poster Presentation]
An experimental study of designing context labels for infant-directed storytelling speech synthesis Kyota Hyakutake, Daisuke Saito, Nobuaki Minematsu (UTokyo) EA2015-112 SIP2015-161 SP2015-140 |
Context labels for infant-directed storytelling speech synthesis are investigated. After collecting one-hour storytellin... |
EA2015-112 SIP2015-161 SP2015-140 pp.255-260 |
WIT |
2016-03-04 15:50 |
Ibaraki |
Tsukuba Univ. of Tech. (Tsukuba) |
Ball Detection Function for the Blind Bowling Support System using a Depth Sensor Makoto Kobayashi (NTUT) WIT2015-95 |
A blind bowling support system is being developed with purpose to realize an alternative solution to supporting by a sig... |
WIT2015-95 pp.37-40 |
SP |
2016-01-14 11:20 |
Kanagawa |
Sunpian Kawasaki |
Experimental verification of improvement of naturalness by prosody training of Japanese with OJAD Nobuaki Minematsu (UTokyo), Hiroko Hirano, Noriko Nakamura (TUFS), Koji Oikawa (JASLON) SP2015-87 |
To support Japanese prosody instruction, the Online Japanese Accent Dictionary (OJAD) has been developed by using NLP an... |
SP2015-87 pp.13-18 |
SP, IPSJ-MUS |
2014-05-25 11:30 |
Tokyo |
|
Speech waveform generation on subband domain Nobuyuki Nishizawa, Tsuneo Kato (KDDI R&D Labs) SP2014-35 |
To reduce the computational cost for waveform generation in speech synthesis based on analysis-synthesis systems like HM... |
SP2014-35 pp.349-354 |
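This report (and the related EA2013-15 entry below) concerns generating speech waveforms on the subband domain. The basic mechanics of splitting a waveform into half-rate subbands and reconstructing it can be sketched with the simplest two-band filterbank, a Haar-style analysis/synthesis pair; this is a generic textbook illustration under that assumption, not the authors' actual subband coding method, which the truncated abstract does not specify:

```python
def analyze(x):
    """Split a waveform into low- and high-band signals at half the
    sample rate using a two-tap Haar filterbank."""
    s = 2 ** -0.5  # 1/sqrt(2) makes the transform orthonormal
    low = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return low, high

def synthesize(low, high):
    """Reconstruct the full-rate waveform from the two subbands."""
    s = 2 ** -0.5
    x = []
    for l, h in zip(low, high):
        x.append((l + h) * s)
        x.append((l - h) * s)
    return x

signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
low, high = analyze(signal)
rebuilt = synthesize(low, high)  # exact reconstruction for this filter pair
```

The efficiency argument is that each subband runs at half the sample rate, so per-band processing (e.g. excitation generation or filtering) touches half as many samples before the final synthesis step recombines them.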
SP, IPSJ-SLP |
2013-12-20 17:05 |
Tokyo |
|
Application of HMM-Based Speech Synthesis Techniques to a Singing Style Synthesis Job Plugin Makoto Tachibana, Keijiro Saino, Yuji Hisaminato (Yamaha Corp) SP2013-94 |
Recent HMM-based speech synthesis systems have the capability to control speaker/style characteristics by statistically ... |
SP2013-94 pp.123-128 |
SP, EA, SIP |
2013-05-17 09:45 |
Okayama |
|
Fast speech waveform generation using subband coding for speech synthesis Nobuyuki Nishizawa, Tsuneo Kato (KDDI Labs) EA2013-15 SIP2013-15 SP2013-15 |
For fast waveform generation in HMM-based speech synthesizers, a new method using a subband coding method that is also u... |
EA2013-15 SIP2013-15 SP2013-15 pp.85-90 |
SP |
2013-02-28 15:00 |
Aichi |
Daido University |
[Poster Presentation]
An emotional speech synthesis method considering a difference of sentence-final expression Takaaki Yuki, Motoyuki Suzuki (Osaka Inst. Tech.) SP2012-119 |
Most conventional speech synthesizers use a prosodic pattern for representing an emotion. However, it should depend o... |
SP2012-119 pp.21-22 |
WIT |
2013-02-02 15:25 |
Aichi |
Nagoya Institute of Technology |
Eye motion input based speech synthesis interface for communication aids Fuming Fang, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa (Chiba Univ), Sadaoki Furui (Tokyo Tech), Toshimitsu Musha (BFL) WIT2012-38 |
In order to provide an efficient means of communication for those who cannot move muscles of their whole body except eye... |
WIT2012-38 pp.29-34 |
WIT, SP |
2009-10-29 16:30 |
Aomori |
ASPAM |
Spoken Dialog System for Learning Braille Kana Shibahara, Masahiro Araki (Kyoto Inst. of Tech.) SP2009-54 WIT2009-60 |
Learning Braille for a visually impaired person requires help from an accompanying person, because they cannot learn corresponden... |
SP2009-54 WIT2009-60 pp.31-36 |