Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
EA, SIP, SP, IPSJ-SLP |
2025-03-03 14:36 |
Okinawa |
|
[No paper] Impression Caption Dataset for Environmental Sounds Yuki Okamoto (UTokyo), Ryotaro Nagase (Ritsumeikan Univ.), Keisuke Imoto (Doshisha Univ.), Junichi Yamagishi (NII), Yuki Saito (UTokyo), Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) |
|
|
SP, NLC, IPSJ-SLP, IPSJ-NL |
2024-12-12 14:50 |
Aichi |
Nagoya Univ. (Primary: On-site, Secondary: Online) |
Study on Data Creation and Model Construction for Speech Emotion Captioning Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) NLC2024-21 SP2024-12 |
In previous studies on speech emotion recognition, the results of the prediction are represented by categorical or dimen... [more] |
NLC2024-21 SP2024-12 pp.12-17 |
EA |
2024-05-22 14:55 |
Online |
Online |
Environmental sound synthesis and creation of dataset using vocal imitations Yuki Okamoto (Ritsumeikan Univ.), Keisuke Imoto (Doshisha Univ.), Shinnosuke Takamichi (The Univ. of Tokyo/Keio Univ.), Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) EA2024-5 |
One way to represent the characteristics of environmental sounds is to imitate the environmental sounds by human voice c... [more] |
EA2024-5 p.22 |
SP, IPSJ-MUS, IPSJ-SLP |
2023-06-23 13:50 |
Tokyo |
(Primary: On-site, Secondary: Online) |
Speech Emotion Recognition based on Emotional Label Sequence Estimation Considering Phoneme Class Attribute Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) SP2023-9 |
Recently, many researchers have tackled speech emotion recognition (SER), which predicts the emotion conveyed by speech. In ... [more] |
SP2023-9 pp.42-47 |
SP, IPSJ-MUS, IPSJ-SLP |
2023-06-24 13:50 |
Tokyo |
(Primary: On-site, Secondary: Online) |
Environmental Sound Separation Considering Separation Distortion and Remixing Error Kanta Shimonishi, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) SP2023-24 |
This report aims to improve the performance of environmental sound separation by considering not only the separated soun... [more] |
SP2023-24 pp.119-124 |
EA, SIP, SP, IPSJ-SLP |
2022-03-02 15:35 |
Okinawa |
(Primary: On-site, Secondary: Online) |
[Poster Presentation] A study of shout detection for clipped speech Taito Ishida, Kazuhiro Matsuda, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) EA2021-97 SIP2021-124 SP2021-82 |
Recently, several audio surveillance systems using shouted speech have been proposed for safety in daily life. Although... [more] |
EA2021-97 SIP2021-124 SP2021-82 pp.207-212 |
NLC, IPSJ-NL, SP, IPSJ-SLP |
2020-12-02 13:50 |
Online |
Online |
Multi-Modal Emotion Recognition by Integrating of Acoustic and Linguistic Features Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) NLC2020-14 SP2020-17 |
In recent years, the advanced technique of deep learning has improved the performance of Speech Emotion Recognition as ... [more] |
NLC2020-14 SP2020-17 pp.7-12 |
SP, EA, SIP |
2020-03-02 13:00 |
Okinawa |
Okinawa Industry Support Center (Cancelled but technical report was issued) |
Learning of Classification Models using Emotion-specific Soft Labels for Speech Emotion Recognition Mayuko Ozawa, Keisuke Imoto, Ryosuke Yamanishi, Yoichi Yamashita (Ritsumeikan Univ.) EA2019-107 SIP2019-109 SP2019-56 |
|
EA2019-107 SIP2019-109 SP2019-56 pp.35-40 |
SP, EA, SIP |
2020-03-03 09:00 |
Okinawa |
Okinawa Industry Support Center (Cancelled but technical report was issued) |
Evaluation of vocal personality and expression for speech synthesized by non-parallel voice conversion with narrative speech Ryotaro Nagase, Keisuke Imoto, Ryosuke Yamanishi, Yoichi Yamashita (Ritsumeikan Univ.) EA2019-138 SIP2019-140 SP2019-87 |
In voice conversion technology, reproduction of emotion, intonation, and pauses is one of the research issues. Howe... [more] |
EA2019-138 SIP2019-140 SP2019-87 pp.213-218 |
SP |
2018-08-27 11:35 |
Kyoto |
Kyoto Univ. |
[Poster Presentation] A Study on Representation of Speaker Information for DNN Speech Synthesis Lin Yuhan, Keisuke Imoto, Masahiro Niitsuma, Ryosuke Yamanishi, Yoichi Yamashita (Ritsumeikan Univ.) SP2018-25 |
Recent studies have shown that DNN speech synthesis can generate more natural synthesized speech than HMM-based speech synthe... [more] |
SP2018-25 pp.15-18 |
SP |
2017-08-30 11:00 |
Kyoto |
Kyoto Univ. |
[Poster Presentation] Emotion Recognition in Speech Using Deep Neural Network Li ShiChuan, Tomoki Ishikawa, Masahiro Niitsuma, Keisuke Imoto, Yoichi Yamashita (Ritsumeikan Univ.) SP2017-24 |
Speech conveys not only linguistic information but also paralinguistic and non-linguistic information such as emotions, a... [more] |
SP2017-24 pp.25-28 |
SP, SIP, EA |
2017-03-02 15:45 |
Okinawa |
Okinawa Industry Support Center |
Speech enhancement with phase reconstruction using phase distortion in harmonic frequency Yukoh Wakabayashi, Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.) EA2016-145 SIP2016-200 SP2016-140 |
Conventionally, speech enhancement in noisy environments has been widely studied by modifying only the amplitude spectrum of... [more] |
EA2016-145 SIP2016-200 SP2016-140 pp.351-356 |
EA, ASJ-H |
2016-08-09 14:00 |
Miyagi |
Tohoku Gakuin Univ., Tagajo Campus |
Phase reconstruction based on time-frequency domain harmonic structure for speech enhancement Yukoh Wakabayashi, Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.) EA2016-22 |
Conventional speech enhancement in noisy environments is widely studied by modifying only the amplitude spectrum of a spee... [more] |
EA2016-22 pp.13-18 |
EA, SP, SIP |
2016-03-29 09:00 |
Oita |
Beppu International Convention Center B-ConPlaza |
[Poster Presentation] Speech Enhancement Based on Phase Reconstruction Using Fundamental Frequency and a Priori SNR Yukoh Wakabayashi, Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.) EA2015-122 SIP2015-171 SP2015-150 |
Conventionally, speech enhancement in noisy environments has been widely studied by modifying only the amplitude spectrum of... [more] |
EA2015-122 SIP2015-171 SP2015-150 pp.311-316 |
HCGSYMPO (2nd) |
2015-12-16 - 2015-12-18 |
Toyama |
Toyama International Conference Center |
Toward Evolving Comics to be With Sounds Ryosuke Yamanishi, Junichi Fukumoto, Yoichi Yamashita (Ritsumeikan Univ.) |
Digital comics are available on digital devices, e.g., smartphones and tablets; thus, it is easy to play sounds with the ... [more] |
|
SIP, EA, SP |
2015-03-02 11:40 |
Okinawa |
|
Optimization of impulse responses for model training in reverberant speech recognition Takahiro Fukumori, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.) EA2014-78 SIP2014-119 SP2014-141 |
Reverberant speech degrades the performance of distant-talking speech recognition. As one of app... [more] |
EA2014-78 SIP2014-119 SP2014-141 pp.37-42 |
SIP, EA, SP |
2015-03-03 10:45 |
Okinawa |
|
[Poster Presentation] Design of 3D moving sound image with spherical parametric loudspeaker Daisuke Ikefuji, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.) EA2014-116 SIP2014-157 SP2014-179 |
We have previously proposed a system with parametric loudspeakers for three-dimensional sound field reproduction. It ... [more] |
EA2014-116 SIP2014-157 SP2014-179 pp.243-248 |
SIP, EA, SP |
2015-03-03 10:45 |
Okinawa |
|
[Poster Presentation] Multimodal source number estimation in multi-party conversations Yukoh Wakabayashi, Masato Nakayama, Takanobu Nishiura, Yoichi Yamashita (Ritsumeikan Univ.) EA2014-120 SIP2014-161 SP2014-183 |
Source number plays an important role in blind source separation (BSS) and sound source localization (SSL) based on subs... [more] |
EA2014-120 SIP2014-161 SP2014-183 pp.267-272 |