IEICE Technical Committee Submission System

All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
 Results 1 - 19 of 19
Committee Date Time Place Paper Title / Authors Abstract Paper #
SP, IPSJ-MUS, IPSJ-SLP [detail] 2023-06-23
13:50
Tokyo
(Primary: On-site, Secondary: Online)
Speech Emotion Recognition based on Emotional Label Sequence Estimation Considering Phoneme Class Attribute
Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) SP2023-9
Recently, many researchers have tackled speech emotion recognition (SER), which predicts emotion conveyed by speech. In ... [more] SP2023-9
pp.42-47
SP, IPSJ-MUS, IPSJ-SLP [detail] 2023-06-23
13:50
Tokyo
(Primary: On-site, Secondary: Online)
[Poster Presentation] Generation of colored subtitle images based on emotional information of speech utterances
Fumiya Nakamura (Kobe Univ.), Ryo Aihara (Mitsubishi Electric), Ryoichi Takashima, Tetsuya Takiguchi (Kobe Univ.), Yusuke Itani (Mitsubishi Electric) SP2023-11
Conventional automatic subtitle generation systems based on speech recognition do not take into account paralinguistic i... [more] SP2023-11
pp.54-59
SP, IPSJ-MUS, IPSJ-SLP [detail] 2023-06-24
13:50
Tokyo
(Primary: On-site, Secondary: Online)
Evaluation of multi-speaker text-to-speech synthesis using a corpus for speech recognition with x-vectors for various speech styles
Koki Hida (Wakayama Univ/NICT), Takuma Okamoto (NICT), Ryuichi Nisimura (Wakayama Univ), Yamato Ohtani (NICT), Tomoki Toda (Nagoya Univ/NICT), Hisashi Kawai (NICT) SP2023-25
We have implemented multi-speaker end-to-end text-to-speech synthesis based on JETS using x-vectors as speaker embedding... [more] SP2023-25
pp.125-130
NLC, IPSJ-NL, SP, IPSJ-SLP [detail] 2021-12-02
15:20
Online
Improvement of multilingual speech emotion recognition by normalizing features using CRNN
Jinhai Qi, Motoyuki Suzuki (OIT) NLC2021-22 SP2021-43
In this research, a new multilingual emotion recognition method by normalizing features using CRNN has been proposed. We... [more] NLC2021-22 SP2021-43
pp.22-26
NLC, IPSJ-NL, SP, IPSJ-SLP [detail] 2020-12-02
13:50
Online
Multi-Modal Emotion Recognition by Integrating of Acoustic and Linguistic Features
Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita (Ritsumeikan Univ.) NLC2020-14 SP2020-17
In recent years, the advanced technique of deep learning has improved the performance of Speech Emotion Recognition as ... [more] NLC2020-14 SP2020-17
pp.7-12
HCGSYMPO
(2nd)
2019-12-11
- 2019-12-13
Hiroshima Hiroshima-ken Joho Plaza (Hiroshima)
Crosslingual Emotion Recognition using English and Japanese Speech Data
Yuta Nirasawa, Atom Scotto, Ryota Sakuma, Yuki Hujita, Keiich Zempo (Tsukuba Univ.)
Since research in Speech Emotion Recognition (SER) is performed mostly with English data, applying these models to Japan... [more]
NLC, IPSJ-NL, SP, IPSJ-SLP
(Joint) [detail]
2019-12-06
16:25
Tokyo NHK Science & Technology Research Labs.
An evaluation of representation learning using phoneme posteriorgrams and data augmentation in speech emotion recognition
Shintaro Okada (Nagoya Univ.), Atsushi Ando (Nagoya Univ./NTT), Tomoki Toda (Nagoya Univ.) SP2019-43
This paper presents a new speech emotion recognition method based on representation learning and data augmentation. To ... [more]
SP2019-43
pp.91-96
AI 2018-12-07
15:55
Fukuoka  
Toyoaki Kuwahara, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga (UEC) AI2018-30
The emotion estimation by speech makes it possible to estimate with higher precision with the development of deep learni... [more] AI2018-30
pp.25-29
SP, IPSJ-SLP
(Joint)
2017-07-27
14:30
Miyagi Akiu Resort Hotel Crescent
[Invited Talk] Synthesis, Recognition and Conversion of Various Speech Using Deep Learning and Their Applications
Takashi Nose (Tohoku Univ.) SP2017-16
This paper focuses on synthesis, recognition and conversion of various speech in the speech processing using deep learni... [more] SP2017-16
pp.3-8
SP 2015-10-15
17:10
Hyogo Kobe Univ.
Design and evaluation of prosodically balanced emotion-dependent sentence set based on entropy
Emika Takeishi, Takashi Nose, Taketo Kase, Akinori Ito (Tohoku Univ.) SP2015-65
We designed an emotional speech database that can be used for emotion recognition as well as recognition and synthesis of... [more] SP2015-65
pp.33-38
NLC, IPSJ-NL, SP, IPSJ-SLP, JSAI-SLUD
(Joint) [detail]
2014-12-15
09:30
Kanagawa Tokyo Institute of Technology (Suzukakedai Campus)
Recognition and Analysis of Emotion in Indonesian Conversational Speech
Nurul Lubis, Sakriani Sakti, Graham Neubig, Tomoki Toda (NAIST), Dessi Lestari, Ayu Purwarianti (ITB), Satoshi Nakamura (NAIST) SP2014-106
The importance of incorporating emotional aspect in human computer interaction continues to arise. Unfortunately, explor... [more] SP2014-106
pp.1-6
NLC 2014-06-14
14:45
Fukuoka Kyushu Institute of Technology
Automatically-Generated Dictionary-based Emotion Recognition from Tweet Speech
Eri Yanase, Hiromitsu Nishizaki, Yoshihiro Sekiguchi (Univ. of Yamanashi) NLC2014-4
We make a study of utilization of linguistic information for classification of emotion in spoken utterances. In Japanese... [more] NLC2014-4
pp.17-22
SP 2014-02-28
12:40
Tokushima The University of Tokushima
[Poster Presentation] Evaluation of the generation of text balloon using the acoustic features
Sho Matsumiya, Sakriani Sakti, Graham Neubig, Tomoki Toda, Satoshi Nakamura (NAIST) SP2013-115
Expansion of technology to automatically generate subtitles has advanced due to development of speech recognition techno... [more] SP2013-115
pp.29-30
SP 2013-03-01
11:00
Aichi Daido University
Automatic Speech Emotion Recognition Based on Dimensional Approach
Reda Elbarougy, Masato Akagi (JAIST) SP2012-127
This paper proposes a three-layer model for estimating the expressed emotions in a speech signal based on a dimensional ... [more] SP2012-127
pp.41-46
HCGSYMPO
(2nd)
2012-12-10
- 2012-12-12
Kumamoto Kumamoto-Shintoshin-plaza
Impression Classification Using Speech Segments in the Utterance
Masahiro Uchida, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa (Chiba Univ.)
Impression that a speaker gives to a receiver plays an important role in human speech conversation. However, the... [more]
SP 2012-06-14
13:30
Kanagawa NTT Atsugi R&D Center
Comparison of Methods for Emotion Dimensions Estimation in Speech Using a Three-Layered Model
Reda Elbarougy, Masato Akagi (JAIST) SP2012-36
This paper proposes a three-layer model for estimating the expressed emotions in a speech signal based on a dimensional... [more]
SP2012-36
pp.19-24
SP, NLC 2008-12-09
14:10
Tokyo Waseda Univ.
Acoustic Model Training Technique for Speech Recognition using Style Estimation with Multiple-Regression HMM
Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi (Tokyo Tech) NLC2008-30 SP2008-85
We propose a technique for emotional speech recognition based on multiple-regression HMM (MRHMM). To achieve emotional s... [more] NLC2008-30 SP2008-85
pp.37-42
SP, NLC 2008-12-10
16:10
Tokyo Waseda Univ.
Driver's irritation detection using speech recognition results
Lucas Malta, Chiyomi Miyajima, Akira Ozaki, Norihide Kitaoka, Kazuya Takeda (Nagoya Univ.) NLC2008-65 SP2008-120
In this work we present our efforts towards the multi-modal estimation of a driver's affective state under naturalistic ... [more] NLC2008-65 SP2008-120
pp.245-248
SP 2008-07-17
- 2008-07-19
Iwate Iwate Prefectural Univ.
An On-line Acoustic Model Adaptation Technique Based on Style Estimation
Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi (Tokyo Tech) SP2008-48
We propose a model adaptation technique for emotional speech recognition based on multiple-regression HMM(MR-HMM). We us... [more] SP2008-48
pp.31-36
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan