IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 1 - 20 of 39
Committee / Date / Time / Place / Paper Title / Authors / Abstract / Paper #
CAS, CS 2024-03-14
15:55
Okinawa   Residual Noise Removal in Sound Source Separation Signal by Spectral Replacement
Taiga Saito, Kenji Suyama (Tokyo Denki Univ.) CAS2023-122 CS2023-115
Although a sound source separation method based on a multiplication of multiple weighted sum circuits has high suppression... [more]
pp.64-69
SIP, SP, EA, IPSJ-SLP [detail] 2024-03-01
16:35
Okinawa
(Primary: On-site, Secondary: Online)
Evaluations of Multi-channel Blind Source Separation for Speech Recognition in Car Environments
Yutsuki Takeuchi, Natsuki Ueno, Nobutaka Ono (Tokyo Metropolitan Univ.), Takashi Takazawa, Shuhei Shimanoe, Tomoki Tanemura (MIRISE Technologies) EA2023-127 SIP2023-174 SP2023-109
In car environments, speech recognition is difficult due to various types of noise. To address this issue, speech enhancement b... [more]
pp.388-393
SP, IPSJ-MUS, IPSJ-SLP [detail] 2022-06-18
15:00
Online Online Unsupervised Training of Sequential Neural Beamformer Using Blindly-separated and Non-separated Signals
Kohei Saijo, Tetsuji Ogawa (Waseda Univ.) SP2022-25
We present an unsupervised training method of the sequential neural beamformer (Seq-NBF) using the separated signals fro... [more]
pp.110-115
EA, SIP, SP, IPSJ-SLP [detail] 2022-03-01
14:45
Okinawa
(Primary: On-site, Secondary: Online)
Target speaker extraction based on conditional variational autoencoder and directional information in underdetermined condition
Rui Wang, Li Li, Tomoki Toda (Nagoya Univ.) EA2021-76 SIP2021-103 SP2021-61
This paper deals with a dual-channel target speaker extraction problem in underdetermined conditions. A blind source sep... [more]
pp.76-81
EA, US, SP, SIP, IPSJ-SLP [detail] 2021-03-03
13:05
Online Online [Invited Talk] *
Masahito Togami (LINE) EA2020-64 SIP2020-95 SP2020-29
Recently, deep learning based speech source separation has evolved rapidly. A neural network (NN) is usually learne... [more]
pp.27-32
EA, SIP, SP 2019-03-14
10:25
Nagasaki i+Land nagasaki (Nagasaki-shi) Blind speech separation based on approximate joint diagonalization utilizing correlation between neighboring frequency bins
Taiki Asamizu, Toshihiro Furukawa (TUS) EA2018-100 SIP2018-106 SP2018-62
In this paper, we propose a new method that extends the approximate joint diagonalization blind speech separation (BSS).... [more]
pp.7-12
EA, SIP, SP 2019-03-15
13:30
Nagasaki i+Land nagasaki (Nagasaki-shi) [Poster Presentation] Design and Evaluation of Ladder Denoising Autoencoder for Auditory Speech Feature Extraction of Overlapped Speech Separation
Hiroshi Sekiguchi, Yoshiaki Narusue, Hiroyuki Morikawa (Univ. of Tokyo) EA2018-155 SIP2018-161 SP2018-117
Primates and mammals distinguish overlapped speech sounds from one another by recognizing a single sound source whethe... [more]
pp.329-333
EA, ASJ-H 2018-08-23
12:55
Miyagi Tohoku Gakuin Univ. Self-produced speech enhancement and suppression method with wearable air- and body-conductive microphones
Moe Takada, Shogo Seki, Tomoki Toda (Nagoya Univ.) EA2018-29
This paper presents a self-produced speech enhancement and suppression method for multichannel signals recorded with bot... [more]
pp.7-12
SIP, EA, SP, MI (Joint) [detail]
2018-03-19
09:25
Okinawa   Stable Estimation Method of Spatial Correlation Matrices for Multi-channel NMF
Yuuki Tachioka (Denso IT Lab) EA2017-103 SIP2017-112 SP2017-86
Multi-channel non-negative matrix factorization (MNMF) achieves a high sound source separation performance but its initi... [more]
pp.7-12
EA 2018-02-16
13:10
Hiroshima Pref. Univ. Hiroshima The effect of increasing the number of channels with multi-channel non-negative matrix factorization for noisy speech recognition
Takanobu Uramoto (Oita Univ.), Youhei Okato, Toshiyuki Hanazawa (Mitsubishi Electric), Iori Miura, Shingo Uenohara, Ken'ichi Furuya (Oita Univ.) EA2017-99
Nonnegative Matrix Factorization (NMF) factorizes a non-negative matrix into two non-negative matrices. In the field of ... [more]
pp.33-38
NLC, IPSJ-NL, SP, IPSJ-SLP (Joint) [detail]
2017-12-22
11:20
Tokyo Waseda Univ. Green Computing Systems Research Organization A Sound Source Separation Method for Multiple Person Speech Recognition using Wavelet Analysis Based on Sound Source Position Obtained by Depth Sensor
Nobuhiro Uehara, Kazuo Ikeshiro, Hiroki Imamura (Soka Univ.) SP2017-63
Recently, voice information guidance systems in operation at city halls are used by only one person. To realize operat... [more]
pp.79-83
WIT, SP 2017-10-19
13:20
Fukuoka Tobata Library of Kyutech (Kitakyushu) Speech enhancement of utterance while playing with werewolf game "JINRO" based on NMF
Shunsuke Kawano, Toru Takahashi (OSU) SP2017-35 WIT2017-31
We describe speech enhancement for natural, multi-speaker dialogue. To record natural, multi-speaker dialogue... [more]
pp.7-12
SP 2017-08-30
11:00
Kyoto Kyoto Univ. [Poster Presentation] Semi-blind speech separation and enhancement using recurrent neural network
Masaya Wake, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara (Kyoto Univ.) SP2017-22
This paper describes a semi-blind speech enhancement method using a neural network. In a human-robot speech interaction... [more]
pp.13-18
CAS, ICTSSL 2017-01-26
09:00
Tokyo Kikai-Shinko-Kaikan Bldg. Target Sound Enhancement by Post Processing of Sound Source Separation
Naoki Shinohara, Kenji Suyama (Tokyo Denki Univ.) CAS2016-77 ICTSSL2016-31
Although several methods have been proposed for sound source separation, a suppression ability of interference sound is ... [more]
pp.1-6
EA, EMM 2015-11-12
17:00
Kumamoto Kumamoto Univ. Noise suppression method for body-conducted soft speech based on external noise monitoring
Yusuke Tajiri (NAIST), Tomoki Toda (Nagoya Univ.), Satoshi Nakamura (NAIST) EA2015-31 EMM2015-52
As one of the silent speech interfaces, a nonaudible murmur (NAM) microphone has been developed for detecting an extremely... [more]
pp.41-46
EA 2014-10-24
14:20
Tokyo Central Research Laboratory, Hitachi, Ltd. [Invited Talk] Speech enhancement techniques in multi-speaker spontaneous speech recognition for conversation scene analysis
Shoko Araki, Takaaki Hori, Tomohiro Nakatani (NTT) EA2014-25
This paper illustrates speech enhancement techniques for multi-speaker distant-talk speech recognition, where a conversa... [more]
pp.9-14
SIS 2013-12-12
13:00
Tottori Torigin Bunka Kaikan (Tottori) [Tutorial Lecture] Enhancement and Separation for Speech Signals
Arata Kawamura (Osaka Univ.) SIS2013-35
In this paper, we discuss three main topics of speech processing technologies. First, we review and discuss a... [more]
pp.47-52
SP, IPSJ-SLP 2012-12-21
14:40
Tokyo TITECH(Ookayama) Reduction of cross spectrum for feature-domain sound source separation
Atsushi Ando (Nagoya Univ.), Kenta Niwa (NTT), Norihide Kitaoka, Kazuya Takeda (Nagoya Univ.) SP2012-93
Speech source separation is utilized for recognition of simultaneous speech. Conventional source separation methods, esp... [more]
pp.107-112
EA, EMM 2012-11-16
12:10
Oita OITA Univ. Auxiliary-function-based independent vector analysis with non-speech frame information for speech enhancement
Masataka Suzuki (Univ. of Tokyo), Nobutaka Ono (NII), Toru Taniguchi, Masaru Sakai, Akinori Kawamura (Toshiba Corp.), Miquel Espi, Shigeki Sagayama (Univ. of Tokyo) EA2012-87 EMM2012-69
In this study, we discuss a technique to enhance the speech of interest in a noisy environment using microphone a... [more]
pp.35-38
PRMU, SP 2012-02-10
15:20
Miyagi   Multi-band speech recognition using confidence of blind source separation
Atsushi Ando, Hiromasa Ohashi (Nagoya Univ.), Sunao Hara (NAIST), Norihide Kitaoka, Kazuya Takeda (Nagoya Univ.) PRMU2011-234 SP2011-149
One of the main applications of Blind Source Separation (BSS) is to improve the performance of Automatic Speech Recognition ... [more]
pp.219-224
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)


The Institute of Electronics, Information and Communication Engineers (IEICE), Japan