IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
 Results 1 - 20 of 50  /  [Next]  
Committee Date Time Place Paper Title / Authors Abstract Paper #
CAS, CS 2024-03-14
15:55
Okinawa   Residual Noise Removal of Sound Source Separation Signal by Spectral Replacement
Taiga Saito, Kenji Suyama (Tokyo Denki Univ.) CAS2023-122 CS2023-115
Although a sound source separation method based on a multiplication of multiple weighted sum circuits has high suppression... [more] CAS2023-122 CS2023-115
pp.64-69
SIP, SP, EA, IPSJ-SLP [detail] 2024-03-01
16:35
Okinawa
(Primary: On-site, Secondary: Online)
Evaluations of Multi-channel Blind Source Separation for Speech Recognition in Car Environments
Yutsuki Takeuchi, Natsuki Ueno, Nobutaka Ono (Tokyo Metropolitan Univ.), Takashi Takazawa, Shuhei Shimanoe, Tomoki Tanemura (MIRISE Technologies) EA2023-127 SIP2023-174 SP2023-109
In car environments, speech recognition is difficult due to various types of noise. For this issue, speech enhancement b... [more] EA2023-127 SIP2023-174 SP2023-109
pp.388-393
SP, IPSJ-MUS, IPSJ-SLP [detail] 2022-06-18
15:00
Online Online Unsupervised Training of Sequential Neural Beamformer Using Blindly-separated and Non-separated Signals
Kohei Saijo, Tetsuji Ogawa (Waseda Univ.) SP2022-25
We present an unsupervised training method of the sequential neural beamformer (Seq-NBF) using the separated signals fro... [more] SP2022-25
pp.110-115
EA, SIP, SP, IPSJ-SLP [detail] 2022-03-01
14:45
Okinawa
(Primary: On-site, Secondary: Online)
Target speaker extraction based on conditional variational autoencoder and directional information in underdetermined condition
Rui Wang, Li Li, Tomoki Toda (Nagoya Univ.) EA2021-76 SIP2021-103 SP2021-61
This paper deals with a dual-channel target speaker extraction problem in underdetermined conditions. A blind source sep... [more] EA2021-76 SIP2021-103 SP2021-61
pp.76-81
PRMU, IPSJ-CVIM 2021-03-05
09:45
Online Online Improved Speech Separation Performance from Monaural Mixed Speech Based on Deep Embedding Network
Shaoxiang Dang, Tetsuya Matsumoto, Hiroaki Kudo (Nagoya Univ.), Yoshinori Takeuchi (Daido Univ.) PRMU2020-85
Speech separation refers to the separation of utterances in which multiple people are speaking simultaneously. The idea ... [more] PRMU2020-85
pp.91-96
EA, US, SP, SIP, IPSJ-SLP [detail] 2021-03-03
13:05
Online Online [Invited Talk] *
Masahito Togami (LINE) EA2020-64 SIP2020-95 SP2020-29
Recently, deep learning based speech source separation has been evolved rapidly. A neural network (NN) is usually learne... [more] EA2020-64 SIP2020-95 SP2020-29
pp.27-32
EA, US, SP, SIP, IPSJ-SLP [detail] 2021-03-03
14:05
Online Online [Poster Presentation] Noise-robust time-domain speech separation with basis signals for noise
Kohei Ozamoto (Tokyo Tech), Koji Iwano (TCU), Kuniaki Uto, Koichi Shinoda (Tokyo Tech) EA2020-70 SIP2020-101 SP2020-35
Recently, speech separation using deep learning has been extensively studied. TasNet, a time-domain method that directly... [more] EA2020-70 SIP2020-101 SP2020-35
pp.63-67
EA, SIP, SP 2019-03-14
10:25
Nagasaki i+Land nagasaki (Nagasaki-shi) Blind speech separation based on approximate joint diagonalization utilizing correlation between neighboring frequency bins
Taiki Asamizu, Toshihiro Furukawa (TUS) EA2018-100 SIP2018-106 SP2018-62
In this paper, we propose a new method that extends the approximate joint diagonalization blind speech separation (BSS).... [more] EA2018-100 SIP2018-106 SP2018-62
pp.7-12
EA, SIP, SP 2019-03-15
13:30
Nagasaki i+Land nagasaki (Nagasaki-shi) [Poster Presentation] Design and Evaluation of Ladder Denoising Autoencoder for Auditory Speech Feature Extraction of Overlapped Speech Separation
Hiroshi Sekiguchi, Yoshiaki Narusue, Hiroyuki Morikawa (Univ. of Tokyo) EA2018-155 SIP2018-161 SP2018-117
Primates and mammals distinguish overlapped speech sounds from one another by recognizing a single sound source whethe... [more] EA2018-155 SIP2018-161 SP2018-117
pp.329-333
EA, ASJ-H 2018-08-23
12:55
Miyagi Tohoku Gakuin Univ. Self-produced speech enhancement and suppression method with wearable air- and body-conductive microphones
Moe Takada, Shogo Seki, Tomoki Toda (Nagoya Univ.) EA2018-29
This paper presents a self-produced speech enhancement and suppression method for multichannel signals recorded with bot... [more] EA2018-29
pp.7-12
SP, IPSJ-SLP
(Joint)
2018-07-26
16:15
Shizuoka Sago-Royal-Hotel (Hamamatsu) Ladder Network Driven from Auditory Computational Model for Multi-talker Speech Separation
Hiroshi Sekiguchi, Yoshiaki Narusue, Hiroyuki Morikawa (Univ. of Tokyo) SP2018-18
This paper introduces ladder network implementation induced by auditory computational model for multi-talker speech sepa... [more] SP2018-18
pp.9-13
SIP, EA, SP, MI
(Joint) [detail]
2018-03-19
09:25
Okinawa   Stable Estimation Method of Spatial Correlation Matrices for Multi-channel NMF
Yuuki Tachioka (Denso IT Lab) EA2017-103 SIP2017-112 SP2017-86
Multi-channel non-negative matrix factorization (MNMF) achieves a high sound source separation performance but its initi... [more] EA2017-103 SIP2017-112 SP2017-86
pp.7-12
EA 2018-02-16
13:10
Hiroshima Pref. Univ. Hiroshima The effect of increasing the number of channels with multi-channel non-negative matrix factorization for noisy speech recognition
Takanobu Uramoto (Oita Univ.), Youhei Okato, Toshiyuki Hanazawa (Mitsubishi Electric), Iori Miura, Shingo Uenohara, Ken'ich Furuya (Oita Univ.) EA2017-99
Nonnegative Matrix Factorization (NMF) factorizes a non-negative matrix into two non-negative matrices. In the field of ... [more] EA2017-99
pp.33-38
NLC, IPSJ-NL, SP, IPSJ-SLP
(Joint) [detail]
2017-12-22
11:20
Tokyo Waseda Univ. Green Computing Systems Research Organization A Sound Source Separation Method for Multiple Person Speech Recognition using Wavelet Analysis Based on Sound Source Position Obtained by Depth Sensor
Nobuhiro Uehara, Kazuo Ikeshiro, Hiroki Imamura (Soka Univ.) SP2017-63
Recently, voice information guidance systems in operation at city halls serve only one person at a time. To realize operat... [more] SP2017-63
pp.79-83
SIS 2017-12-14
10:50
Tottori Tottori Prefectural Center for Lifelong Learning Harmonic Structure Detection in Speech Separation Using Modified DFT Pair Based on ASA
Motohiro Ichikawa, Isao Nakanishi (Tottori Univ.) SIS2017-34
Humans have the ability of the cocktail party effect, being able to recognize the target voice from the various conversation... [more] SIS2017-34
pp.5-9
WIT, SP 2017-10-19
13:20
Fukuoka Tobata Library of Kyutech (Kitakyushu) Speech enhancement of utterance while playing with werewolf game "JINRO" based on NMF
Shunsuke Kawano, Toru Takahashi (OSU) SP2017-35 WIT2017-31
We describe speech enhancement for natural, multi-speaker dialogue. To record natural, multi-speaker dialogu... [more] SP2017-35 WIT2017-31
pp.7-12
SP 2017-08-30
11:00
Kyoto Kyoto Univ. [Poster Presentation] Semi-blind speech separation and enhancement using recurrent neural network
Masaya Wake, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara (Kyoto Univ.) SP2017-22
This paper describes a semi-blind speech enhancement method using a neural network. In a human-robot speech interaction... [more]
SP2017-22
pp.13-18
PRMU, SP 2017-06-22
14:45
Miyagi   Postfiltering of STFT Spectrograms Based on Generative Adversarial Networks
Takuhiro Kaneko (NTT), Shinji Takaki (NII), Hirokazu Kameoka (NTT), Junichi Yamagishi (NII) PRMU2017-28 SP2017-4
This paper presents postfiltering of short-term Fourier transform (STFT) spectrograms based on Generative Adversarial Ne... [more] PRMU2017-28 SP2017-4
pp.17-22
CAS, ICTSSL 2017-01-26
09:00
Tokyo Kikai-Shinko-Kaikan Bldg. Target Sound Enhancement by Post Processing of Sound Source Separation
Naoki Shinohara, Kenji Suyama (Tokyo Denki Univ.) CAS2016-77 ICTSSL2016-31
Although several methods have been proposed for sound source separation, a suppression ability of interference sound is ... [more] CAS2016-77 ICTSSL2016-31
pp.1-6
EA, EMM 2015-11-12
17:00
Kumamoto Kumamoto Univ. Noise suppression method for body-conducted soft speech based on external noise monitoring
Yusuke Tajiri (NAIST), Tomoki Toda (Nagoya Univ.), Satoshi Nakamura (NAIST) EA2015-31 EMM2015-52
As one of the silent speech interfaces, nonaudible murmur (NAM) microphone has been developed for detecting an extremely... [more] EA2015-31 EMM2015-52
pp.41-46
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan