IEICE Technical Report

Online edition: ISSN 2432-6380

Volume 121, Number 66

Speech

Workshop Date : 2021-06-18 - 2021-06-19 / Issue Date : 2021-06-11



Table of contents

SP2021-1
Tools and practice for supporting recommended protocol for acoustic recording of speech data for high usability -- Application of a cascaded all-pass filters with randomized center frequencies and phase polarities --
Hideki Kawahara (Wakayama Univ.), Kohei Yatabe (Waseda Univ.), Ken-Ichi Sakakibara (Health Sci. Univ. Hokkaido), Mitsunori Mizumachi (Kyushu Inst. Tech.), Masanori Morise (Meiji Univ.), Hideki Banno (Meijo Univ.), Toshio Irino (Wakayama Univ.)
pp. 1 - 6

SP2021-2
F0 estimation of speech based on l2-norm regularized TV-CAR analysis
Keiichi Funaki (Univ. of the Ryukyus)
pp. 7 - 12

SP2021-3
A Beginner's Introduction to Sound Programming for Digital Stomp Boxes
Naofumi Aoki (Hokkaido Univ.)
pp. 13 - 18

SP2021-4
Protection method with audio processing against Audio Adversarial Example
Taisei Yamamoto, Yuya Tarutani, Yukinobu Fukushima, Tokumi Yokohira (Okayama Univ.)
pp. 19 - 24

SP2021-5
Speech Intelligibility Experiments Using Crowdsourcing -- from designing Web pages to data screening --
Ayako Yamamoto, Toshio Irino (Wakayama Univ.), Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, Tomohiro Nakatani (NTT)
pp. 25 - 30

SP2021-6
[Poster Presentation] Scream detection based on deep learning using time-sequential spectral and cepstral features
Takahiro Fukumori (Ritsumeikan Univ.)
pp. 31 - 36

SP2021-7
[Invited Talk] Spoken Dialogue System for Android ERICA -- A Multimodal Turing Test Challenge --
Koji Inoue (Kyoto Univ.)
p. 37

SP2021-8
[Invited Talk] Toward a Unification of Various Speech Processing Tasks Based on End-to-End Neural Networks
Shinji Watanabe (CMU)
p. 38

SP2021-9
Creation of Japanese Phoneme-Balanced Sentences for Speech Synthesis
Yuko Takai, Naofumi Aoki, Yoshinori Dobashi (Hokkaido Univ.)
pp. 39 - 41

SP2021-10
Verifying the Method to Generate Stage Data for Rhythm Game Using Machine Learning
Atsuhito Udo, Naofumi Aoki, Yoshinori Dobashi (Hokkaido Univ.)
pp. 42 - 45

SP2021-11
Low Loss Machine Learning for Digital Modeling of Distortion Stomp Boxes
Yuto Matsunaga, Naofumi Aoki, Yoshinori Dobashi (Hokkaido Univ.), Tetsuya Kojima (NITTC)
pp. 46 - 50

SP2021-12
A Study on Error Correction for Improving the Accuracy of Acoustic Models
Saki Anazawa, Naofumi Aoki, Yoshinori Dobashi (Hokkaido Univ.)
pp. 51 - 52

SP2021-13
A Research Related to the Fricative Sound Determination in Digital Pattern Playback
Hiroki Otake, Naofumi Aoki, Kosei Ozeki, Yoshinori Dobashi (Hokkaido Univ.)
pp. 53 - 56

SP2021-14
Study on the background cancellation system for speech privacy
Jiangning Huang, Akinori Ito (Tohoku Univ.)
pp. 57 - 62

SP2021-15
Simulation of Body-conducted Speech and Synthesis of One's Own Voice with a Sound-proof Earmuff and Bone-conduction Microphones
Ruiyan Chen, Tazuko Nishimura, Nobuaki Minematsu, Daisuke Saito (UTokyo)
pp. 63 - 68

SP2021-16
How logical properties in speech are processed in the brain -- Digital Linguistics --
Kumon Tokumaru (Writer)
pp. 69 - 74

SP2021-17
Investigation on fine-tuning with image classification networks for deep neural network-based musical instrument classification
Yuki Shiroma, Yuma Kinoshita, Sayaka Shiota, Hitoshi Kiya (TMU)
pp. 75 - 79

SP2021-18
Dynamic Display of Guidelines in Interactive Speech Synthesizer
Daiki Goto (Hokkai Gakuen Univ.), Naofumi Aoki, Keisuke ai (Hokkaido Univ.), Kunitoshi Motoki (Hokkai Gakuen Univ.)
pp. 80 - 84

SP2021-19
Preliminary study on synthesizing relaxing voices -- from a perspective of recognized/evoked emotions and acoustic features --
Yuki Watanabe, Shuichi Sakamoto (Tohoku Univ.), Takayuki Hoshi, Yoshiki Nagatani, Manabu Nakano (Pixie Dust Technologies)
pp. 85 - 90

SP2021-20
Unseen speaker's Voice Conversion by FaderNetVC with Speaker Feature Extractor
Takumi Isako, Takuya Kishida, Toru Nakashika (UEC)
pp. 91 - 96

SP2021-21
Development of ultrasonic signal classification system using deep learning
Kosei Ozeki, Naofumi Aoki, Yoshinori Dobashi (Hokkaido Univ.), Kenichi Ikeda, Hiroshi Yasuda (SST)
pp. 97 - 100

SP2021-22
Source Separation for Asynchronous Recordings of Conversation Using Time-Frequency Masking and Independent Vector Analysis
Haruki Nammoku, Kouei Yamaoka, Yukoh Wakabayashi, Nobutaka Ono (TMU)
pp. 101 - 106

SP2021-23
Neural speech synthesis using local phrase dependency structure information
Nobuyoshi Kaiki, Sakriani Sakti, Satoshi Nakamura (NAIST)
pp. 107 - 112

Note: Each article is a technical report without peer review, and its polished version will be published elsewhere.


The Institute of Electronics, Information and Communication Engineers (IEICE), Japan