IEICE Technical Committee Submission System
Conference Paper Information

Paper Abstract and Keywords
Presentation 2023-06-24 13:50
Domain adaptation of speech recognition models based on self-supervised learning using target domain speech
Takahiro Kinouchi (TUT), Atsunori Ogawa (NTT), Yuko Wakabayashi, Norihide Kitaoka (TUT) SP2023-19
Abstract (in Japanese) (See Japanese page) 
(in English) In this study, we propose a domain adaptation method that uses only speech data from the target domain, without any transcribed text data from that domain, starting from a speech recognition model pre-trained in the source domain. Speech recognition is used in a wide range of services and businesses, and recognition accuracy in each domain is known to depend on the amount of speech data available for that domain. Ideally, a highly accurate model would be trained or fine-tuned from scratch on large amounts of speech data with transcriptions; however, preparing such data every time a model is built for a new domain is expensive and difficult. We therefore focus on the fact that speech data alone is relatively inexpensive to collect. Under these conditions, we build an Encoder-Decoder speech recognition model whose Encoder is a wav2vec 2.0 model additionally pre-trained on a large amount of target-domain speech only, and we adapt the model to the target domain by fine-tuning it on a large transcribed corpus from a non-target domain. The proposed method consists of three steps: 1) additional pre-training of wav2vec 2.0, 2) fine-tuning of wav2vec 2.0, and 3) building a Joint CTC/Transformer model with wav2vec 2.0 as the Encoder. On the target-domain evaluation data, this method improved the character error rate by approximately 3.8 points compared with a model whose Encoder was not pre-trained on target-domain speech.
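The reported gain is measured in character error rate (CER). As a generic reference for how this metric is computed (a minimal sketch using Levenshtein edit distance, not the authors' evaluation code):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance divided by
    the number of reference characters."""
    r, h = list(reference), list(hypothesis)
    # DP table: d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting all of r[:i]
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting all of h[:j]
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(r)][len(h)] / len(r)

print(cer("recognition", "recogmition"))  # one substitution over 11 chars
```

A "3.8 pts" improvement means the CER value (usually reported as a percentage) dropped by 3.8 percentage points on the target-domain test set.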
Keyword (in Japanese) (See Japanese page) 
(in English) wav2vec 2.0 / domain adaptation / end-to-end speech recognition / Encoder-Decoder model
Reference Info. IEICE Tech. Rep., vol. 123, no. 88, SP2023-19, pp. 91-96, June 2023.
Paper # SP2023-19 
Date of Issue 2023-06-16 (SP) 
ISSN Online edition: ISSN 2432-6380
Copyright and reproduction
All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)
Download PDF SP2023-19

Conference Information
Committee SP IPSJ-MUS IPSJ-SLP  
Conference Date 2023-06-23 - 2023-06-24 
Place (in Japanese) (See Japanese page) 
Place (in English)  
Topics (in Japanese) (See Japanese page) 
Topics (in English)  
Paper Information
Registration To SP 
Conference Code 2023-06-SP-MUS-SLP 
Language Japanese 
Title (in Japanese) (See Japanese page) 
Sub Title (in Japanese) (See Japanese page) 
Title (in English) Domain adaptation of speech recognition models based on self-supervised learning using target domain speech 
Sub Title (in English)  
Keyword(1) wav2vec 2.0  
Keyword(2) domain adaptation  
Keyword(3) end-to-end speech recognition  
Keyword(4) Encoder-Decoder model  
1st Author's Name Takahiro Kinouchi  
1st Author's Affiliation Toyohashi University of Technology (TUT)
2nd Author's Name Atsunori Ogawa  
2nd Author's Affiliation Nippon Telegraph and Telephone Corporation (NTT)
3rd Author's Name Yuko Wakabayashi  
3rd Author's Affiliation Toyohashi University of Technology (TUT)
4th Author's Name Norihide Kitaoka  
4th Author's Affiliation Toyohashi University of Technology (TUT)
Speaker Author-1 
Date Time 2023-06-24 13:50:00 
Presentation Time 140 minutes 
Registration for SP 
Paper # SP2023-19 
Volume (vol) vol.123 
Number (no) no.88 
Page pp.91-96 
#Pages 6
Date of Issue 2023-06-16 (SP) 




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan