IEICE Technical Committee Submission System
Conference Paper Details

Abstract / Keywords
Presentation 2018-10-12 13:30
Automatic Target Recognition based on Generative Adversarial Networks for Synthetic Aperture Radar Images
Yang-Lang Chang, Bo-Yao Chen, ○Chih-Yuan Chu, Sina Hadipour (NTUT), Hirokazu Kobayashi (OIT), SANE2018-51
Abstract Synthetic Aperture Radar (SAR) is an all-day, all-weather imaging technique widely used in national defense, remote sensing, disaster prevention, interferometry, and forest and urban footprint mapping. Recently, convolutional neural networks have been used for SAR automatic target recognition (SAR-ATR) and classification. The drawback, however, is the difficulty of obtaining sufficient and reliable data to train a high-accuracy classifier. As the number of training samples is reduced, the SAR-ATR accuracy rate decreases rapidly. Our study proposes a deep learning model based on the Generative Adversarial Network (GAN) to overcome the problem of insufficient training samples and improve target classification performance. A GAN is composed of two networks: a generator and a discriminator. The generator produces SAR images from a series of random numbers. The discriminator is a classifier trained with supervised learning to distinguish real from fake SAR images. The generator and the discriminator compete with each other during training in order to learn robust and reliable target features in SAR images. However, a traditional GAN cannot solve the classification problems of SAR-ATR. Our network is a GAN variant called the Auxiliary Classifier GAN (AC-GAN). The structure of AC-GAN allows large datasets to be separated into subsets by class, with a generator and discriminator trained for each subset. In this experiment, the SAR images in the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset were used to train the network. Using all the images in the dataset for training resulted in a classification accuracy of 98%. When less than one-fifth of the images were used, AC-GAN still reached an accuracy of 90%. This is a considerable improvement over traditional CNNs, whose accuracy rapidly decreased to 80% with the same number of training samples.
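The abstract describes the core AC-GAN layout: a generator that synthesizes SAR chips from noise conditioned on a class label, and a discriminator with two output heads, one judging real vs. fake and one (the auxiliary classifier) predicting the target class. The following PyTorch sketch is a rough illustration of that two-headed layout and one discriminator update only; the layer sizes, the multiplicative label-embedding conditioning, and the 64x64 single-channel input with 10 classes are our assumptions for illustration, not the authors' reported architecture.

import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM = 10, 100  # 10 target classes as in MSTAR (assumption)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # The class label is folded into the noise vector via an embedding.
        self.label_emb = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128 * 16 * 16),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z, labels):
        return self.net(z * self.label_emb(labels))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Flatten(),
        )
        self.src_head = nn.Linear(128 * 16 * 16, 1)            # real-vs-fake logit
        self.cls_head = nn.Linear(128 * 16 * 16, NUM_CLASSES)  # auxiliary classifier

    def forward(self, x):
        h = self.features(x)
        return self.src_head(h), self.cls_head(h)

def discriminator_step(D, G, real, labels, opt_d):
    # One update: both heads are trained jointly, on real and generated chips.
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    z = torch.randn(real.size(0), LATENT_DIM)
    fake = G(z, labels).detach()                       # freeze G while updating D
    src_r, cls_r = D(real)
    src_f, cls_f = D(fake)
    loss = (bce(src_r, torch.ones_like(src_r))         # real chips -> "real"
            + bce(src_f, torch.zeros_like(src_f))      # generated chips -> "fake"
            + ce(cls_r, labels) + ce(cls_f, labels))   # class loss on both
    opt_d.zero_grad(); loss.backward(); opt_d.step()
    return loss.item()

In this setup the class head is what ultimately serves as the SAR-ATR classifier; per the abstract, the adversarial game with generated samples is what lets it retain accuracy when real training chips are scarce.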
Keywords synthetic aperture radar / automatic target recognition / generative adversarial networks (GAN) / auxiliary classifier GAN
Citation IEICE Tech. Rep., vol. 118, no. 239, SANE2018-51, pp. 41-44, Oct. 2018.
Report number SANE2018-51
Issue date 2018-10-05 (SANE)
ISSN Online edition: ISSN 2432-6380
Copyright Copyright in the papers published in the Technical Report belongs to IEICE. (License numbers: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)
PDF download SANE2018-51

Technical Committee Information
Committee SANE
Dates 2018-10-12 - 2018-10-12
Venue The University of Electro-Communications
Topic Radar signal processing, remote sensing, and general issues
Paper Information Details
Submitted to SANE
Conference code 2018-10-SANE
Language of paper English
Title (English) Automatic Target Recognition based on Generative Adversarial Networks for Synthetic Aperture Radar Images
Keyword (1) synthetic aperture radar
Keyword (2) automatic target recognition
Keyword (3) generative adversarial networks (GAN)
Keyword (4) auxiliary classifier GAN
1st author Yang-Lang Chang, National Taipei University of Technology (NTUT)
2nd author Bo-Yao Chen, National Taipei University of Technology (NTUT)
3rd author Chih-Yuan Chu, National Taipei University of Technology (NTUT)
4th author Sina Hadipour, National Taipei University of Technology (NTUT)
5th author Hirokazu Kobayashi, Osaka Institute of Technology (OIT)
Speaker 3rd author
Presentation date 2018-10-12 13:30:00
Duration 25 minutes
Committee SANE
Report number SANE2018-51
Volume vol. 118
Number no. 239
Pages pp. 41-44
Page count 4
Issue date 2018-10-05 (SANE)

