Paper Abstract and Keywords |
Presentation |
2023-01-29 15:05
Predictions and Attentions Acquired by Vision Transformer with Source-Target Attention from Dilated Convolutions on Small Data Sets Tatsuki Shimura, Katsumi Tadamura, Toshikazu Samura (Yamaguchi Univ) NLP2022-104 NC2022-88 |
Abstract |
(in Japanese) |
(See Japanese page) |
(in English) |
Vision Transformer (ViT) requires large data sets during the pre-training phase to achieve high classification accuracy on downstream data sets. It has been proposed that a ViT with a convolutional input structure reduces the pre-training cost. In this study, we propose a ViT with source-target attention from dilated convolutions. We show that the proposed ViT acquires the same accuracy and attention as a conventional ViT trained with a large data set, even when the amount of data is reduced in the pre-training phase. |
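The abstract describes an architecture in which ViT tokens attend, via source-target (cross) attention, to features produced by dilated convolutions. The exact design is given in the full paper; the sketch below is only a generic illustration of that idea under assumed shapes: patch embeddings form the queries (target), multi-rate dilated-convolution feature tokens form the keys and values (source), with a single attention layer and random weights. All sizes, kernel counts, and dilation rates here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dilated_conv2d(x, w, dilation):
    """Valid 2D convolution of single-channel image x with kernel w,
    with kernel taps spaced `dilation` pixels apart."""
    kh, kw = w.shape
    eh, ew = (kh - 1) * dilation + 1, (kw - 1) * dilation + 1  # effective field
    H, W = x.shape
    out = np.empty((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + eh:dilation, j:j + ew:dilation] * w)
    return out

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def source_target_attention(target, source, d=8):
    """Cross-attention: target tokens give queries; source tokens give keys/values."""
    Wq = rng.standard_normal((target.shape[1], d)) / np.sqrt(target.shape[1])
    Wk = rng.standard_normal((source.shape[1], d)) / np.sqrt(source.shape[1])
    Wv = rng.standard_normal((source.shape[1], d)) / np.sqrt(source.shape[1])
    Q, K, V = target @ Wq, source @ Wk, source @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))  # (n_target, n_source), rows sum to 1
    return A @ V, A

img = rng.standard_normal((12, 12))

# Target: 4x4 patch embeddings (9 tokens of dim 16), as in a plain ViT input.
patches = np.stack([img[i:i + 4, j:j + 4].reshape(-1)
                    for i in range(0, 12, 4) for j in range(0, 12, 4)])

# Source: feature tokens from dilated convolutions (4 kernels per dilation rate).
kernels = rng.standard_normal((4, 3, 3))
source = np.concatenate([
    np.stack([dilated_conv2d(img, k, d).reshape(-1) for k in kernels], axis=1)
    for d in (1, 2)  # two dilation rates give multi-scale context
])

out, attn = source_target_attention(patches, source)
```

The attention map `attn` shows, per patch token, which dilated-convolution positions it draws on; in the paper this kind of map is what is compared against a conventionally pre-trained ViT.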
Keyword |
(in Japanese) |
(See Japanese page) |
(in English) |
Vision Transformer / Source-Target Attention / Dilated Convolution / Small data |
Reference Info. |
IEICE Tech. Rep., vol. 122, no. 373, NLP2022-104, pp. 123-128, Jan. 2023. |
Paper # |
NLP2022-104 |
Date of Issue |
2023-01-21 (NLP, NC) |
ISSN |
Online edition: ISSN 2432-6380 |
Copyright and reproduction |
All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034) |
Conference Information |
Committee |
NC NLP |
Conference Date |
2023-01-28 - 2023-01-29 |
Place (in Japanese) |
(See Japanese page) |
Place (in English) |
Future University Hakodate |
Topics (in Japanese) |
(See Japanese page) |
Topics (in English) |
NC, NLP, etc. |
Paper Information |
Registration To |
NLP |
Conference Code |
2023-01-NC-NLP |
Language |
Japanese |
Title (in Japanese) |
(See Japanese page) |
Sub Title (in Japanese) |
(See Japanese page) |
Title (in English) |
Predictions and Attentions Acquired by Vision Transformer with Source-Target Attention from Dilated Convolutions on Small Data Sets |
Keyword(1) |
Vision Transformer |
Keyword(2) |
Source-Target Attention |
Keyword(3) |
Dilated Convolution |
Keyword(4) |
Small data |
1st Author's Name |
Tatsuki Shimura |
1st Author's Affiliation |
Yamaguchi University (Yamaguchi Univ) |
2nd Author's Name |
Katsumi Tadamura |
2nd Author's Affiliation |
Yamaguchi University (Yamaguchi Univ) |
3rd Author's Name |
Toshikazu Samura |
3rd Author's Affiliation |
Yamaguchi University (Yamaguchi Univ) |
Speaker |
Author-1 |
Date Time |
2023-01-29 15:05:00 |
Presentation Time |
25 minutes |
Registration for |
NLP |
Paper # |
NLP2022-104, NC2022-88 |
Volume (vol) |
vol.122 |
Number (no) |
no.373(NLP), no.374(NC) |
Page |
pp.123-128 |
#Pages |
6 |
Date of Issue |
2023-01-21 (NLP, NC) |
|