Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
IE, MVE, CQ, IMQ (Joint) [detail] |
2024-03-13 13:20 |
Okinawa |
Okinawa Sangyo Shien Center (Primary: On-site, Secondary: Online) |
[Invited Talk]
The Past, Current and Future of Fashion Image Retrieval: Toward a User-Centered Orientation Ling Xiao (UTokyo) IMQ2023-28 IE2023-83 MVE2023-57 |
Fashion image retrieval (FIR) plays a pivotal role in enhancing the online shopping experience on retail and e-commerce ... [more] |
IMQ2023-28 IE2023-83 MVE2023-57 pp.87-89 |
RCS, SR, SRW (Joint) |
2024-03-14 11:40 |
Tokyo |
The University of Tokyo (Hongo Campus), and online (Primary: On-site, Secondary: Online) |
A Study of 6G human interface
-- Exploring the possibility of using cross-modal phenomena of the five senses -- Honoka Sasaki, Kaisei Namiki, Satsuki Noda, Yusuke Yoshida, Masabumi Katagiri (Yokosuka High School), Gen-ichiro Ohta (YRP) RCS2023-275 |
In 2030, the 6G era will require human interfaces for all five senses. This study focused on the undeveloped senses of s... [more] |
RCS2023-275 pp.121-124 |
HCGSYMPO (2nd) |
2023-12-11 - 2023-12-13 |
Fukuoka |
Asia-Pacific Import Mart (Kitakyushu) (Primary: On-site, Secondary: Online) |
Compact Emotional Space Simulating Human Perception of Emotion Based on Crossmodal Contrastive Learning with Soft Labels Seiichi Harata, Takuto Sakuma, Shohei Kato (NITech) |
This study aims to explore data-driven emotion modeling by extracting the latent space of emotions from human emotion ex... [more] |
|
CS |
2023-11-10 09:40 |
Shizuoka |
Plaza Verde |
[Invited Lecture]
An AI Platform for Smart City Digital Twins Koji Zettsu (NICT) CS2023-74 |
In recent years, extensive research and development has been carried out to collect, monitor, and manage urban data to faci... [more] |
CS2023-74 pp.42-46 |
HIP |
2023-10-11 09:25 |
Kyoto |
Kyoto Keizai Center (Primary: On-site, Secondary: Online) |
Does audiovisual correspondence alter pupil size? Ryosuke Niimi (Niigata Univ.) HIP2023-67 |
This study experimentally tested whether the pupil size of human participants depends on audio-visual correspondence. Th... [more] |
HIP2023-67 pp.35-38 |
MI |
2023-03-06 09:18 |
Okinawa |
OKINAWA SEINENKAIKAN (Primary: On-site, Secondary: Online) |
[Short Paper]
FUSE-2, Aided Diagnosis Method for Dementia using Cross-Modality Translation with Alignment by Deep Learning Kodai Yamashita, Souta Okabe, Hiroyuki Kudo (Univ. of Tsukuba) MI2022-73 |
The diagnosis of cognitive impairment can be performed using functional imaging techniques such as PET and SPECT, which ... [more] |
MI2022-73 pp.3-4 |
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2023-02-21 11:00 |
Hokkaido |
Hokkaido Univ. |
A note on text prompt tuning in cross-modal image retrieval for a specific database Huaying Zhang, Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama (Hokkaido Univ.) |
With the development of storage devices and the Internet, the number of users creating personal image databases has incr... [more] |
|
HIP |
2022-12-23 10:00 |
Miyagi |
Research Institute of Electrical Communication (Primary: On-site, Secondary: Online) |
Audiovisual correspondence and linguistic environment
-- A comparison between Japanese and Mandarin speakers -- Kentaro Yamamoto, Yamei Zhang (Kyushu Univ.) HIP2022-66 |
In this study, we focused on the effect of changing pitch of sounds on the judgment of visual motion direction, and exam... [more] |
HIP2022-66 pp.44-47 |
PRMU, IPSJ-CVIM |
2022-03-11 17:10 |
Online |
Online |
PRMU2021-90 |
No English abstract [more] |
PRMU2021-90 pp.186-191 |
HIP, VRSJ |
2021-02-18 11:10 |
Online |
Online |
Stimulus-response compatibility between visual lightness and vocal pitch. Yusuke Suzuki, Masayoshi Nagai (Ritsumeikan Univ.) HIP2020-72 |
Previous studies showed that the correspondence between perceptual and motor systems, such as the correspondence between... [more] |
HIP2020-72 pp.1-5 |
HIP, VRSJ |
2021-02-18 11:35 |
Online |
Online |
Cross-modal correspondence between instrument sounds and colors
-- its relationship with the period of musical experience -- Hirotaka Yamasaki, Wataru Teramoto (Kumamoto Univ.) HIP2020-73 |
Studies have investigated crossmodal correspondence between instrumental sounds and colors. Here we investigated whether... [more] |
HIP2020-73 pp.6-11 |
HIP, VRSJ |
2021-02-18 16:40 |
Online |
Online |
Quantitative Measurement of "zoku-zoku" sensation by approaching visual stimulus Ryo Teraoka, Naoki Kuroda, Wataru Teramoto (Kumamoto Univ.) HIP2020-81 |
When someone is located nearby, we can sometimes sense the situation with a "zoku-zoku" (frisson) sensation even i... [more] |
HIP2020-81 pp.44-47 |
PRMU |
2020-09-02 16:15 |
Online |
Online |
Pseudo Perfect Coding for Discrete Cross-modal Hashing Yusuke Masuda, Gou Koutaki (Kumamoto Univ.) PRMU2020-16 |
Recently, cross-modal hashing has attracted attention in the field of image processing. Compressing data which have di... [more] |
PRMU2020-16 pp.53-57 |
ITE-HI, IE, ITS, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2020-02-28 16:50 |
Hokkaido |
Hokkaido Univ. (Cancelled but technical report was issued) |
A Note on Image Retrieval Focusing on Objects in Images
-- Improving Retrieval Performance Based on Object Recognition Using RetinaNet -- Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama (Hokkaido Univ.) |
An image retrieval method that uses text as a query is effective because it enables users to prepare queries easily. In these m... [more] |
|
HIP, ASJ-H |
2020-02-16 10:25 |
Okinawa |
IT Souzou-kan Bldg. (Naha) |
Effects of modulated facial expression feedback on taste perception Rong Ma, Katsunori Okajima (YNU) HIP2019-88 |
Visual information affects taste perception. However, it has been unclear how visual information indirectly reflects the... [more] |
HIP2019-88 pp.57-61 |
IMQ |
2019-12-20 16:45 |
Tokyo |
|
Effect of Visual and Auditory Adaptation on Material Perception Takumi Nakamura, Kuangzhe Xu, Toshihiko Matsuka, Keita Hirai (Chiba Univ.) IMQ2019-11 |
In this paper, we investigated the effects of visual and auditory adaptation on material perception. The target material... [more] |
IMQ2019-11 pp.9-13 |
HIP |
2019-12-19 16:30 |
Miyagi |
RIEC, Tohoku University |
The measurement of spatial extent of audiovisual crossmodal attention by EEG Shin Ono, Shuichi Sakamoto, Ryo Teraoka, Yoshiyuki Sato, Yasuhiro Hatori, Chia-huei Tseng, Ichiro Kuriki, Satoshi Shioiri (Tohoku Univ.) HIP2019-69 |
We developed an experimental procedure to measure the spatial extent of crossmodal attention with Steady-State Responses... [more] |
HIP2019-69 pp.25-30 |
AP, RCS (Joint) |
2019-11-20 14:15 |
Saga |
Saga Univ. |
Analysis for Gap Waveguide Considering Structural Periodicity and Design of Mode Converter Keisuke Ejiri, Takashi Tomura, Jiro Hirokawa (Tokyo Tech) AP2019-112 |
A gap waveguide consists of parallel metal plates and a waffle-iron structure, which has an electromagnetic bandgap. The waffle... [more] |
AP2019-112 pp.35-40 |
HCS |
2019-10-26 14:25 |
Tokyo |
Nihon Univ. |
Procedural Text Generation from a Photo Sequence Taichi Nishimura (Kyoto Univ.), Atsushi Hashimoto (OSX), Shinsuke Mori (Kyoto Univ.) HCS2019-47 |
In this paper, we tackle a problem to generate a procedural text from a photo sequence, which aims to help users create ... [more] |
HCS2019-47 pp.41-46 |
PRMU |
2019-10-18 13:55 |
Tokyo |
|
[Fellow Memorial Lecture]
Knowledge Acquisition and Media Search Based on Crossmodal Information Processing Kunio Kashino (NTT) PRMU2019-36 |
Media search can be roughly divided into two methods: a method in which fragments of media content are used as queries a... [more] |
PRMU2019-36 p.27 |