Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper # |
MVE, CQ, IMQ, IE (Joint) |
2025-03-05 13:20 |
Okinawa |
(Okinawa, Online) (Primary: On-site, Secondary: Online) |
Construction of Saliency Model Reflecting Differences in Resolutions -- Aiming to Derive a Model Applicable to High Resolutions -- Shun Ushioda, Yoshiaki Shishikui (Meidai) CQ2024-96 |
With the advancement of high-resolution imaging systems, their impact on the viewing experience has garnered increasing ... |
CQ2024-96 pp.27-32 |
HIP, HCS (Joint) |
2023-09-11 15:00 |
Ehime |
University of Human Environments (Ehime) |
Relationship between visual field size and head motion in eyepiece displays Sumio Yano (Shimane Univ.), Shuichi Ojima (Sojo Univ.) HCS2023-60 HIP2023-50 |
We examined the relationship between the size of the visual field and head motion by using an eyepiece-type display that ... |
HCS2023-60 HIP2023-50 pp.19-24 |
NC, IBISML, IPSJ-BIO, IPSJ-MPS |
2023-06-29 15:10 |
Okinawa |
OIST Conference Center (Okinawa, Online) (Primary: On-site, Secondary: Online) |
Selective Inference for DNN-driven Saliency Map Daiki Miwa (NITech), Vo Nguyen Le Duy (RIKEN), Tomohiro Shiraishi (Nagoya Univ.), Ichiro Takeuchi (Nagoya Univ./RIKEN) NC2023-5 IBISML2023-5 |
The usefulness of image classification using DNN models has been confirmed in various fields, but the prediction mechanism ... |
NC2023-5 IBISML2023-5 pp.30-34 |
IE, ITS, ITE-MMS, ITE-ME, ITE-AIT |
2023-02-22 11:00 |
Hokkaido |
Hokkaido Univ. (Hokkaido) |
Discussion and user study of displaying 360-degree video that follows RoI Yuuki Sawabe (UTokyo), Satoshi Ikehata (NII), Kiyoharu Aizawa (UTokyo) ITS2022-63 IE2022-80 |
Although 360° video images contain information in all directions, the user's viewing angle is limited, resulting in over... |
ITS2022-63 IE2022-80 pp.118-123 |
MBE, MICT, IEE-MBE |
2023-01-17 09:50 |
Saga |
(Saga) |
Oral Cytology Based on Representation Learning of Visually Salient Cells Kazuki Matsuo, Eiji Mitate, Tomoya Sakai (Nagasaki Univ.) MICT2022-44 MBE2022-44 |
We classify microscopically photographed cells for screening tests to find oral cancer in its early stages. Oral cancer ... |
MICT2022-44 MBE2022-44 pp.7-12 |
IE, ITS, ITE-AIT, ITE-ME, ITE-MMS |
2022-02-21 15:05 |
Online |
Online (Online) |
A Note on Personalized Saliency Prediction Based on User Similarity Considering Object Information in Images Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama (Hokkaido Univ.) |
This paper presents a personalized saliency map (PSM) prediction method using a small amount of gaze data based on user ... |
|
IE, ITS, ITE-AIT, ITE-ME, ITE-MMS |
2022-02-21 15:50 |
Online |
Online (Online) |
Statistical Analysis on Estimation of Laparoscopic Image Region of Interest Including Contrast Enhancement Based on Saliency Map Norifumi Kawabata (Hokkaido Univ.), Toshiya Nakaguchi (Chiba Univ.) ITS2021-34 IE2021-43 |
In medical settings such as hospitals, medical workers have many occasions to examine medical images visually. Therefore, ... |
ITS2021-34 IE2021-43 pp.55-60 |
HIP |
2021-10-22 13:35 |
Online |
Online (Online) |
A saliency estimation model for drivers' egocentric vision movies considering self-motion velocity Yuya Homma, Masashi Fujita, Takeshi Kohama (Kindai Univ.) HIP2021-44 |
In order to predict where a driver's attention should be directed during driving, Kodama et al. have developed a saliency ... |
HIP2021-44 pp.75-80 |
MVE, IMQ, IE, CQ (Joint) |
2021-03-02 09:20 |
Online |
Online (Online) |
Creation of a gazing point database when appreciating a painting and comparison with a saliency map Yusuke Nosaka (Tokai Univ.), Eriko Ishii (Kagoshima Prefectural College), Yuko Hoshino, Mitsuho Yamada (Tokai Univ.) CQ2020-111 |
It is said that painters draw their paintings so as to lead the viewer's gaze to the intended subject. We assumed that ... |
CQ2020-111 pp.16-21 |
HCGSYMPO (2nd) |
2020-12-15 - 2020-12-17 |
Online |
Online (Online) |
Effects of contextual saliency on users' attention: VR experimental psychology Yuki Harada, Junji Ohyama (AIST) |
Most studies have measured attentional characteristics in real-world environments where subjects are exposed to exogenous ... |
|
IMQ, IE, MVE, CQ (Joint) |
2019-03-15 10:15 |
Kagoshima |
Kagoshima University (Kagoshima) |
Person Re-identification with Simultaneous Use of Individual and Group Appearance Features Shingo Inami (Tokyo Univ. of Science), Daisuke Sugimura (Tsuda Univ.), Takayuki Hamamoto (Tokyo Univ. of Science) IMQ2018-58 IE2018-142 MVE2018-89 |
In this paper, we propose a method for person re-identification (ReID). Conventional methods have utilized appearance features ... |
IMQ2018-58 IE2018-142 MVE2018-89 pp.201-204 |
SIS |
2019-03-07 12:35 |
Tokyo |
Tokyo Univ. Science, Katsushika Campus (Tokyo) |
A Proposal of Color Quantization Method Taking Account of Saliency Yoshiaki Ueda, Seiichi Kojima, Noriaki Suetake, Eiji Uchino (Yamaguchi Univ.) SIS2018-52 |
The graphic interchange format (GIF), one of the most popular limited-color image formats, is used in various situations. The ... |
SIS2018-52 pp.81-84 |
HIP |
2018-08-03 09:25 |
Tokyo |
Tokyo Woman's Christian University (Tokyo) |
A Study of Presenting Information Based on Human Visual Characteristics Using Eye Tracking and Saliency Map Yusei Harigaya, Susumu Shirayama (Univ. of Tokyo) HIP2018-50 |
In recent years, researchers have been trying to assist human work by presenting visual information. However, inappropriate ... |
HIP2018-50 pp.49-52 |
PRMU, BioX |
2018-03-18 15:20 |
Tokyo |
(Tokyo) |
Saliency Map Estimation for Omni-Directional Image Considering Prior Distribution Tatsuya Suzuki, Takao Yamanaka (Sophia Univ.) BioX2017-50 PRMU2017-186 |
In recent years, deep learning techniques have been applied to the estimation of saliency maps, which represent prob... |
BioX2017-50 PRMU2017-186 pp.85-90 |
PRMU, MVE, IPSJ-CVIM |
2018-01-18 11:50 |
Osaka |
(Osaka) |
Comparison between saliency maps using top-down and bottom-up factors Shoichi Adachi, Aya Shiraiwa (Tottori Univ.), Shigang Li (Hiroshima City Univ.) PRMU2017-120 MVE2017-41 |
It is known that humans possess the ability to analyze complex scenes in real time. Based on this ability, saliency maps... |
PRMU2017-120 MVE2017-41 pp.75-80 |
HIP |
2017-10-24 10:00 |
Kyoto |
Kyoto Terrsa (Kyoto) |
Scan path simulation in free-viewing condition based on a probabilistic saliency map model Tomoya Okazaki, Takeshi Kohama (Kindai Univ.) HIP2017-69 |
Recently, saliency-based mathematical models have frequently been used to predict observers' gaze shifts. Although these models ... |
HIP2017-69 pp.57-60 |
MBE, NC (Joint) |
2017-05-26 09:55 |
Toyama |
Toyama Prefectural Univ. (Toyama) |
Characteristics of Gaze Estimation by Saliency Map Under Bottom-up Attention Yuma Kobayashi, Hironobu Takano, Kiyomi Nakamura (Toyama Pref. Univ.) MBE2017-2 |
A saliency map is an estimation model of bottom-up attention derived from physical features. However, the measur... |
MBE2017-2 pp.7-11 |
MBE, NC (Joint) |
2017-03-13 15:00 |
Tokyo |
Kikai-Shinko-Kaikan Bldg. (Tokyo) |
Saliency-Based Analysis of Event-Related Potentials during Movie Viewing Yasuyuki Hamada, Shin Ishii (Kyoto Univ.) NC2016-72 |
Human beings deal with a great amount of visual information in daily life. Though natural movies can imitate such daily visual in... |
NC2016-72 pp.49-53 |
ITS, IEE-ITS |
2017-03-07 15:15 |
Kyoto |
Kyoto Univ. (Kyoto) |
Estimating stress factors of an autonomous vehicle passenger using onboard videos labeled by several persons Kohei Hagihara (NAIST), Norimichi Ukita (TTI), Masayuki Kanbara (NAIST), Hironori Hagita (ATR) ITS2016-88 |
Currently, various research on autonomous driving is being conducted. Its purpose is to reduce traffic accidents du... |
ITS2016-88 pp.69-74 |
EID, ITE-IDY, ITE-HI, ITE-3DMT, IEE-OQD, SID-JC |
2016-10-28 13:30 |
Tokyo |
Kikai-Shinko-Kaikan Bldg (Tokyo) |
Comparison of Salient Feature for Material Appearance Shoji Yamamoto (TMCIT), Yuto Hirasawa, Norimichi Tsumura (Chiba Univ.) EID2016-6 |
Material appearance, especially gloss, is a very important visual cue for object recognition, related to surface smoothness ... |
EID2016-6 pp.5-8 |