Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
DE, IPSJ-DBS |
2023-12-26 14:20 |
Tokyo |
Institute of Industrial Science, The University of Tokyo |
A study on selective reuse of local policies in transfer learning agents Hiroya Hamada, Fumiaki Saitoh (CIT) |
(To be available after the conference date) |
|
NS, RCS (Joint) |
2023-12-14 16:50 |
Fukuoka |
Kyushu Institute of Technology Tobata campus, and Online (Primary: On-site, Secondary: Online) |
Dueling Networks Architecture in the Deep Reinforcement Learning for the Automated ICT System Design Tianchen Zhou (Sophia Univ.), Yutaka Yakuwa (NEC), Natsuki Okamura, Hiroyuki Hochigai (Sophia Univ.), Takayuki Kuroda (NEC), Ikuko E. Yairi (Sophia Univ.) |
(To be available after the conference date; a generic dueling-network sketch follows this entry) |
|
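For reference, the dueling network architecture named in the NS/RCS entry above is a standard deep Q-network variant that splits the head into a state-value stream V(s) and an advantage stream A(s,a), recombined as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a). The minimal PyTorch sketch below only illustrates that textbook decomposition under assumed layer and action sizes; it is not the authors' model for automated ICT system design.

import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Generic dueling head: shared trunk, then separate value and advantage streams."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value_head = nn.Linear(hidden, 1)                 # V(s)
        self.advantage_head = nn.Linear(hidden, num_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        value = self.value_head(h)
        advantage = self.advantage_head(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a); centring the advantage keeps V and A identifiable.
        return value + advantage - advantage.mean(dim=-1, keepdim=True)

# Hypothetical sizes: an 8-dimensional state and 4 discrete actions.
q_net = DuelingQNetwork(state_dim=8, num_actions=4)
print(q_net(torch.zeros(1, 8)).shape)  # torch.Size([1, 4])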
NS, RCS (Joint) |
2023-12-15 11:45 |
Fukuoka |
Kyushu Institute of Technology Tobata campus, and Online (Primary: On-site, Secondary: Online) |
Deep Reinforcement Learning Based Computing Resource Allocation in Fog Radio Access Networks Tong Zhaowei (Kyushu Univ.), Ahmad Gendia (Al-Azhar Univ.), Osamu Muta (Kyushu Univ.) |
|
|
NS, RCS (Joint) |
2023-12-15 15:15 |
Fukuoka |
Kyushu Institute of Technology Tobata campus, and Online (Primary: On-site, Secondary: Online) |
Investigation on Shortening the Time to Fix Transmission Timing in Wireless Sensor Networks Using Reinforcement Learning Kureha Ikeda, Yasushi Fuwa, David Asano (Shinshu Univ.) |
(To be available after the conference date) |
|
HCGSYMPO (2nd) |
2023-12-11 - 2023-12-13 |
Fukuoka |
Asia Pacific Import Mart (Kitakyushu) (Primary: On-site, Secondary: Online) |
Transition and analysis by mutual learning within a group in the incomplete information game "Hol's der Geier" Shintaro Abe, Kazuki Takahashi, Takashi Takekawa (Kogakuin Univ) |
(To be available after the conference date) |
|
NC, MBE (Joint) |
2023-11-27 10:30 |
Osaka |
Kindai Univ. (Primary: On-site, Secondary: Online) |
Improving the reproduction of animal intelligence using reinforcement learning with World Model Takumi Fukaya, Hirokazu Tanaka (Tokyo City Univ.) NC2023-34 |
One way to evaluate artificial intelligence models that reproduce animal intelligence is to have reinforcement learning ... |
NC2023-34 pp.6-9 |
MSS, CAS, IPSJ-AL |
2023-11-16 16:30 |
Okinawa |
|
Deep Reinforcement Learning for Multi-Agent Systems with Temporal Logic Specifications Keita Terashima, Koichi Kobayashi, Yuh Yamashita (Hokkaido Univ.) CAS2023-70 MSS2023-40 |
In multi-agent systems, the challenge is how a group of agents collaborate to achieve a common goal. In our previous wor... |
CAS2023-70 MSS2023-40 pp.54-58 |
RISING (3rd) |
2023-10-31 13:00 |
Hokkaido |
Kaderu 2・7 (Sapporo) |
[Poster Presentation] Wireless MAC Protocol Adaptation Method Considering Application Layer Koshiro Aruga, Takeo Fujii (UEC) |
In recent years, with the development of the Internet of Things (IoT), the number of devices performing wireless communi... |
|
RISING (3rd) |
2023-10-31 13:00 |
Hokkaido |
Kaderu 2・7 (Sapporo) |
[Poster Presentation] Blind center frequency estimation using deep reinforcement learning for modulation scheme identification. Shunsuke Uehashi, Yasutaka Yamashita, Mari Ochiai (Mitsubishi Electric Corp.) |
Identification of modulation schemes in wireless signals is a crucial technology for analyzing the status of wireless co... |
|
NC, MBE (Joint) |
2023-10-27 13:30 |
Miyagi |
Tohoku Univ. (Primary: On-site, Secondary: Online) |
Significance of single cell recording -- Reverse engineering from supplementary motor cortex neuronal activity to reinforcement learning model -- Nao Matsumoto, Naoki M. Tamura, Hajime Mushiake (Tohoku Univ. Sch. Med.), Kazuhiro Sakamoto (TMPU) NC2023-25 |
Elucidating the regions of the brain that are active in a given cognitive activity is an important mission in neuroscien... |
NC2023-25 pp.1-5 |
AI |
2023-09-12 15:15 |
Hokkaido |
|
A Study on the Implementation of Cooperative CAVs by Sharing the Observation Information Using Simulations and Considerations Based on Qualitative Evaluation Ken Matsuda (Graduate School of FUN), Ei-Ichi Osawa (FUN) AI2023-4 |
This study focuses on cooperative connected autonomous vehicles (cooperative CAVs). This research aims to propose a simu... |
AI2023-4 pp.19-24 |
AI |
2023-09-12 14:55 |
Hokkaido |
|
Event-Driven Reinforcement Learning with Semi-Markov Models for Stable Air-Conditioning Control Hayato Chujo, Sachiyo Arai (Chiba Univ) AI2023-16 |
This study deals with air conditioning control that optimizes room temperature by switching heaters on/off. The control ... (a generic semi-Markov Q-learning sketch follows this entry) |
AI2023-16 pp.83-86 |
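For reference, the semi-Markov framing named in the AI committee entry above is commonly handled with an SMDP-style Q-learning update, in which the discount is raised to the power of the sojourn time between decision events. The sketch below is a generic textbook update with hypothetical temperature-band states and heater on/off actions; it is not the authors' controller, and the state, action, and reward definitions are assumptions.

def smdp_q_update(Q, state, action, reward, next_state, tau, alpha=0.1, gamma=0.99):
    """Generic SMDP Q-learning update; reward is assumed to be the return accumulated
    over the sojourn of length tau, and gamma**tau discounts across that interval."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma ** tau * best_next - Q[state][action])

# Tiny usage example with hypothetical temperature-band states and heater actions.
Q = {s: {"heater_on": 0.0, "heater_off": 0.0} for s in ("cold", "comfortable", "hot")}
smdp_q_update(Q, "cold", "heater_on", reward=1.0, next_state="comfortable", tau=3)
print(round(Q["cold"]["heater_on"], 3))  # 0.1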
AP |
2023-08-31 13:25 |
Tokyo |
KOZO KEIKAKU ENGINEERING Inc. (Primary: On-site, Secondary: Online) |
2-layer Joint Interference Coordination for A Cellular System with Cluster-wise Distributed MU-MIMO Chang Ge, Sijie Xia, Qiang Chen, Fumiyuki Adachi (Tohoku Univ.) AP2023-70 |
In a cellular system with distributed MU-MIMO, virtual small cells (called the user-clusters) are formed to reduce the h... |
AP2023-70 pp.21-24 |
CCS, IN (Joint) |
2023-08-03 11:09 |
Hokkaido |
Banya-no-yu |
Reinforcement learning-based control of CWmin and Carrier Sense Threshold for IEEE 802.11 WLAN. Yuto Higashiyama, Kosuke Sanada, Hiroyuki Hatano, Kazuo Mori (Mie Univ.) CCS2023-20 |
Distributed Coordination Function (DCF) is a basic channel access protocol in IEEE 802.11 Wireless Local Area Networks (W... |
CCS2023-20 pp.19-24 |
SeMI, RCS, RCC, NS, SR (Joint) |
2023-07-13 16:25 |
Osaka |
Osaka University Nakanoshima Center + Online (Primary: On-site, Secondary: Online) |
[Short Paper] A study of Cross-Layer Adaptation using Learning in Wireless MAC Protocols Koshiro Aruga, Takeo Fujii (UEC) SR2023-40 |
In recent years, with the development of wireless communication technology, networks have become larger and more complex... |
SR2023-40 pp.55-57 |
SRW |
2023-06-12 14:25 |
Tokyo |
Kikai-Shinko-Kaikan Bldg. (Primary: On-site, Secondary: Online) |
[Invited Lecture] An Analog Beamforming Control Method using Deep Reinforcement Learning Daisuke Sasaki, Hang Zhou, Xiaoyan Wang (Ibaraki Univ.), Masahiro Umehira (Nanzan Univ.) SRW2023-8 |
With the development of small cell configurations in B5G networks, the frequency utilization efficiency could be significa... |
SRW2023-8 pp.39-44 |
CS, CQ (Joint) |
2023-05-18 15:10 |
Kagawa |
Rexxam Hall (Kagawa Kenmin Hall) (Primary: On-site, Secondary: Online) |
On the performance of sorting out invalid jobs in scheduling using the policy gradient method for deadline-aware jobs Tatsuya Sagisaka, Kohei Shiomoto (TCU), Takashi Kurimoto (NII) CQ2023-1 |
When transferring data in the field of communication between data centers, existing methods such as Earliest Deadline Fi... (a generic policy-gradient sketch follows this entry) |
CQ2023-1 pp.1-6 |
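For reference, the policy gradient method named in the CS/CQ entry above is typically instantiated as a REINFORCE-style update that weights the log-probability of each chosen action by the discounted return that followed it. The minimal sketch below uses a placeholder two-action policy (e.g. schedule vs. defer a job), placeholder features, and random rewards; it is not the authors' scheduler, and every dimension and reward shown is an assumption.

import torch
import torch.nn as nn

# Placeholder policy: 4-dimensional job/queue features in, logits for 2 actions out.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(log_probs, rewards, gamma=0.99):
    """One REINFORCE step: loss = -sum_t log pi(a_t | s_t) * G_t over a single episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted returns G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One toy episode with random observations, just to exercise the update.
log_probs, rewards = [], []
for _ in range(5):
    obs = torch.randn(4)
    dist = torch.distributions.Categorical(logits=policy(obs))
    action = dist.sample()
    log_probs.append(dist.log_prob(action))
    rewards.append(1.0 if action.item() == 0 else 0.0)  # placeholder reward signal
reinforce_update(log_probs, rewards)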
DC, CPSY, IPSJ-SLDM, IPSJ-EMB, IPSJ-ARC |
2023-03-24 14:30 |
Kagoshima |
Amagi Town Disaster Prevention Center (Tokunoshima) (Primary: On-site, Secondary: Online) |
A study of reinforcement learning-based AGV route scheduling using local graph information Hirotada Sugimoto, Shaswot Shresthamali, Masaaki Kondo (Keio Univ.) CPSY2022-49 DC2022-108 |
In this paper, we propose a reinforcement learning-based route planning method for multiple AGVs. The proposed scheduli... |
CPSY2022-49 DC2022-108 pp.89-94 |
ICM |
2023-03-17 09:10 |
Okinawa |
Okinawa Prefectural Museum and Art Museum (Primary: On-site, Secondary: Online) |
Automation of human decision making by using reinforcement-learning for office work with PC Misa Fukai, Masashi Tadokoro, Haruo Oishi (NTT) ICM2022-50 |
AI technology is key to improving diversified and complex business operations while saving personnel. However, the technologies c... |
ICM2022-50 pp.31-36 |
NLP, MSS |
2023-03-17 16:25 |
Nagasaki |
(Primary: On-site, Secondary: Online) |
Investigation on improving diversity of options in option-critic reinforcement learning Aya Nakagawa, Hidehiro Nakano (Tokyo City Univ.) MSS2022-109 NLP2022-154 |
Recently, reinforcement learning has been attracting attention in various fields such as automatic control and game AI. ... |
MSS2022-109 NLP2022-154 pp.225-230 |