Committee |
Date Time |
Place |
Paper Title / Authors |
Abstract |
Paper # |
RCS, SR, SRW (Joint) |
2024-03-15 16:15 |
Tokyo |
The University of Tokyo (Hongo Campus), and online (Primary: On-site, Secondary: Online) |
Study on Small Cell ON/OFF Control Using Different Frequency Cell Information Takaharu Kobayashi, Takashi Dateki (Fujitsu) RCS2023-292 |
In this paper, we propose small cell ON/OFF control without using UE position information and information on the proximi... [more] |
RCS2023-292 pp.176-181 |
AI |
2024-03-01 15:00 |
Aichi |
Room0221, Bldg.2-C, Nagoya Institute of Technology |
Performance Improvement for Mobile Edge Computing with Multi-Agent Deep Reinforcement Learning Kohei Suzuki, Toshiharu Sugawara (Waseda Univ.) AI2023-42 |
In this paper, we propose a method for mobile edge computing using unmanned aerial vehicles (UAVs) to improve both the n... [more] |
AI2023-42 pp.31-36 |
NS, IN (Joint) |
2024-03-01 11:35 |
Okinawa |
Okinawa Convention Center |
Application of a Deep Reinforcement Learning Algorithm to Virtual Machine Migration Control in Multi-Stage Information Processing Systems Yuki Kojitani (Okayama Univ.), Kazutoshi Nakane (Nagoya Univ.), Yuya Tarutani (Okayama Univ.), Celimuge Wu (UEC), Yusheng Ji (NII), Tokumi Yokohira (Okayama Univ.), Tutomu Murase (Nagoya Univ.), Yukinobu Fukushima (Okayama Univ.) IN2023-87 |
This paper tackles a virtual machine (VM) migration control problem to maximize the progress (accuracy) of information p... [more] |
IN2023-87 pp.130-135 |
SR |
2024-01-25 13:10 |
Nagano |
Nagano-ken JA building (Primary: On-site, Secondary: Online) |
[Short Paper]
Performance Evaluations on Deep Reinforcement Learning based Analog Beamforming in Dynamic Scenarios Daisuke Sasaki, Xiaoyan Wang, Hang Zhou (Ibaraki Univ.), Masahiro Umehira (Nanzan Univ.) SR2023-72 |
As the development of small cell architecture in B5G networks, on one hand, the frequency utilization efficiency could b... [more] |
SR2023-72 pp.22-24 |
SR |
2024-01-25 13:25 |
Nagano |
Nagano-ken JA building (Primary: On-site, Secondary: Online) |
[Short Paper]
A Performance Evaluation on Deep Reinforcement Learning based Transmit Power Control for Uplink NOMA Kaito Sawada, Xiaoyan Wang, Hang Zhou (Ibaraki Univ.), Masahiro Umehira (Nanzan Univ.) SR2023-73 |
Non-Orthogonal Multiple Access (NOMA) technology has attracted much attention in order to improve frequency utilization ... [more] |
SR2023-73 pp.25-27 |
SS, MSS |
2024-01-17 14:30 |
Ishikawa |
(Primary: On-site, Secondary: Online) |
Extrinsically Rewarded Soft Q Imitation Learning with Discriminator Ryoma Furuyama, Daiki Kuyoshi, Satoshi Yamane (Kanazawa Univ.) MSS2023-55 SS2023-34 |
Imitation learning is often used in addition to reinforcement learning in environments where reward design is difficult ... [more] |
MSS2023-55 SS2023-34 pp.19-24 |
SS, MSS |
2024-01-18 11:30 |
Ishikawa |
(Primary: On-site, Secondary: Online) |
Deep Reinforcement Learning Using LLM's Studying Papers and Intrinsic Rewards Sota Nagano, Satoshi Yamane (Kanazawa Univ.) MSS2023-64 SS2023-43 |
Research combining deep reinforcement learning with a large language model (LLM) produced high scores even for open-worl... [more] |
MSS2023-64 SS2023-43 pp.70-75 |
MSS, CAS, IPSJ-AL [detail] |
2023-11-16 16:30 |
Okinawa |
|
Deep Reinforcement Learning for Multi-Agent Systems with Temporal Logic Specifications Keita Terashima, Koichi Kobayashi, Yuh Yamashita (Hokkaido Univ.) CAS2023-70 MSS2023-40 |
In multi-agent systems, the challenge is how a group of agents collaborate to achieve a common goal. In our previous wor... [more] |
CAS2023-70 MSS2023-40 pp.54-58 |
RISING (3rd) |
2023-10-31 13:00 |
Hokkaido |
Kaderu 2・7 (Sapporo) |
[Poster Presentation]
Wireless MAC Protocol Adaptation Method Considering Application Layer Koshiro Aruga, Takeo Fujii (UEC) |
In recent years, with the development of the Internet of Things (IoT), the number of devices performing wireless communi... [more] |
|
RISING (3rd) |
2023-10-31 13:00 |
Hokkaido |
Kaderu 2・7 (Sapporo) |
[Poster Presentation]
Blind center frequency estimation using deep reinforcement learning for modulation scheme identification Shunsuke Uehashi, Yasutaka Yamashita, Mari Ochiai (Mitsubishi Electric Corp.) |
Identification of modulation schemes in wireless signals is a crucial technology for analyzing the status of wireless co... [more] |
|
AI |
2023-09-13 09:40 |
Hokkaido |
|
AI2023-13 |
(To be available after the conference date) [more] |
AI2023-13 pp.66-71 |
SeMI, RCS, RCC, NS, SR (Joint) |
2023-07-13 16:25 |
Osaka |
Osaka University Nakanoshima Center + Online (Primary: On-site, Secondary: Online) |
[Short Paper]
A study of Cross-Layer Adaptation using Learning in Wireless MAC Protocols Koshiro Aruga, Takeo Fujii (UEC) SR2023-40 |
In recent years, with the development of wireless communication technology, networks have become larger and more complex... [more] |
SR2023-40 pp.55-57 |
SRW |
2023-06-12 14:25 |
Tokyo |
Kikai-Shinko-Kaikan Bldg. (Primary: On-site, Secondary: Online) |
[Invited Lecture]
An Analog Beamforming Control Method using Deep Reinforcement Learning Daisuke Sasaki, Hang Zhou, Xiaoyan Wang (Ibaraki Univ.), Masahiro Umehira (Nanzan Univ.) SRW2023-8 |
As the development of small cell configurations in B5G networks, the frequency utilization efficiency could be significa... [more] |
SRW2023-8 pp.39-44 |
DC, CPSY, IPSJ-SLDM, IPSJ-EMB, IPSJ-ARC [detail] |
2023-03-24 13:15 |
Kagoshima |
Amagi Town Disaster Prevention Center (Tokunoshima) (Primary: On-site, Secondary: Online) |
CPSY2022-46 DC2022-105 |
(To be available after the conference date) [more] |
CPSY2022-46 DC2022-105 pp.72-76 |
DC, CPSY, IPSJ-SLDM, IPSJ-EMB, IPSJ-ARC [detail] |
2023-03-24 14:30 |
Kagoshima |
Amagi Town Disaster Prevention Center (Tokunoshima) (Primary: On-site, Secondary: Online) |
A study of reinforcement learning-based AGV route scheduling using local graph information Hirotada Sugimoto, Shaswot Shresthamali, Masaaki Kondo (Keio Univ.) CPSY2022-49 DC2022-108 |
In this paper, we propose a reinforcement learning-based route planning method for multiple AGVs. The proposed scheduli... [more] |
CPSY2022-49 DC2022-108 pp.89-94 |
DC, CPSY, IPSJ-SLDM, IPSJ-EMB, IPSJ-ARC [detail] |
2023-03-25 13:40 |
Kagoshima |
Amagi Town Disaster Prevention Center (Tokunoshima) (Primary: On-site, Secondary: Online) |
CPSY2022-53 DC2022-112 |
(To be available after the conference date) [more] |
CPSY2022-53 DC2022-112 pp.112-117 |
IMQ, IE, MVE, CQ (Joint) [detail] |
2023-03-16 16:05 |
Okinawa |
Okinawaken Seinenkaikan (Naha-shi) (Primary: On-site, Secondary: Online) |
Automated Driving Methods Using Federated Learning Koki Ono, Celimuge Wu, Tsutomu Yoshinaga (UEC) CQ2022-99 |
When learning autonomous driving behavior using machine learning, a huge amount of driving data is required, and a large... [more] |
CQ2022-99 pp.96-101 |
NC, MBE (Joint) |
2023-03-14 15:50 |
Tokyo |
The Univ. of Electro-Communications (Primary: On-site, Secondary: Online) |
Curiosity-based deep reinforcement learning with profit sharing Kouki Hayashi, Kazuma Yamaguchi, Yukari Yamauchi (Nihon Univ.) NC2022-107 |
Recently, "DQN with PS," which incorporates profit sharing in deep reinforcement learning, was proposed. This method sp... [more] |
NC2022-107 pp.90-93 |
IN, NS (Joint) |
2023-03-02 13:30 |
Okinawa |
Okinawa Convention Centre + Online (Primary: On-site, Secondary: Online) |
Online Deep Reinforcement Learning for Network Slice Reconfiguration under Variable Number of Service Function Chains Kairi Tokuda, Takehiro Sato, Eiji Oki (Kyoto Univ.) NS2022-181 |
This paper proposes Deep reinforcement learning model for Network Slice Reconfiguration with Dummy and Partial greedy ex... [more] |
NS2022-181 pp.83-88 |
HCS |
2023-03-03 13:10 |
Shizuoka |
Tokoha University (Kusanagi Campus) (Primary: On-site, Secondary: Online) |
Study on the resilient role in coordinated behavior of a triad using deep reinforcement learning and rule-based modeling Jun Ichikawa (Shizuoka Univ.), Kazushi Tsutsui, Keisuke Fujii (Nagoya Univ.) HCS2022-93 |
Group can often implement a task, which is difficult to do alone, or achieve higher performance than an individual. For ... [more] |
HCS2022-93 pp.100-105 |