Committee |
Date Time |
Place |
Paper Title / Authors |
Abstract |
Paper # |
ITE-HI, IE, ITS, ITE-MMS, ITE-ME, ITE-AIT [detail] |
2020-02-28 15:10 |
Hokkaido |
Hokkaido Univ. (Cancelled but technical report was issued) |
Unpaired Learning for Noise-free, Scale Invariant, and Interpretable Image Enhancement Satoshi Kosugi, Toshihiko Yamasaki (Univ. of Tokyo) ITS2019-52 IE2019-90 |
This paper tackles unpaired image enhancement, a task of learning a mapping function which transforms input images into ... [more] |
ITS2019-52 IE2019-90 pp.311-316 |
HCS |
2020-01-26 10:30 |
Oita |
Room407, J:COM HorutoHall OITA (Oita) |
Acquisition of Function Words That Represent Dialogue Acts
-- Constructing a Hybrid Model of Automatic and Deliberate Processing -- Akane Matsushima, Natsuki Oka, Chie Fukada (Kyoto Institute of Technology), Yuko Yoshimura (Kanazawa Univ.), Koji Kawahara (Nagoya University of Foreign Studies) HCS2019-70 |
(To be available after the conference date) [more] |
HCS2019-70 pp.93-98 |
SR |
2019-12-06 13:50 |
Okinawa |
Ishigaki City Hall (Ishigaki Island) |
Performance Evaluation of Machine Learning Based Channel Selection Algorithm Implemented on IoT Sensor Devices and Its Application to Data Collection System for Building Monitoring So Hasegawa, Ryoma Kitagawa, Takumi Ito, Takashi Nakajima (TUS), Song-Ju Kim (KU), Yozo Shoji (NICT), Mikio Hasegawa (TUS) SR2019-106 |
The IoT wave has spread and the number of IoT devices has rapidly increased. Numerous IoT devices may generate enormou... [more]
SR2019-106 pp.103-108 |
MIKA (2nd) |
2019-10-03 11:15 |
Hokkaido |
Hokkaido Univ. |
[Poster Presentation]
Improving Learning Efficiency of Graph-Based Reinforcement Learning for Wireless LAN Channel Selection Kazuki Ohtsu, Shotaro Kamiya, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura (Kyoto Univ.) |
This report proposes to improve learning efficiency with graph isomorphism for reinforcement learning-based wireless loc... [more] |
|
IBISML |
2019-03-05 14:30 |
Tokyo |
RIKEN AIP |
Efficient Exploration by Variational Information Maximizing Exploration on Reinforcement Learning Kazuki Doi, Keigo Okawa (Gifu Univ.), Motoki Shiga (Gifu Univ./JST/RIKEN) IBISML2018-107 |
In reinforcement learning, the policy function may not be optimized properly if the observed state space is limited to lo... [more]
IBISML2018-107 pp.17-22 |
ITS, IE, ITE-MMS, ITE-HI, ITE-ME, ITE-AIT [detail] |
2018-02-15 11:30 |
Hokkaido |
Hokkaido Univ. |
Stochastic Discrete Event Simulation Environment for Autonomous Cart Fleet for Artificial Intelligent Training and Reinforcement Learning Algorithms Naohisa Hashimoto, Ali Boyali, Shin Kato (AIST), Takao Otsuka, Kazuhisa Mizushima, Manabu Omae (Keio Univ) ITS2017-66 IE2017-98 |
In this report, we give details of a Discrete Event Simulation (DES) framework coded in a Python environment for simulation... [more]
ITS2017-66 IE2017-98 pp.29-33 |
HCS |
2018-01-26 15:00 |
Kagoshima |
Daiichi Institute of Technology |
Computational Model of Stepwise Acquisition of Function Words Representing Mental Attitude Ryosuke Kanajiri, Natsuki Oka, Chie Fukada, Kazuaki Tanaka (KIT) HCS2017-75 |
We propose a computational model that gradually acquires the meaning of Japanese sentence-final particles representing m... [more] |
HCS2017-75 pp.53-57 |
ICM, CQ, NS, NV (Joint) |
2017-11-17 15:00 |
Kagawa |
|
[Encouragement Talk]
Reinforcement Learning based Automated Process Generation for Virtual Network Update Manabu Nakanoya (NEC) ICM2017-32 |
Spreading the network virtualization and softwarization technology using network function virtualization (NFV) and softwa... [more]
ICM2017-32 pp.63-68 |
IBISML |
2017-11-10 13:00 |
Tokyo |
Univ. of Tokyo |
Hierarchical Reinforcement Learning Based on Return-Weighted Density Estimation Takayuki Osa (UTokyo/RIKEN), Masashi Sugiyama (RIKEN/UTokyo) IBISML2017-67 |
We propose a hierarchical reinforcement learning (HRL) method for learning the optimal policy from a multi-modal reward... [more]
IBISML2017-67 pp.243-249 |
CCS |
2017-06-29 13:30 |
Ibaraki |
Ibaraki Univ. |
A Complex-Valued Reinforcement Learning Method Using Complex-Valued Neural Networks Masaki Mochida, Hidehiro Nakano, Arata Miyauchi (Tokyo City Univ.) CCS2017-1 |
This paper proposes a method to approximate the action-value function in complex-valued reinforcement learning by usin... [more]
CCS2017-1 pp.1-5 |
MBE, NC (Joint) |
2017-03-13 10:25 |
Tokyo |
Kikai-Shinko-Kaikan Bldg. |
Estimation of the change of agent's behavior strategy using state-action history Shihori Uchida, Shigeyuki Oba, Shin Ishii (Kyoto Univ.) NC2016-65 |
Reinforcement learning (RL) is a model of the learning process of animals and intelligent agents to obtain the optimal behav... [more]
NC2016-65 pp.7-12 |
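Many of the entries in this list build on the standard reinforcement-learning loop of value updates under an exploratory policy. As a purely illustrative aid, and not the method of any paper listed here, a minimal tabular Q-learning sketch on a hypothetical 5-state chain task (all names and constants are assumptions):

```python
# Minimal tabular Q-learning on a hypothetical 5-state chain MDP.
# Illustrative only: generic textbook Q-learning, not the estimation
# method of the entry above. All names and constants are assumptions.
import random

N_STATES, GOAL = 5, 4        # states 0..4; reaching state 4 ends the episode
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(s, a):
    """Deterministic dynamics: a=0 moves left, a=1 moves right.
    Reward 1.0 only on reaching the goal state."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection (ties broken toward "right")
            if random.random() < EPS:
                a = random.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2, r, done = step(s, a)
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            q[s][a] += ALPHA * (target - q[s][a])  # TD(0) update
            s = s2
    return q

q = train()
# Moving right should dominate in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(GOAL)))
```

The state-action values accumulated by such a loop are exactly the kind of state-action history that behavior-analysis work like the NC2016-65 entry above takes as its observable input.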
NS, IN (Joint) |
2017-03-02 11:00 |
Okinawa |
OKINAWA ZANPAMISAKI ROYAL HOTEL |
A method of coordinating multiple control algorithms for NFV Akito Suzuki, Masahiro Kobayashi (NTT), Yousuke Takahashi (NTT COM), Shigeaki Harada, Ryoichi Kawahara (NTT) IN2016-103 |
Network Functions Virtualization (NFV) has the potential to enable a variety of network services by flexibly combining mul... [more]
IN2016-103 pp.37-42 |
IBISML |
2016-11-17 14:00 |
Kyoto |
Kyoto Univ. |
Incremental Natural Actor Critic with Importance Weight Aware Update Ryo Iwaki (Osaka Univ.), Hiroki Yokoyama (Tamagawa Univ.), Minoru Asada (Osaka Univ.) IBISML2016-81 |
Appropriate tuning of the step-size parameter is crucial for reinforcement learning, as well as for other machine learning techn... [more]
IBISML2016-81 pp.251-257 |
NC, IPSJ-BIO, IBISML, IPSJ-MPS (Joint) [detail] |
2015-06-23 16:35 |
Okinawa |
Okinawa Institute of Science and Technology |
Inverse reinforcement learning based on behaviors of a learning agent Shunsuke Sakurai, Shigeyuki Oba, Shin Ishii (Kyoto Univ.) IBISML2015-15
An appropriate design of the reward function is important for reinforcement learning to efficiently obtain an optimal policy... [more]
IBISML2015-15 pp.95-99 |
RCS |
2015-04-17 08:30 |
Oita |
Yufuin, Yufugoukan |
Evaluation of Positioning Accuracy on QZSS Terminal for GPS Complementary and Reinforcement Hiroshi Oguma, Keita Norishima, Konatsu Suehiro (NIT,Toyama), Yuji Miyake, Suguru Kameda, Akinori Taira, Noriharu Suematsu, Tadashi Takagi, Kazuo Tsubouchi (Tohoku Univ.) RCS2015-9 |
Quasi-Zenith Satellite System (QZSS) is a satellite navigation system consisting of several QZSS satellites in highly i... [more]
RCS2015-9 pp.41-46 |
NC, MBE |
2015-03-16 14:45 |
Tokyo |
Tamagawa University |
Reinforcement Learning based on Internal-Dynamics-Derived Exploration Using a Chaotic Neural Network Katsunari Shibata, Yuta Sakashita (Oita Univ.) MBE2014-166 NC2014-117 |
As a basic concept for the emergence of intelligence through autonomous learning, exploration that is essential in reinfor... [more]
MBE2014-166 NC2014-117 pp.277-282 |
CAS, MSS, IPSJ-AL [detail] |
2014-11-21 13:30 |
Okinawa |
Nobumoto Ohama Memorial Hall (Ishigaki island) |
On optimal LLP supervisory control of discrete event systems based on reinforcement learning Hijiri Umemoto, Tatsushi Yamasaki (Setsunan Univ.) CAS2014-102 MSS2014-66 |
For large-scale and time-varying discrete event systems, LLP (Limited Lookahead Policy) supervisory control has been proposed. ... [more]
CAS2014-102 MSS2014-66 pp.135-140 |
NC, MBE (Joint) |
2014-03-18 13:40 |
Tokyo |
Tamagawa University |
Flexible shaping reinforcement learning for environmental changing by using value of aggregating state to state-action value Shinnosuke Oka, Kazushi Murakoshi (Toyohashi Univ. Tech.) NC2013-138 |
Shaping reinforcement learning is a method to speed up the learning process by providing an additional shaping reward that ... [more]
NC2013-138 pp.287-292 |
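The shaping idea mentioned in the abstract above can be illustrated, in its general form only, with the classic potential-based shaping term F(s, s') = γ·φ(s') − φ(s) added to the environment reward. This is the standard textbook formulation, not the paper's aggregated-state variant; the potential function and all constants below are assumptions for a toy chain task:

```python
# Potential-based reward shaping added to tabular Q-learning: the agent
# learns from r + F(s, s') with F(s, s') = GAMMA * phi(s2) - phi(s).
# A generic sketch of the shaping idea only, not this paper's
# aggregated-state method; phi and all constants are assumptions.
import random

N_STATES, GOAL = 5, 4        # states 0..4; reaching state 4 ends the episode
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def phi(s):
    # Hypothetical potential: states nearer the goal get higher potential,
    # which densifies the otherwise sparse goal-only reward.
    return float(s)

def step(s, a):
    """Deterministic chain: a=0 moves left, a=1 moves right."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def train(shaped=True, episodes=300, seed=1):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = random.randrange(2) if random.random() < EPS else (1 if q[s][1] >= q[s][0] else 0)
            s2, r, done = step(s, a)
            if shaped:
                r += GAMMA * phi(s2) - phi(s)  # shaping term F(s, s')
            q[s][a] += ALPHA * (r + (0.0 if done else GAMMA * max(q[s2])) - q[s][a])
            s = s2
    return q

q = train()
print(all(q[s][1] > q[s][0] for s in range(GOAL)))  # right action dominates
```

Potential-based shaping is known to leave the greedy policy of the learned values essentially unchanged while densifying sparse rewards, which is what makes it a safe baseline against which shaping variants like the one in this entry are compared.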
NC, MBE (Joint) |
2014-03-18 14:00 |
Tokyo |
Tamagawa University |
A profit sharing reinforcement learning method using hierarchical reward propagation function based on action history Zhenhua Gong, Hidehiro Nakano, Arata Miyauchi (Tokyo City Univ.) NC2013-139 |
A Profit Sharing Reinforcement Learning (PSRL) method can realize robust learning not only in Markov Decision Process (MDP... [more]
NC2013-139 pp.293-298 |
HCS |
2014-02-02 09:40 |
Kagoshima |
Kagoshima University (Korimoto Campus) |
Representation and Acquisition of the Meaning of Function Words and Abstract Words
-- Computational Model With Dynamic Module Combination -- Natsuki Oka, Xia Wu, Kaoru Kohyama, Chie Fukada, Motoyuki Ozeki (Kyoto Inst. of Tech.) HCS2013-87 |
Function words and abstract words do not refer to concrete objects. The aim of this study is to represent the meaning of... [more] |
HCS2013-87 pp.101-106 |