Reward Value-Based Goal Selection for Agents’ Cooperative Route Learning Without Communication in Reward and Goal Dynamism

Abstract

This paper proposes a goal selection method that lets agents obtain the maximum reward value per time step through noncommunicative learning. In particular, the method aims to enable agents to cooperate under dynamism of reward values and goal locations. Adapting to these dynamisms lets agents learn cooperative actions that follow changing transportation tasks and changing incomes/rewards, for example when a storehouse switches between transporting heavy/valuable items and light/valueless items. Concretely, this paper extends the previous noncommunicative cooperative action learning method (Profit Minimizing Reinforcement Learning with Oblivion of Memory: PMRL-OM) and introduces two unified conditions that combine the number of time steps and the reward value. The first condition is the approximated number of time steps computed as if the expected reward values were the same for all purposes, and the second is the minimum number of time steps divided by the reward value. The proposed method makes all agents learn to achieve the purposes in ascending order of the condition values, after which each agent learns a cooperative policy with PMRL-OM as in the previous method. This paper analyzes the unified conditions and derives that the condition based on the approximated time steps combines the two evaluations with almost equal weight, unlike the other condition; that is, it helps the agents select appropriate purposes among candidates whose two evaluations differ only slightly. This paper empirically tests the performance of PMRL-OM with the two conditions against plain PMRL-OM in three grid world problems whose goal locations and reward values change dynamically. The results show that the unified conditions perform better than PMRL-OM without them in the grid world problems; in particular, it is clear that the condition based on the approximated time steps can direct the agents toward the appropriate goals.
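To make the goal-selection step concrete, below is a minimal Python sketch of ranking candidate goals by the two unified conditions and learning them in ascending order of the condition value. Only the second condition (minimum time steps divided by reward value) follows the description above directly; the exact form of the first condition, the discount factor, and all goal data are assumptions for illustration, not the paper's implementation.

```python
import math

# Minimal sketch of the goal-selection step, NOT the paper's implementation.
# Goal data, the discount factor, and the exact form of the first condition
# are assumptions; only "minimum time steps divided by the reward value" is
# taken directly from the abstract above.

def steps_per_reward(steps, reward):
    """Second unified condition: minimum number of time steps to the goal
    divided by its reward value."""
    return steps / reward

def equal_reward_steps(steps, reward, reference_reward, gamma=0.9):
    """First unified condition (assumed form): approximate how many time
    steps the goal would need if every goal paid `reference_reward`, by
    converting the reward gap into steps through the discount factor,
    i.e. solving reference_reward * gamma**t == reward * gamma**steps."""
    return steps + math.log(reward / reference_reward) / math.log(gamma)

def learning_order(goals, condition):
    """Goals sorted in ascending order of the condition value; the proposed
    method has every agent learn to achieve them in this order before the
    usual PMRL-OM cooperative-policy learning."""
    return sorted(goals, key=lambda g: condition(*goals[g]))

if __name__ == "__main__":
    # goal -> (minimum time steps, reward value); illustrative numbers only
    goals = {"G1": (2, 1.0), "G2": (20, 8.0), "G3": (9, 4.0)}
    reference = max(reward for _, reward in goals.values())

    print(learning_order(goals, steps_per_reward))
    print(learning_order(
        goals, lambda s, r: equal_reward_steps(s, r, reference)))
```

With these illustrative numbers the two conditions produce different goal orderings, which is the point of the paper's analysis: the first condition weighs the step and reward evaluations more evenly than the simple steps-per-reward ratio.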

Publication Information

Title: Reward Value-Based Goal Selection for Agents’ Cooperative Route Learning Without Communication in Reward and Goal Dynamism
Authors: Fumito Uwano and Keiki Takadama
Journal: SN Computer Science (Springer)
Details: Volume 1, Issue 3, 2020

Bibtex or Download

Fumito Uwano, Keiki Takadama. Reward Value-Based Goal Selection for Agents' Cooperative Route Learning Without Communication in Reward and Goal Dynamism. SN Computer Science, 1(3), 2020.
@article{uwano2020reward,
  title={Reward Value-Based Goal Selection for Agents' Cooperative Route Learning Without Communication in Reward and Goal Dynamism},
  author={Fumito Uwano and Keiki Takadama},
  journal={SN Computer Science},
  year={2020},
  volume={1},
  number={3},
  pages={},
  publisher={Springer}
}