This paper focuses on multi-agent cooperation, which is generally difficult to achieve without sufficient information about the other agents, and proposes a reinforcement learning method that introduces an internal reward to enable such cooperation without that information. To guarantee that the cooperation is achieved, the paper theoretically derives a condition for selecting appropriate actions by changing the internal rewards given to the agents, and extends two reinforcement learning methods (Q-learning and Profit Sharing) so that the agents acquire appropriate Q-values updated according to the derived condition. Concretely, an internal reward changes only when the agent finds a better solution than its current one. Intensive simulations on maze problems as a testbed revealed the following implications: (1) the proposed method successfully enables the agents to select their own appropriate cooperative actions, which allows them to reach their goals in the minimum number of steps, whereas the conventional methods (i.e., Q-learning and Profit Sharing) cannot always achieve the minimum steps; and (2) the proposed method based on Profit Sharing performs as well as the proposed method based on Q-learning.
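As a rough illustration of the idea, the following is a minimal, single-agent sketch (not the authors' method): the goal reward is treated as an internal reward that is changed only when the agent finds a shorter path than its current best. All names and the specific adjustment rule are assumptions for illustration; the paper's derived condition and its multi-agent coordination are not reproduced here.

import random
from collections import defaultdict

# Minimal single-agent sketch (illustrative names, not the authors' code):
# Q-learning on a small grid maze where the reward at the goal is an
# *internal* reward, changed only when the agent finds a shorter path
# than its current best, as described in the abstract.

SIZE, START, GOAL = 5, (0, 0), (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def step(state, action):
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    return nxt, nxt == GOAL

def greedy(q, state):
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

def episode(q, internal_reward):
    state, steps = START, 0
    while True:
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(q, state)
        nxt, done = step(state, action)
        reward = internal_reward if done else 0.0          # the goal pays the internal reward
        target = reward + (0.0 if done else GAMMA * max(q[(nxt, a)] for a in ACTIONS))
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state, steps = nxt, steps + 1
        if done:
            return steps

def train(episodes=500):
    q, internal_reward, best_steps = defaultdict(float), 1.0, float("inf")
    for _ in range(episodes):
        steps = episode(q, internal_reward)
        if steps < best_steps:            # a better solution than the current one was found,
            best_steps = steps            # so (and only so) the internal reward is changed;
            internal_reward *= 1.1        # illustrative adjustment rule, not the paper's derived condition
    return q, best_steps

if __name__ == "__main__":
    _, best = train()
    print("shortest path found:", best)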
Title: Reinforcement Learning with Internal Reward for Multi-Agent Cooperation: A Theoretical Approach
Authors: Fumito Uwano, Naoki Tatebe, Masaya Nakata, Tim Kovacs and Keiki Takadama
Published in: EAI Endorsed Transactions on Collaborative Computing: Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (BICT 2015)
Details: Volume 2, Number 8, New York, USA, December 2015
@inproceedings{uwano2016reinforcement,
  title     = {Reinforcement Learning with Internal Reward for Multi-Agent Cooperation: A Theoretical Approach},
  author    = {Fumito Uwano and Naoki Tatebe and Masaya Nakata and Tim Kovacs and Keiki Takadama},
  booktitle = {EAI Endorsed Transactions on Collaborative Computing},
  year      = {2016},
  volume    = {2},
  number    = {8},
  month     = {May},
  publisher = {ACM}
}