Toward learning cooperative behavior for any number of agents, this paper proposes a multi-agent reinforcement learning method without communication, called PMRL-based Learning for Any number of Agents (PLAA). PLAA prevents agents from spending too much time reaching their goals and promotes local multi-agent cooperation without communication through PMRL, a previous method. To validate the effectiveness of PLAA, this paper compares it with Q-learning and two previous methods on 10 kinds of mazes with 2 and 3 agents. The experimental results revealed the following: (a) PLAA is the most effective method for cooperation among 2 and 3 agents; (b) PLAA enables the agents to cooperate with each other within a small number of iterations.
Title: Strategy for Learning Cooperative Behavior with Local Information for Multi-agent Systems
Authors: Fumito Uwano and Keiki Takadama
Venue: PRIMA 2018: Principles and Practice of Multi-Agent Systems
Details: Tokyo, Japan, October 2018, pp. 663-670
@inproceedings{uwano2018strategy,
title={Strategy for Learning Cooperative Behavior with Local Information for Multi-agent Systems},
author={Fumito Uwano and Keiki Takadama},
booktitle={PRIMA 2018: Principles and Practice of Multi-Agent Systems},
year={2018},
pages={663--670},
month=oct,
publisher={Springer}
}