Learning Complex Multi-Agent Policies in Presence of an Adversary

Published in IROS 2020 Workshop on Trends and Advances in Machine Learning and Automated Reasoning for Intelligent Robots and Systems, 2020

Recommended citation: Ghiya, S. (2020). "Learning Complex Multi-Agent Policies in Presence of an Adversary" IROS 2020 Workshop on Trends and Advances in Machine Learning and Automated Reasoning for Intelligent Robots and Systems. 1(1). https://arxiv.org/abs/2008.07698

In recent years, there has been outstanding work on applying deep reinforcement learning to multi-agent settings, and adversaries are often present in such scenarios. We address the requirements of such a setting by implementing a graph-based multi-agent deep reinforcement learning algorithm. In this work, we consider the scenario of multi-agent deception, in which multiple agents must learn to cooperate and communicate to deceive an adversary. We employ a two-stage learning process to teach the cooperating agents these deceptive behaviors. Our experiments show that our approach supports curriculum learning to increase the number of cooperating agents in the environment and enables a team of agents to learn complex behaviors that successfully deceive an adversary.
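The combination of a two-stage learning process with a curriculum over the team size can be sketched roughly as follows. This is a minimal illustrative outline only, not the authors' implementation; all names (`make_env`, `train_episode`, the dict-based environment and policy stand-ins) are hypothetical placeholders, and the assumed structure is: stage 1 trains cooperation without the adversary, stage 2 adds the adversary, and the curriculum grows the number of cooperating agents.

```python
# Hypothetical sketch of two-stage training with a curriculum over team size.
# All functions and data structures here are illustrative placeholders.

def make_env(n_agents, with_adversary):
    """Placeholder environment factory; returns a config dict."""
    return {"n_agents": n_agents, "adversary": with_adversary}

def train_episode(env, policy):
    """Placeholder training step; returns a dummy episode return."""
    return float(env["n_agents"])  # stand-in for a learned return

def curriculum_train(max_agents=4, episodes_per_stage=2):
    policy = {}  # stand-in for shared graph-based policy parameters
    log = []
    for n in range(2, max_agents + 1):       # curriculum: grow the team
        for with_adv in (False, True):       # stage 1, then stage 2
            env = make_env(n, with_adv)
            for _ in range(episodes_per_stage):
                log.append((n, with_adv, train_episode(env, policy)))
    return log

log = curriculum_train()
```

The key design point this sketch illustrates is that the same (e.g. graph-based) policy is reused across curriculum steps, so behaviors learned with a small team and no adversary can bootstrap learning with a larger team facing an adversary.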