MQE: Unleashing the Power of Interaction with Multi-agent Quadruped Environment

1 Institute for Interdisciplinary Information Sciences, Tsinghua University
2 AI Institute, Beijing University of Posts and Telecommunications
3 Zhipu AI, 4 4Paradigm Inc., 5 Shanghai Qi Zhi Institute
6 Shanghai Artificial Intelligence Laboratory

Agents try to herd sheep in parallel simulations created by MQE.

Abstract

The advent of deep reinforcement learning (DRL) has significantly advanced the field of robotics, particularly in the control and coordination of quadruped robots. However, the complexity of real-world tasks often necessitates the deployment of multi-robot systems capable of sophisticated interaction and collaboration. To address this need, we introduce the Multi-agent Quadruped Environment (MQE), a novel platform designed to facilitate the development and evaluation of multi-agent reinforcement learning (MARL) algorithms in realistic and dynamic scenarios. MQE emphasizes complex interactions between robots and objects, hierarchical policy structures, and challenging evaluation scenarios that reflect real-world applications. We present a series of collaborative and competitive tasks within MQE, ranging from simple coordination to complex adversarial interactions, and benchmark state-of-the-art MARL algorithms. Our findings indicate that hierarchical reinforcement learning can simplify task learning, but also highlight the need for advanced algorithms capable of handling the intricate dynamics of multi-agent interactions. MQE serves as a stepping stone towards bridging the gap between simulation and practical deployment, offering a rich environment for future research in multi-agent systems and robot learning.
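To give a concrete picture of how a MARL algorithm would consume a multi-agent quadruped task like those described above, the sketch below shows a per-agent observe-act rollout loop. It is purely illustrative: the ToyMultiAgentEnv class, random_policy function, and the observation/action dimensions are placeholders chosen for this example and are not MQE's actual API.

# Minimal, hypothetical sketch of a multi-agent rollout loop.
# All names and dimensions here are illustrative assumptions, not MQE's API.
import numpy as np

class ToyMultiAgentEnv:
    """Stand-in for a multi-agent quadruped task with `num_agents` robots."""
    def __init__(self, num_agents=2, obs_dim=48, act_dim=12):
        self.num_agents, self.obs_dim, self.act_dim = num_agents, obs_dim, act_dim

    def reset(self):
        # One observation vector per agent.
        return np.zeros((self.num_agents, self.obs_dim), dtype=np.float32)

    def step(self, actions):
        # Each agent submits its own action every simulation step.
        assert actions.shape == (self.num_agents, self.act_dim)
        obs = np.random.randn(self.num_agents, self.obs_dim).astype(np.float32)
        rewards = np.zeros(self.num_agents, dtype=np.float32)  # per-agent reward
        done = False
        return obs, rewards, done, {}

def random_policy(obs, act_dim=12):
    # Placeholder for a learned (possibly hierarchical) multi-agent policy.
    return np.random.uniform(-1.0, 1.0, size=(obs.shape[0], act_dim)).astype(np.float32)

if __name__ == "__main__":
    env = ToyMultiAgentEnv(num_agents=2)
    obs = env.reset()
    for _ in range(100):
        actions = random_policy(obs)
        obs, rewards, done, info = env.step(actions)
        if done:
            obs = env.reset()

In a hierarchical setup of the kind the abstract refers to, the placeholder policy above would instead emit high-level commands (for example, target velocities) that a pretrained low-level locomotion controller converts into joint-level actions.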

Video Presentation

BibTeX

@misc{xiong2024mqe,
      title={MQE: Unleashing the Power of Interaction with Multi-agent Quadruped Environment}, 
      author={Ziyan Xiong and Bo Chen and Shiyu Huang and Wei-Wei Tu and Zhaofeng He and Yang Gao},
      year={2024},
      eprint={2403.16015},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}