# Introduction
Significant progress has been made in cooperative multi-agent reinforcement learning (MARL) under the paradigm of *centralised training with decentralised execution* (CTDE) [26, 16] in recent years, in both value-based [38, 30, 35, 31, 41, 29] and actor-critic [21, 7, 11, 6] approaches. Popular multi-agent actor-critic methods such as COMA [7] and MADDPG [21] learn a *centralised critic* with decentralised actors. The critic is centralised to make use of all available information (i.e., it can condition on the global state and the joint action) to estimate the joint action-value function $Q_{tot}$, unlike a *decentralised critic* that estimates the local action-value function $Q_a$ based only on individual observations and actions for each agent $a$.<sup>5</sup> Even though the joint action-value function
<sup>*</sup>Equal contribution. Correspondence to: Bei Peng [<bei.peng@cs.ox.ac.uk>](mailto:bei.peng@cs.ox.ac.uk)
<sup>†</sup>University of Oxford
<sup>‡</sup>Facebook AI Research
<sup>§</sup>Delft University of Technology
<sup>5</sup>COMA learns a single centralised critic for all cooperative agents due to parameter sharing. For each agent the critic has different inputs and can thus output different values for the same state and joint action. In MADDPG, each agent learns its own centralised critic, as it is designed for general multi-agent learning problems, including cooperative, competitive, and mixed settings.
these actor-critic methods can represent is not restricted, in practice they significantly underperform value-based methods like QMIX [30] on the challenging StarCraft Multi-Agent Challenge (SMAC) [34] benchmark [32, 31].
In this paper, we propose a novel approach called *FACtored Multi-Agent Centralised policy gradients* (FACMAC), which works for both discrete and continuous cooperative multi-agent tasks. Like MADDPG, our approach uses deep deterministic policy gradients [18] to learn decentralised policies. However, FACMAC learns a single *centralised but factored critic*, which factors the joint action-value function $Q_{tot}$ into per-agent utilities $Q_a$ that are combined via a non-linear monotonic function, as in the popular Q-learning algorithm QMIX [30]. While the critic used in COMA and MADDPG is also centralised, it is monolithic rather than factored.<sup>6</sup> Compared to learning a monolithic critic, our factored critic can potentially scale better to tasks with a larger number of agents and/or actions. In addition, in contrast to value-based approaches such as QMIX, there are no inherent constraints on factoring the critic. This allows us to employ rich value factorisations, including *nonmonotonic* ones, that value-based methods cannot directly use without forfeiting decentralisability or introducing other significant algorithmic changes. We thus also employ a nonmonotonic factorisation and empirically demonstrate that its increased representational capacity allows it to solve some tasks that cannot be solved with monolithic or monotonically factored critics.
In MADDPG, a separate policy gradient is derived for each agent individually, optimising its policy under the assumption that all other agents' actions are fixed. This can cause the agents to converge to sub-optimal policies in which no single agent wishes to change its action unilaterally. In FACMAC, we use a new *centralised* gradient estimator that optimises over the entire joint action space, rather than optimising over each agent's action space separately as in MADDPG. The agents' policies are thus trained as a single joint-action policy, which can enable learning of more coordinated behaviour, as well as the ability to escape sub-optimal solutions. The centralised gradient estimator fully reaps the benefits of learning a centralised critic by not implicitly marginalising over the actions of the other agents in the policy-gradient update. The gradient estimator used in MADDPG is also known to be vulnerable to relative overgeneralisation [44]. To overcome this issue, in our centralised gradient estimator, we sample all actions from all agents' current policies when evaluating the joint action-value function. We empirically show that MADDPG can quickly get stuck in local optima in a simple continuous matrix game, whereas our centralised gradient estimator finds the optimal policy. While Lyu et al. [22] recently showed that merely using a centralised critic (with per-agent gradients that optimise over each agent's actions separately) does not necessarily lead to better coordination, our centralised gradient estimator re-establishes the value of using centralised critics.
Most recent works on continuous MARL focus on evaluating their algorithms on the multi-agent particle environments [21], which feature a simple two-dimensional world with some basic simulated physics. To demonstrate FACMAC's scalability to more complex continuous domains and to stimulate more progress in continuous MARL, we introduce *Multi-Agent MuJoCo* (MAMuJoCo), a new, comprehensive benchmark suite that allows the study of decentralised continuous control. Based on the popular single-agent MuJoCo benchmark [4], MAMuJoCo features a wide variety of novel robotic control tasks in which multiple agents within a single robot have to solve a task cooperatively.
We evaluate FACMAC on variants of the multi-agent particle environments [21] and our novel MAMuJoCo benchmark, which both feature continuous action spaces, and on the challenging SMAC benchmark [34], which features discrete action spaces. Empirical results demonstrate FACMAC's superior performance over MADDPG and other baselines in all three domains. In particular, FACMAC scales better as the number of agents (and/or actions) and the complexity of the task increase. Results on SMAC show that FACMAC significantly outperforms stochastic DOP [43], which recently claimed to be the first multi-agent actor-critic method to outperform state-of-the-art value-based methods on SMAC, in all scenarios we tested. Moreover, our ablations and additional experiments demonstrate the advantages of both factoring the critic and using our centralised gradient estimator. We show that, compared to learning a monolithic critic, learning a factored critic can: 1) better take advantage of the centralised gradient estimator to optimise the agent policies when the number of agents and/or actions is large, and 2) leverage a nonmonotonic factorisation to solve tasks that cannot be solved with monolithic or monotonically factored critics.
<sup>6</sup>We use "centralised and monolithic critic" and "monolithic critic" interchangeably to refer to the centralised critic used in COMA and MADDPG, and "centralised but factored critic" and "factored critic" interchangeably to refer to the critic used in our approach.
# Background

We consider a fully cooperative multi-agent task in which a team of agents interacts with the same environment to achieve some common goal. It can be modeled as a decentralised partially observable Markov decision process (Dec-POMDP) [27] consisting of a tuple $G = \langle \mathcal{N}, S, U, P, r, \Omega, O, \gamma \rangle$. Here $\mathcal{N} \equiv \{1, \dots, n\}$ denotes the finite set of agents and $s \in S$ describes the true state of the environment. At each time step, each agent $a \in \mathcal{N}$ selects a discrete or continuous action $u_a \in U$, forming a joint action $\mathbf{u} \in \mathbf{U} \equiv U^n$. This results in a transition to the next state $s'$ according to the state transition function $P(s'|s,\mathbf{u}): S \times \mathbf{U} \times S \to [0,1]$ and a team reward $r(s,\mathbf{u})$; $\gamma \in [0,1)$ is a discount factor. Due to the partial observability, each agent $a \in \mathcal{N}$ draws an individual partial observation $o_a \in \Omega$ from the observation kernel $O(s,a)$. Each agent learns a stochastic policy $\pi_a(u_a|\tau_a)$ or a deterministic policy $\mu_a(\tau_a)$, conditioned only on its local action-observation history $\tau_a$. The joint stochastic policy $\boldsymbol{\pi}$ induces a joint action-value function $Q^{\boldsymbol{\pi}}(s_t,\mathbf{u}_t) = \mathbb{E}_{s_{t+1:\infty},\mathbf{u}_{t+1:\infty}}[R_t|s_t,\mathbf{u}_t]$, where $R_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$ is the discounted return. Similarly, the joint deterministic policy $\boldsymbol{\mu}$ induces a joint action-value function denoted $Q^{\boldsymbol{\mu}}(s_t,\mathbf{u}_t)$. We adopt the centralised training with decentralised execution (CTDE) paradigm [26, 16], where policy training can exploit extra global information that might be available and has the freedom to share information between agents during training. However, during execution, each agent must act with access only to its own action-observation history.
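The Dec-POMDP ingredients above can be made concrete with a toy environment. The following sketch (the class name and dynamics are illustrative, not from the paper) has two agents on a 1-D line who must meet; each agent sees only its own position, but both share a single team reward:

```python
class TwoAgentLineDecPOMDP:
    """Minimal Dec-POMDP sketch (illustrative): two agents on a 1-D line
    of cells must meet. The true state s is the pair of positions; the
    returned joint observation's entry a is the only part agent a sees."""

    def __init__(self, length=5):
        self.length = length
        self.reset()

    def reset(self):
        self.pos = [0, self.length - 1]          # true state s
        return self._obs()

    def _obs(self):
        # o_a drawn from the observation kernel O(s, a): own position only
        return [self.pos[0], self.pos[1]]

    def step(self, joint_action):                # each u_a in {-1, 0, +1}
        for a, u in enumerate(joint_action):
            self.pos[a] = min(self.length - 1, max(0, self.pos[a] + u))
        # single team reward r(s, u): +1 when the agents meet
        r = 1.0 if self.pos[0] == self.pos[1] else -0.1
        return self._obs(), r
```

Starting from opposite ends of a length-5 line, two joint steps of `[+1, -1]` bring both agents to cell 2 and yield the team reward.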
**VDN and QMIX.** VDN [38] and QMIX [30] are Q-learning algorithms for cooperative MARL tasks with discrete actions. They both aim to efficiently learn a centralised but factored action-value function $Q_{tot}^{\pi}$, using CTDE. To ensure consistency between the centralised and decentralised policies, VDN and QMIX factor $Q_{tot}^{\pi}$ assuming additivity and monotonicity, respectively. More specifically, VDN factors $Q_{tot}^{\pi}$ into a sum of the per-agent utilities: $Q_{tot}^{\pi}(\tau, \mathbf{u}; \phi) = \sum_{a=1}^{n} Q_{a}^{\pi_a}(\tau_a, u_a; \phi_a)$. QMIX, however, represents $Q_{tot}^{\pi}$ as a continuous monotonic mixing function of the per-agent utilities: $Q_{tot}^{\pi}(\tau, \mathbf{u}, s; \phi, \psi) = f_{\psi}(s, Q_1^{\pi_1}(\tau_1, u_1; \phi_1), \dots, Q_n^{\pi_n}(\tau_n, u_n; \phi_n))$, where $\frac{\partial f_{\psi}}{\partial Q_a} \geq 0, \forall a \in \mathcal{N}$. This is sufficient to ensure that the global arg max performed on $Q_{tot}^{\pi}$ yields the same result as a set of individual arg max operations performed on each $Q_a^{\pi_a}$. Here $f_{\psi}$ is approximated by a monotonic mixing network, parameterised by $\psi$. Monotonicity can be guaranteed by non-negative mixing weights. These weights are generated by separate *hypernetworks* [10], which condition on the full state $s$. QMIX is trained end-to-end to minimise the following loss:
$$\mathcal{L}(\boldsymbol{\phi}, \psi) = \mathbb{E}_{\mathcal{D}} \left[ \left( y^{tot} - Q_{tot}^{\boldsymbol{\pi}}(\boldsymbol{\tau}, \mathbf{u}, s; \boldsymbol{\phi}, \psi) \right)^{2} \right], \tag{1}$$
where the bootstrapping target $y^{tot} = r + \gamma \max_{\mathbf{u}'} Q_{tot}^{\pi}(\boldsymbol{\tau'}, \mathbf{u}', s'; \boldsymbol{\phi}^-, \psi^-)$ . Here r is the global reward, and $\boldsymbol{\phi}^-$ and $\psi^-$ are parameters of the target Q and mixing network, respectively, as in DQN [24]. The expectation is estimated with a minibatch of transitions sampled from an experience replay buffer $\mathcal{D}$ [19]. During execution, each agent selects actions greedily with respect to its own $Q_a^{\pi_a}$ .
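A minimal sketch of this monotonic mixing. The state-conditioned hypernetwork here is a deliberately simple, purely illustrative function; the essential part is the `abs()`, since non-negative weights are what guarantee $\frac{\partial f_{\psi}}{\partial Q_a} \geq 0$:

```python
import math

def hyper_weights(state, n_agents):
    """Toy hypernetwork: maps the state to one mixing weight per agent.
    The sine-based form is an illustrative assumption; only the abs()
    matters, as non-negativity is what enforces monotonicity."""
    return [abs(math.sin(state * (a + 1))) + 0.1 for a in range(n_agents)]

def q_tot(state, agent_qs):
    """QMIX-style monotonic mix: non-decreasing in every agent utility Q_a."""
    w = hyper_weights(state, len(agent_qs))
    b = 0.5 * state                       # state-dependent bias
    return sum(wa * qa for wa, qa in zip(w, agent_qs)) + b
```

Because every weight is strictly positive, raising any single agent's utility can only raise `q_tot`, which is exactly the property that makes the per-agent arg max consistent with the global one.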
**MADDPG.** MADDPG [21] is an extension of DDPG [18] to multi-agent settings. It is an off-policy actor-critic method that uses the paradigm of CTDE to learn deterministic policies in continuous action spaces. In MADDPG, a separate actor and critic are learned for each agent, so that arbitrary reward functions can be accommodated; it is therefore applicable to cooperative, competitive, and mixed settings. We assume each agent $a$ has a deterministic policy $\mu_a(\tau_a; \theta_a)$, parameterised by $\theta_a$ (abbreviated as $\mu_a$), and let $\boldsymbol{\mu} = \{\mu_a(\tau_a; \theta_a)\}_{a=1}^n$ be the set of all agent policies. In MADDPG, a centralised and monolithic critic that estimates the joint action-value function $Q_a^{\boldsymbol{\mu}}(s, u_1, \dots, u_n; \phi_a)$ is learned for each agent $a$ separately. The critic is said to be centralised as it utilises information only available during the centralised training phase, namely the global state $s$<sup>7</sup> and the actions of all agents $u_1, \dots, u_n$, to estimate the joint action-value function $Q_a^{\boldsymbol{\mu}}$, which is parameterised by $\phi_a$. This joint action-value function is trained by minimising the following loss:
$$\mathcal{L}(\phi_a) = \mathbb{E}_{\mathcal{D}} \left[ \left( y^a - Q_a^{\mu}(s, u_1, \dots, u_n; \phi_a) \right)^2 \right], \tag{2}$$
where $y^a = r_a + \gamma Q_a^{\mu}(s', u'_1, \dots, u'_n|_{u'_a = \mu_a(\tau_a; \theta_a^-)}; \phi_a^-)$. Here $r_a$ is the reward received by agent $a$, $u'_1, \dots, u'_n$ are the actions selected by the target policies with delayed parameters $\theta_a^-$, and $\phi_a^-$ are the parameters of the target critic. The replay buffer $\mathcal{D}$ contains the transition tuples $(s, s', u_1, \dots, u_n, r_1, \dots, r_n)$.
<sup>7</sup>If the global state *s* is not available, the centralised and monolithic critic can condition on the joint observations or action-observation histories.
The following policy gradient can be calculated individually to update the policy of each agent a:
$$\nabla_{\theta_a} J(\mu_a) = \mathbb{E}_{\mathcal{D}} \Big[ \nabla_{\theta_a} \mu_a(\tau_a) \nabla_{u_a} Q_a^{\mu}(s, u_1, \dots, u_n) \big|_{u_a = \mu_a(\tau_a)} \Big], \tag{3}$$
where the current agent a's action $u_a$ is sampled from its current policy $\mu_a$ when evaluating the joint action-value function $Q_a^{\mu}$ , while all other agents' actions are sampled from the replay buffer $\mathcal{D}$ .
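The per-agent gradient of (3) can be illustrated with scalar actions and a toy critic, using finite differences in place of autograd. Note how only the chosen agent's action is recomputed from its current policy, while the other actions stay fixed at the replay-sampled values (the function names and the linear policy $u_a = \theta_a o_a$ are illustrative assumptions, not the paper's setup):

```python
def per_agent_gradient(critic, policies, obs, buffer_actions, agent, eps=1e-5):
    """MADDPG-style per-agent policy gradient, Eq. (3), by central
    finite differences. Toy single-parameter policies: u_a = theta_a * o_a.
    Only agent `agent`'s action comes from its current policy; all other
    actions are taken verbatim from the replay sample `buffer_actions`."""
    def q_of_theta(theta):
        u = list(buffer_actions)
        u[agent] = theta * obs[agent]      # u_a = mu_a(o_a; theta)
        return critic(u)
    th = policies[agent]
    return (q_of_theta(th + eps) - q_of_theta(th - eps)) / (2 * eps)
```

For a quadratic critic the central difference is exact, so the sketch can be checked against the analytic derivative; it also makes the paper's point visible: the gradient for one agent is computed as if the others' (possibly stale) replay actions were fixed.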
# FACMAC

In this section, we propose a new approach called *FACtored Multi-Agent Centralised policy gradients* (FACMAC) that uses a centralised but factored critic and a centralised gradient estimator to learn continuous cooperative tasks. We start by describing the idea of learning a centralised but factored critic. We then discuss our new centralised gradient estimator and demonstrate its benefit in a simple continuous matrix game. Finally, we discuss how we adapt our method to discrete cooperative tasks.
Learning a centralised and monolithic critic conditioning on the global state and the joint action can be difficult and/or impractical when the number of agents and/or actions is large [11]. We thus employ value function factorisation in the multi-agent actor-critic framework to enable scalable learning of a centralised critic in Dec-POMDPs. Another key advantage of adopting value factorisation in an actor-critic framework is that, compared to value-based methods, it allows for a more flexible factorisation as the critic's design is not constrained. One can employ any type of factorisation, including nonmonotonic factorisations that value-based methods cannot directly use without forfeiting decentralisability or introducing other significant algorithmic changes.
Specifically, in FACMAC, all agents share a centralised critic $Q_{tot}^{\mu}$ that is factored as:
$$Q_{tot}^{\mu}(\boldsymbol{\tau}, \mathbf{u}, s; \boldsymbol{\phi}, \psi) = g_{\psi}\left(s, \{Q_a^{\mu_a}(\tau_a, u_a; \phi_a)\}_{a=1}^n\right),\tag{4}$$
where $\phi$ and $\phi_a$ are parameters of the joint action-value function $Q^{\mu}_{tot}$ and the agent-wise utilities $Q^{\mu_a}_a$, respectively. In our canonical implementation, which we refer to as FACMAC, $g_{\psi}$ is a non-linear monotonic function parameterised as a mixing network with parameters $\psi$, as in QMIX [30]. To evaluate the policy, the centralised but factored critic is trained by minimising the following loss:
$$\mathcal{L}(\boldsymbol{\phi}, \psi) = \mathbb{E}_{\mathcal{D}} \left[ \left( y^{tot} - Q_{tot}^{\boldsymbol{\mu}}(\boldsymbol{\tau}, \mathbf{u}, s; \boldsymbol{\phi}, \psi) \right)^{2} \right], \tag{5}$$
where $y^{tot} = r + \gamma Q_{tot}^{\mu}(\tau', \mu(\tau'; \theta^-), s'; \phi^-, \psi^-)$ . Here $\mathcal{D}$ is the replay buffer, and $\theta^-, \phi^-$ , and $\psi^-$ are parameters of the target actors, critic, and mixing network, respectively.
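Equations (4) and (5) can be sketched as follows. Fixed mixer weights stand in for the hypernetwork outputs, and the target-policy actions are assumed to be already baked into the target utilities `next_qs_target` (both simplifications; FACMAC generates the weights from the state and uses separate target networks):

```python
def mix(agent_qs, ws, b):
    """Monotonic mixer g_psi of Eq. (4): non-negative weights (via abs)
    plus a bias. In FACMAC, ws and b come from state-conditioned
    hypernetworks; fixed values keep this sketch small."""
    return sum(abs(w) * q for w, q in zip(ws, agent_qs)) + b

def facmac_critic_loss(r, gamma, qs, next_qs_target, ws, b):
    """Squared TD error of Eq. (5): the target y_tot mixes the target
    networks' utilities at the target policies' next actions (assumed
    precomputed in next_qs_target)."""
    y_tot = r + gamma * mix(next_qs_target, ws, b)
    q_tot = mix(qs, ws, b)
    return (y_tot - q_tot) ** 2
```

With unit weights and zero bias, `mix` reduces to the VDN sum, which is how the FACMAC-vdn ablation discussed below relates to the canonical mixer.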
Leveraging the flexibility of our approach, namely the lack of restrictions on the form of the critic, we also explore a new nonmonotonic factorisation with full representational capacity. The joint action-value function $Q^{\mu}_{tot}$ is represented as a non-linear, non-monotonic mixing function of the per-agent utilities $Q^{\mu_a}_a$. This nonmonotonic mixing function is parameterised as a mixing network with a similar architecture to $g_{\psi}$ in FACMAC, but without the monotonicity constraint enforced by non-negative weights. We refer to this method as FACMAC-nonmonotonic. Additionally, to better understand the advantages of factoring a centralised critic, we also explore two simpler factorisation schemes: factoring the centralised critic $Q^{\mu}_{tot}$ into a sum of the per-agent utilities $Q^{\mu_a}_a$ as in VDN (FACMAC-vdn), and into such a sum plus a state-dependent bias (FACMAC-vdn-s). Our value factorisation technique is general and can readily be applied to any multi-agent actor-critic algorithm that learns a centralised and monolithic critic [21, 7, 6].
To update the decentralised policy of each agent, a naive adaptation of the deterministic policy gradient used by MADDPG (shown in (3)) is
$$\nabla_{\theta_a} J(\mu_a) = \mathbb{E}_{\mathcal{D}} \Big[ \nabla_{\theta_a} \mu_a(\tau_a) \nabla_{u_a} Q_{tot}^{\boldsymbol{\mu}}(\boldsymbol{\tau}, u_1, \dots, u_n, s) \big|_{u_a = \mu_a(\tau_a)} \Big]. \tag{6}$$
Compared to the policy gradient used in MADDPG, the updates of all agents' individual deterministic policies now depend on the single shared factored critic $Q_{tot}^{\mu}$ , as opposed to learning and utilising a
![](_page_4_Figure_0.jpeg)
Figure 1: The overall FACMAC architecture. (a) The decentralised policy networks. (b) The centralised but factored critic. (c) The non-linear monotonic mixing function.
monolithic critic $Q_a^{\mu}$ for each agent. However, there are two main problems with both policy gradients. First, each agent optimises its own policy assuming all other agents' actions are fixed, which can cause the agents to converge to sub-optimal policies in which no single agent wishes to change its action unilaterally. Second, both policy gradients make the corresponding methods vulnerable to relative overgeneralisation [44]: when agent $a$ ascends the policy gradient based on $Q_a^{\mu}$ or $Q_{tot}^{\mu}$, only its own action $u_a$ is sampled from its current policy $\mu_a$, while all other agents' actions are sampled from the replay buffer $\mathcal{D}$. The other agents' actions thus might be drastically different from the actions their current policies would choose. This can cause the agents to converge to sub-optimal actions that appear to be a better choice when considering the effect of potentially arbitrary actions from the other collaborating agents.
In FACMAC, we use a new *centralised* gradient estimator that optimises over the entire joint action space, rather than optimising over each agent's actions separately as in both (3) and (6), to achieve better coordination among agents. In addition, to overcome relative overgeneralisation, when calculating the policy gradient we sample all actions from all agents' current policies when evaluating $Q_{tot}^{\mu}$ . Our centralised policy gradient can thus be estimated as
$$\nabla_{\theta} J(\boldsymbol{\mu}) = \mathbb{E}_{\mathcal{D}} \left[ \nabla_{\theta} \boldsymbol{\mu} \nabla_{\boldsymbol{\mu}} Q_{tot}^{\boldsymbol{\mu}} (\boldsymbol{\tau}, \mu_1(\tau_1), \dots, \mu_n(\tau_n), s) \right], \tag{7}$$
where $\mu = \{\mu_1(\tau_1; \theta_1), \dots, \mu_n(\tau_n; \theta_n)\}$ is the set of all agents' current policies and all agents share the same actor network parameterised by $\theta$ . However, it is not a requirement of our method for all agents to share parameters in this manner.
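A sketch of the centralised estimator in (7), again via finite differences with illustrative linear policies $u_a = \theta_a o_a$. In contrast to the per-agent gradient, every agent's action is recomputed from the current policies before querying $Q_{tot}$, so the gradient is taken over the entire joint action:

```python
def centralised_gradient(q_tot, obs, thetas, eps=1e-5):
    """Centralised policy gradient of Eq. (7) by central finite
    differences (toy stand-in for backprop through the factored critic).
    All actions come from the *current* policies: u_a = theta_a * o_a."""
    def j(ths):
        return q_tot([t * o for t, o in zip(ths, obs)])
    grads = []
    for a in range(len(thetas)):
        up, dn = list(thetas), list(thetas)
        up[a] += eps
        dn[a] -= eps
        grads.append((j(up) - j(dn)) / (2 * eps))
    return grads
```

With a coordination-shaped critic such as $Q_{tot}(\mathbf{u}) = -(u_1 - u_2)^2$, the estimator pushes both agents toward each other simultaneously, whereas each per-agent gradient would treat the other agent's replay action as fixed.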
If the critic factorisation is linear, as in FACMAC-vdn, then the centralised gradient is equivalent to the per-agent gradients that optimise over each agent's actions separately. This is explored in more detail by DOP [43], which restricts the factored critic to be linear to exploit this equivalence. A major benefit of our method then, is that it does not place any such restrictions on the critic. As remarked by Lyu et al. [22], merely using a centralised critic with per-agent gradients does not necessarily lead to better coordination between agents due to the two problems outlined above. Our centralised gradient estimator, which now optimises over the entire joint action space, is required in order to fully take advantage of a centralised critic.
Figure 1 illustrates the overall FACMAC architecture. For each agent a, there is one policy network that decides which individual action (discrete or continuous) to take. There is also one critic network for each agent a that estimates the individual agent utilities $Q_a$ , which are then combined into the joint action-value function $Q_{tot}$ via a non-linear monotonic mixing function as in QMIX. $Q_{tot}$ is then used by our centralised gradient estimator to help the actor update its policy parameters.
To show the benefits of our new centralised gradient estimator, we compare MADDPG with the centralised policy gradient (CPG) against the original MADDPG on a simple continuous cooperative matrix game. Figure 2 (left) illustrates the continuous matrix game with two agents. There is a narrow path (shown in red) from the origin (0,0) to (1,1), along which the reward gradually increases. Everywhere else there is a small punishment for moving away from the origin, whose magnitude increases with the distance from the origin. Experimental results are shown in Figure 2 (right). MADDPG quickly gets stuck in the local optimum within 200k timesteps, while MADDPG (with CPG) robustly converges
![](_page_5_Figure_0.jpeg)
Figure 2: Left: The Continuous Matrix Game. Right: Mean test return on Continuous Matrix Game.
![](_page_5_Figure_2.jpeg)
Figure 3: **Left:** Per-agent policy gradient at the origin. For agent 1 (similarly for agent 2) it is 0 since the gradient term assumes the other agent's action to be fixed and thus it only considers the relative improvements along the dotted line. **Right:** Our Centralised Policy Gradient correctly determines the gradient for improving the joint action.
to the optimal policy. Figure 3 visualises the differences between the per-agent and centralised policy gradients, demonstrating that the centralised policy gradient is necessary to take advantage of the centralised critic. In Section 5, we further demonstrate the benefits of this centralised gradient estimator in more complex tasks.
As FACMAC requires differentiable policies and the process of sampling discrete actions from a categorical distribution is not differentiable, we use the Gumbel-Softmax estimator [12] to enable efficient learning of FACMAC on cooperative tasks with discrete actions. The Gumbel-Softmax is a continuous distribution that approximates samples from a categorical distribution and is differentiable. It is a relaxation of the Gumbel-Max trick, which reparameterises a stochastic categorical policy as a deterministic function of the policy parameters and independent noise sampled from a standard Gumbel distribution.
Moreover, we use the Straight-Through Gumbel-Softmax estimator [12] to ensure the action dynamics during training and evaluation are the same. Specifically, during training, we sample discrete actions $u_a$ from the original categorical distribution in the forward pass, but use the continuous Gumbel-Softmax sample $x_a$ in the backward pass to approximate the gradients: $\nabla_{\theta_a} u_a \approx \nabla_{\theta_a} x_a$. We can then update the agent's policy using our centralised policy gradient: $\nabla_{\theta} J(\theta) \approx \mathbb{E}_{\mathcal{D}} \Big[ \nabla_{\theta} x \nabla_{x} Q_{tot}^{x}(\tau, x_1, \dots, x_n, s) \Big]$, where $x = \{x_1, \dots, x_n\}$ is the set of continuous samples that approximate the discrete agent actions.
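The two pieces can be sketched in plain Python (in practice one would use a framework implementation such as PyTorch's `gumbel_softmax`). This sketch omits autograd, so the straight-through backward pass, which reuses the soft sample's gradient, is only described in a comment:

```python
import math
import random

def gumbel_softmax_sample(logits, tau=1.0, rng=random):
    """Gumbel-Softmax: a soft, differentiable approximation of a
    one-hot categorical sample. tau controls how sharp the sample is."""
    # Gumbel(0, 1) noise via inverse transform sampling
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(z)                             # stabilised softmax
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

def straight_through(soft):
    """Forward pass of the straight-through estimator: a hard one-hot of
    the argmax. In an autograd framework the backward pass would flow
    through the soft sample instead (hard + soft - soft.detach())."""
    k = soft.index(max(soft))
    return [1.0 if i == k else 0.0 for i in range(len(soft))]
```

The hard sample keeps execution-time action dynamics discrete, while gradients during training are taken with respect to the soft sample $x_a$.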
# Multi-Agent MuJoCo

The evaluation of continuous MARL algorithms has recently been largely limited to the simple multi-agent particle environments [21]. We believe the lack of diverse continuous benchmarks is one factor limiting progress in continuous MARL. To demonstrate FACMAC's scalability to more complex continuous domains and to stimulate more progress in continuous MARL, we develop *Multi-Agent MuJoCo* (MAMuJoCo), a novel benchmark for continuous cooperative multi-agent robotic control. Starting from the popular fully observable single-agent robotic MuJoCo [39] control suite included with OpenAI Gym [4], we create a wide variety of novel scenarios in which multiple agents within a single robot have to solve a task cooperatively.
![](_page_6_Figure_0.jpeg)
Figure 4: Agent partitionings for MAMuJoCo environments: A) Manyagent Swimmer, B) 3- Agent Hopper [3x1], C) 2-Agent HalfCheetah [2x3], D) 6-Agent HalfCheetah [6x1], E) 2-Agent Humanoid and 2-Agent HumanoidStandup (each [1x9,1x8]), F) 2-Agent Walker [2x3], G) 2-Agent Reacher [2x1], H) 2-Agent Ant [2x4], I) 2-Agent Ant Diag [2x4], J) 4-Agent Ant [4x2], and K) Manyagent Ant. Colours indicate agent partitionings. Each joint corresponds to a single controllable motor. Split partitions indicate shared body segments. Square brackets indicate [(number of agents) x (joints per agent)]. Joint IDs are in order of definition in the corresponding OpenAI Gym XML asset files [4]. Global joints indicate degrees of freedom of the center of mass of the composite robotic agent.
Single-robot multi-agent tasks in MAMuJoCo arise by first representing a given single robotic agent as a *body graph*, where vertices (joints) are connected by adjacent edges (body segments). We then partition the body graph into disjoint sub-graphs, one for each agent, each of which contains one or more joints that can be controlled. Figure 4 shows agent partitionings for MAMuJoCo environments.
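The partitioning step can be sketched as a contiguous split of the joint list, mirroring the `[n_agents x joints_per_agent]` notation of Figure 4 (an illustrative stand-in: MAMuJoCo's actual partitions follow the robot's body graph rather than raw joint order):

```python
def partition_joints(joint_ids, n_agents):
    """Split a robot's joint list into contiguous per-agent sub-graphs,
    e.g. 6 HalfCheetah joints -> [2x3] or [6x1]. Requires an even split;
    real MAMuJoCo partitions are defined on the body graph instead."""
    per_agent, rem = divmod(len(joint_ids), n_agents)
    if rem:
        raise ValueError("joints do not divide evenly among agents")
    return [joint_ids[i * per_agent:(i + 1) * per_agent]
            for i in range(n_agents)]
```

For example, six joints split two ways gives the [2x3] partitioning, while splitting six ways gives [6x1], with one motor per agent.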
We introduce multiple agents within a single robot because partial observability naturally arises in a single robot through latency, bandwidth constraints, and noisy sensors. Even if communication is free and instant when it works, we want policies that keep working when communication channels within the robot malfunction. Without access to the exact full state, local decision rules become more important, and introducing autonomous agents at individual decision points (e.g., each physical component of the robot) is reasonable and beneficial. This also makes the system more robust to single-point failures (e.g., broken sensors) and more adaptive and flexible, as new independent decision points (and thus agents) can be added easily. This design offers further benefits. It facilitates comparisons to existing literature on both the fully observable single-agent domain [28] and settings with low-bandwidth communication [42]. More importantly, it allows the study of novel MARL algorithms for decentralised coordination in isolation (scenarios with multiple robots may add confounding factors such as spatial exploration), which is currently a gap in the research literature.
MAMuJoCo also includes scenarios with a larger and more flexible number of agents, which takes inspiration from modular robotics [47, 17]. Compared to traditional robots, modular robots are more versatile, configurable, and scalable as it is easier to replace or add modules to change the degrees of freedom. We therefore develop two scenarios named ManyAgent Swimmer and ManyAgent Ant, in which one can configure an arbitrarily large number of agents (within the memory limits), each controlling a consecutive segment of arbitrary length. This design is similar to many practical modular snake robots [45, 25], which mimic snake-like motion for diverse tasks such as navigating rough terrains and urban search and rescue. See Appendix A for more details about MAMuJoCo.