Title: Attention-Based Recurrence for Multi-Agent Reinforcement Learning under Stochastic Partial Observability

URL Source: https://arxiv.org/html/2301.01649

arXiv:2301.01649v6 [cs.MA] 28 Dec 2023 (License: arXiv.org perpetual non-exclusive license)

# Attention-Based Recurrence for Multi-Agent Reinforcement Learning under Stochastic Partial Observability

Thomy Phan, Fabian Ritz, Philipp Altmann, Maximilian Zorn, Jonas Nüßlein, Michael Kölle, Thomas Gabor, Claudia Linnhoff-Popien

## Abstract

Stochastic partial observability poses a major challenge for decentralized coordination in multi-agent reinforcement learning but is largely neglected in state-of-the-art research due to a strong focus on state-based centralized training for decentralized execution (CTDE) and benchmarks that lack sufficient stochasticity like StarCraft Multi-Agent Challenge (SMAC). In this paper, we propose Attention-based Embeddings of Recurrence In multi-Agent Learning (AERIAL) to approximate value functions under stochastic partial observability. AERIAL replaces the true state with a learned representation of multi-agent recurrence, considering more accurate information about decentralized agent decisions than state-based CTDE. We then introduce MessySMAC, a modified version of SMAC with stochastic observations and higher variance in initial states, to provide a more general and configurable benchmark regarding stochastic partial observability. We evaluate AERIAL in Dec-Tiger as well as in a variety of SMAC and MessySMAC maps, and compare the results with state-based CTDE. Furthermore, we evaluate the robustness of AERIAL and state-based CTDE against various stochasticity configurations in MessySMAC.

Keywords: Dec-POMDP, Stochastic Partial Observability, Multi-Agent Learning, Recurrence, Self-Attention

## 1 Introduction

A wide range of real-world applications like fleet management, industry 4.0, or communication networks can be formulated as a decentralized partially observable Markov decision process (Dec-POMDP) representing a cooperative multi-agent system (MAS), where multiple agents have to coordinate to achieve a common goal (Oliehoek & Amato, 2016). Stochastic partial observability poses a major challenge for decentralized coordination in Dec-POMDPs due to noisy sensors and potentially high variance in initial states, both of which are common in the real world (Kaelbling et al., 1998; Oliehoek & Amato, 2016).

Multi-agent reinforcement learning (MARL) is a general approach to tackle Dec-POMDPs, with remarkable progress in recent years (Wang et al., 2021; Wen et al., 2022). State-of-the-art MARL is based on centralized training for decentralized execution (CTDE), where training takes place in a laboratory or a simulator with access to global information (Lowe et al., 2017; Foerster et al., 2018). For example, state-based CTDE exploits true state information to learn a centralized value function in order to derive coordinated policies for decentralized decision making (Rashid et al., 2018; Yu et al., 2022). Due to its effectiveness in the StarCraft Multi-Agent Challenge (SMAC), the current de facto standard for MARL evaluation, state-based CTDE has become very popular and has been widely considered an adequate approach to general Dec-POMDPs for more than half a decade, leading to the development of many increasingly complex algorithms (Lyu et al., 2021, 2022).

However, merely relying on state-based CTDE and SMAC in MARL research can be a pitfall in practice as stochastic partial observability is largely neglected – despite being an important aspect in Dec-POMDPs (Lyu et al., 2022):

From an algorithm perspective, purely state-based value functions are insufficient to evaluate and adapt multi-agent behavior, since all agents make decisions on a completely different basis, i.e., individual histories of noisy observations and actions. True Dec-POMDP value functions, in contrast, consider more accurate closed-loop information about decentralized agent decisions (Oliehoek et al., 2008). Furthermore, the optimal state-based value function represents an upper bound of the true optimal Dec-POMDP value function; thus, state-based CTDE can result in overly optimistic behavior in general Dec-POMDPs (Lyu et al., 2022).

From a benchmark perspective, SMAC has very limited stochastic partial observability due to deterministic observations and low variance in initial states (Ellis et al., 2022). Therefore, SMAC scenarios only represent simplified special cases rather than general Dec-POMDP challenges and are insufficient for assessing the practicability of MARL.

In this paper, we propose Attention-based Embeddings of Recurrence In multi-Agent Learning (AERIAL) to approximate value functions under agent-wise stochastic partial observability. AERIAL replaces the true state with a learned representation of multi-agent recurrence, considering more accurate closed-loop information about decentralized agent decisions than state-based CTDE. We then introduce MessySMAC, a modified version of SMAC with stochastic observations and higher variance in initial states, to provide a more general and configurable Dec-POMDP benchmark for more adequate evaluation. Our contributions are as follows:

We formulate and discuss the concepts of AERIAL w.r.t. stochastic partial observability in Dec-POMDPs.

We introduce MessySMAC to enable systematic evaluation under various stochasticity configurations.

We evaluate AERIAL in Dec-Tiger, a small and traditional Dec-POMDP benchmark, as well as in a variety of original SMAC and MessySMAC maps, and compare the results with state-based CTDE. Our results show that AERIAL achieves competitive performance in original SMAC, and superior performance in Dec-Tiger and MessySMAC. Furthermore, we evaluate the robustness of AERIAL and state-based CTDE against various stochasticity configurations in MessySMAC.

## 2 Background

### 2.1 Decentralized POMDPs

We formulate cooperative MAS problems as a Dec-POMDP $M = \langle \mathcal{D}, \mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \mathcal{Z}, \Omega, b_0 \rangle$, where $\mathcal{D} = \{1, \dots, N\}$ is a set of agents $i$, $\mathcal{S}$ is a set of (true) states $s_t$ at time step $t$, $\mathcal{A} = \langle \mathcal{A}_i \rangle_{i \in \mathcal{D}}$ is the set of joint actions $\mathbf{a_t} = \langle a_{t,1}, \dots, a_{t,N} \rangle = \langle a_{t,i} \rangle_{i \in \mathcal{D}}$, $\mathcal{T}(s_{t+1} | s_t, \mathbf{a_t})$ is the state transition probability, $r_t = \mathcal{R}(s_t, \mathbf{a_t}) \in \mathbb{R}$ is the shared reward, $\mathcal{Z}$ is a set of local observations $z_{t,i}$ for each agent $i \in \mathcal{D}$, $\Omega(\mathbf{z_{t+1}} | \mathbf{a_t}, s_{t+1})$ is the probability of joint observation $\mathbf{z_{t+1}} = \langle z_{t+1,i} \rangle_{i \in \mathcal{D}} \in \mathcal{Z}^N$, and $b_0$ is the probability distribution over initial states $s_0$ (Oliehoek & Amato, 2016). Each agent $i$ maintains a local history $\tau_{t,i} \in (\mathcal{Z} \times \mathcal{A}_i)^t$, and $\boldsymbol{\tau_t} = \langle \tau_{t,i} \rangle_{i \in \mathcal{D}}$ is the joint history. A belief state $b(s_t | \boldsymbol{\tau_t})$ is a sufficient statistic for joint history $\boldsymbol{\tau_t}$ and defines a probability distribution over true states $s_t$, updatable by Bayes' theorem (Kaelbling et al., 1998). Joint quantities are written in boldface.

Stochastic partial observability in $M$ is given by observation stochasticity w.r.t. $\Omega$ and initialization stochasticity w.r.t. $b_0$.
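The Bayesian belief update mentioned above can be sketched in a few lines; the transition and observation models `T` and `Omega` and the two-state, tiger-style example are illustrative stand-ins, not part of the formal model:

```python
def belief_update(b, joint_action, joint_obs, T, Omega, states):
    """One Bayes step: b'(s') ∝ Omega(z | a, s') * Σ_s T(s' | s, a) * b(s)."""
    new_b = {}
    for s_next in states:
        pred = sum(T(s_next, s, joint_action) * b[s] for s in states)
        new_b[s_next] = Omega(joint_obs, joint_action, s_next) * pred
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()} if norm > 0 else b

# Tiny two-state illustration: the state never changes under "listen",
# and each joint observation matches the true state with 85% accuracy.
states = ["sL", "sR"]
T = lambda s_next, s, a: 1.0 if s_next == s else 0.0
Omega = lambda z, a, s: 0.85 if z[1] == s[1] else 0.15  # z matches tiger side

b0 = {"sL": 0.5, "sR": 0.5}
b1 = belief_update(b0, ("li", "li"), "zR", T, Omega, states)
# After one observation pointing right, the belief shifts toward sR.
```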

A joint policy $\boldsymbol{\pi} = \langle \pi_i \rangle_{i \in \mathcal{D}}$ with decentralized or local policies $\pi_i$ defines a deterministic mapping from joint histories to joint actions $\boldsymbol{\pi}(\boldsymbol{\tau_t}) = \langle \pi_i(\tau_{t,i}) \rangle_{i \in \mathcal{D}} \in \mathcal{A}$. The return is defined by $G_t = \sum_{c=0}^{T-1} \gamma^c r_{t+c}$, where $T$ is the horizon and $\gamma \in [0, 1]$ is the discount factor. $\boldsymbol{\pi}$ can be evaluated with a value function $Q^{\boldsymbol{\pi}}(\boldsymbol{\tau_t}, \mathbf{a_t}) = \mathbb{E}_{b_0, \mathcal{T}, \Omega}[G_t | \boldsymbol{\tau_t}, \mathbf{a_t}, \boldsymbol{\pi}]$. The goal is to find an optimal joint policy $\boldsymbol{\pi}^*$ with optimal value function $Q^{\boldsymbol{\pi}^*} = Q^*$ as defined in the next section.

### 2.2 Optimal Value Functions and Policies

Fully Observable MAS

In MDP-like settings with a centralized controller, the optimal value function $Q^*_{MDP}$ is defined by (Watkins & Dayan, 1992; Boutilier, 1996):

$$Q^*_{MDP}(s_t, \mathbf{a_t}) = r_t + \gamma \sum_{s_{t+1} \in \mathcal{S}} \mathcal{T}(s_{t+1} | s_t, \mathbf{a_t}) \max_{\mathbf{a_{t+1}} \in \mathcal{A}} Q^*_{MDP}(s_{t+1}, \mathbf{a_{t+1}}) \quad (1)$$

Due to full observability, $Q^*_{MDP}$ does not depend on $\boldsymbol{\tau_t}$ but on $s_t$. Thus, decentralized observations $z_{t,i}$ and probabilities according to $\Omega$ and $b_0$ are not considered at all. An optimal (joint) policy $\boldsymbol{\pi}^*_{MDP}$ of the centralized controller simply maximizes $Q^*_{MDP}$ for all $s_t$ (Watkins & Dayan, 1992):

$$\boldsymbol{\pi}^*_{MDP} = \operatorname*{argmax}_{\boldsymbol{\pi}_{MDP}} \sum_{s_t \in \mathcal{S}} Q^*_{MDP}(s_t, \boldsymbol{\pi}_{MDP}(s_t)) \quad (2)$$
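Eq. 1 can be computed by standard finite-horizon dynamic programming over true states and joint actions. A minimal sketch on a hypothetical one-state MAS; the toy reward (agents are paid for matching actions) and all names are assumptions for illustration:

```python
import itertools

def q_mdp(states, joint_actions, T, R, gamma, horizon):
    """Finite-horizon backup of Eq. 1:
    Q*(s, a) = R(s, a) + γ Σ_s' T(s' | s, a) max_a' Q*(s', a')."""
    V = {s: 0.0 for s in states}  # zero terminal value at the horizon
    Q = {}
    for _ in range(horizon):
        Q = {(s, a): R(s, a) + gamma * sum(T(s2, s, a) * V[s2] for s2 in states)
             for s in states for a in joint_actions}
        V = {s: max(Q[(s, a)] for a in joint_actions) for s in states}
    return Q

# Hypothetical one-state MAS with N=2 agents, actions {0,1}, reward 1 iff both match.
states = ["s"]
joint_actions = list(itertools.product([0, 1], repeat=2))
T = lambda s2, s, a: 1.0
R = lambda s, a: 1.0 if a[0] == a[1] else 0.0
Q = q_mdp(states, joint_actions, T, R, gamma=0.9, horizon=3)
```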

Partially Observable MAS

In general Dec-POMDPs, where true states are not fully observable and only decentralized controllers or agents exist, the optimal value function $Q^*$ is defined by (Oliehoek et al., 2008):

$$Q^*(\boldsymbol{\tau_t}, \mathbf{a_t}) = \sum_{s_t \in \mathcal{S}} b(s_t | \boldsymbol{\tau_t}) \Big( r_t + \gamma \sum_{s_{t+1} \in \mathcal{S}} \sum_{\mathbf{z_{t+1}} \in \mathcal{Z}^N} \mathcal{T}(s_{t+1} | s_t, \mathbf{a_t}) \, \Omega(\mathbf{z_{t+1}} | \mathbf{a_t}, s_{t+1}) \, Q^*(\boldsymbol{\tau_{t+1}}, \boldsymbol{\pi}^*(\boldsymbol{\tau_{t+1}})) \Big) \quad (3)$$

where $\boldsymbol{\tau_{t+1}} = \langle \boldsymbol{\tau_t}, \mathbf{a_t}, \mathbf{z_{t+1}} \rangle$.

An optimal joint policy $\boldsymbol{\pi}^*$ for decentralized execution maximizes the expectation of $Q^*$ for all joint histories $\boldsymbol{\tau_t}$ (Emery-Montemerlo et al., 2004; Oliehoek et al., 2008):

$$\boldsymbol{\pi}^* = \operatorname*{argmax}_{\boldsymbol{\pi}} \sum_{t=0}^{T-1} \sum_{\boldsymbol{\tau_t} \in (\mathcal{Z}^N \times \mathcal{A})^t} \mathcal{C}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t}) \, \mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0) \, Q^*(\boldsymbol{\tau_t}, \boldsymbol{\pi}(\boldsymbol{\tau_t})) \quad (4)$$

where indicator $\mathcal{C}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t})$ filters out joint histories $\boldsymbol{\tau_t}$ that are inconsistent with $\boldsymbol{\pi}$, and probability $\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0)$ represents the recurrence of all agents considering agent-wise stochastic partial observability w.r.t. the decentralization of $\boldsymbol{\pi}$ and $\boldsymbol{\tau_t}$ (Oliehoek et al., 2008):

$$\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0) = \mathbf{P}(\mathbf{z_0} | b_0) \prod_{c=1}^{t} \mathbf{P}(\mathbf{z_c} | \boldsymbol{\tau_{c-1}}, \boldsymbol{\pi}) = \mathbf{P}(\mathbf{z_0} | b_0) \prod_{c=1}^{t} \sum_{s_c \in \mathcal{S}} \sum_{s_{c-1} \in \mathcal{S}} \mathcal{T}(s_c | s_{c-1}, \boldsymbol{\pi}(\boldsymbol{\tau_{c-1}})) \, \Omega(\mathbf{z_c} | \boldsymbol{\pi}(\boldsymbol{\tau_{c-1}}), s_c) \quad (5)$$

Since all agents act according to their local history 𝜏 𝑡 , 𝑖 without access to the complete joint history 𝝉 𝐭 , recurrence 𝐏 𝝅 ⁢ ( 𝝉 𝐭 | 𝑏 0 ) depends on more accurate closed-loop information than just true states 𝑠 𝑡 , i.e., all previous observations, actions, and probabilities according to 𝑏 0 , 𝒯 , and Ω .
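For intuition, Eq. 5 can be unrolled by forward filtering over state sequences. The sketch below assumes a fixed observation/action sequence consistent with a deterministic joint policy and uses a Dec-Tiger-style toy model (see Section 4.1); all encodings are illustrative assumptions:

```python
def history_probability(joint_obs_seq, joint_action_seq, b0, T, Omega, states):
    """Forward computation of Eq. 5: P_pi(tau_t | b0) for a fixed
    joint observation/action sequence."""
    # P(z_0 | b_0): probability of the initial joint observation.
    p = sum(Omega(joint_obs_seq[0], None, s) * b0[s] for s in states)
    # Filtered state distribution conditioned on the history so far.
    bel = {s: Omega(joint_obs_seq[0], None, s) * b0[s] for s in states}
    total = sum(bel.values())
    bel = {s: v / total for s, v in bel.items()}
    for c in range(1, len(joint_obs_seq)):
        a = joint_action_seq[c - 1]          # a_{c-1} = pi(tau_{c-1})
        z = joint_obs_seq[c]
        pred = {s2: sum(T(s2, s1, a) * bel[s1] for s1 in states) for s2 in states}
        step = sum(Omega(z, a, s2) * pred[s2] for s2 in states)  # P(z_c | tau_{c-1}, pi)
        p *= step
        bel = {s2: Omega(z, a, s2) * pred[s2] for s2 in states}
        total = sum(bel.values())
        bel = {s: v / total for s, v in bel.items()}
    return p

# Dec-Tiger-style toy model: static state, per-agent observations 85% accurate.
states = ["sL", "sR"]
b0 = {"sL": 0.5, "sR": 0.5}
T = lambda s2, s1, a: 1.0 if s2 == s1 else 0.0  # tiger does not move when listening
def Omega(z, a, s):
    if z is None:  # dummy initial observation with probability 1
        return 1.0
    acc = lambda zi: 0.85 if zi[1] == s[1] else 0.15
    return acc(z[0]) * acc(z[1])

# Both agents listen; agent 1 hears zL, agent 2 hears zR.
p = history_probability([None, ("zL", "zR")], [("li", "li")], b0, T, Omega, states)
# p = 0.5*0.85*0.15 + 0.5*0.15*0.85 = 0.15 * 0.85 (cf. Section 4.2)
```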

$Q^*_{MDP}$ is proven to represent an upper bound of $Q^*$ (Oliehoek et al., 2008). Thus, naively deriving local policies $\pi_i$ from $Q^*_{MDP}$ instead of $Q^*$ can result in overly optimistic behavior, as we show in Sections 4.1 and 6.

### 2.3 Multi-Agent Reinforcement Learning

Finding an optimal joint policy $\boldsymbol{\pi}^*$ via exhaustive computation of $Q^*$ according to Eq. 3-5 is intractable in practice (Nair et al., 2003; Szer et al., 2005). MARL offers a scalable way to learn $Q^*$ and $\boldsymbol{\pi}^*$ via function approximation, e.g., using CTDE, where training takes place in a laboratory or a simulator with access to global information (Lowe et al., 2017; Foerster et al., 2018). We focus on value-based MARL to learn a centralized value function $Q_{tot} \approx Q^*$, which can be factorized into local utility functions $\langle Q_i \rangle_{i \in \mathcal{D}}$ for decentralized decision making via $\pi_i(\tau_{t,i}) = \operatorname*{argmax}_{a_{t,i}} Q_i(\tau_{t,i}, a_{t,i})$. For that, a factorization operator $\Psi$ is used (Phan et al., 2021):

$$Q_{tot}(\boldsymbol{\tau_t}, \mathbf{a_t}) = \Psi(Q_1(\tau_{t,1}, a_{t,1}), \dots, Q_N(\tau_{t,N}, a_{t,N})) \quad (6)$$

In practice, $\Psi$ is realized with deep neural networks such that $\langle Q_i \rangle_{i \in \mathcal{D}}$ can be learned end-to-end via backpropagation by minimizing the mean squared temporal difference (TD) error (Rashid et al., 2018; Sunehag et al., 2018). A factorization operator $\Psi$ is decentralizable when satisfying the IGM (Individual-Global-Max) condition such that (Son et al., 2019):

$$\operatorname*{argmax}_{\mathbf{a_t}} Q_{tot}(\boldsymbol{\tau_t}, \mathbf{a_t}) = \big\langle \operatorname*{argmax}_{a_{t,1}} Q_1(\tau_{t,1}, a_{t,1}), \dots, \operatorname*{argmax}_{a_{t,N}} Q_N(\tau_{t,N}, a_{t,N}) \big\rangle \quad (7)$$

There exists a variety of factorization operators $\Psi$ which satisfy Eq. 7, using monotonicity like QMIX (Rashid et al., 2018), nonlinear transformation like QPLEX (Wang et al., 2021), or loss weighting like CW- and OW-QMIX (Rashid et al., 2020). Most approaches use state-based CTDE to learn $Q^*_{MDP}$ according to Eq. 1 instead of $Q^*$ (Eq. 3-5).
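As a concrete instance, a simple additive operator $\Psi$ (in the spirit of value decomposition networks) satisfies Eq. 7: the joint greedy action coincides with the tuple of individually greedy actions. A minimal sketch with hypothetical utility tables:

```python
import itertools

def psi_sum(local_qs):
    """Additive factorization operator: Q_tot = Σ_i Q_i (satisfies IGM, Eq. 7)."""
    return sum(local_qs)

# Hypothetical local utilities Q_i(tau_i, a_i) for N=2 agents, 2 actions each.
Q1 = {0: 1.0, 1: 3.0}
Q2 = {0: 2.0, 1: 0.5}

# Greedy joint action from Q_tot ...
joint_actions = list(itertools.product(Q1, Q2))
best_joint = max(joint_actions, key=lambda a: psi_sum([Q1[a[0]], Q2[a[1]]]))
# ... equals the tuple of individually greedy actions (the IGM condition).
best_local = (max(Q1, key=Q1.get), max(Q2, key=Q2.get))
```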

### 2.4 Recurrent Reinforcement Learning

In partially observable settings, the policy $\pi_i$ of agent $i$ conditions on the history $\tau_{t,i}$ of past observations and actions (Kaelbling et al., 1998; Oliehoek & Amato, 2016). In practice, recurrent neural networks (RNNs) like LSTMs or GRUs are used to learn a compact representation $h_{t,i}$ of $\tau_{t,i}$ and $\pi_i$, known as the hidden state or memory representation, which implicitly encodes the individual recurrence of agent $i$, i.e., the distribution $P_i^{\pi_i}$ over $\tau_{t,i}$ (Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Hu & Foerster, 2019):

$$P_i^{\pi_i}(\tau_{t,i} | b_0) = P_i(z_{0,i} | b_0) \prod_{c=1}^{t} P_i(z_{c,i} | \tau_{c-1,i}, \pi_i) \quad (8)$$

RNNs are commonly used for partially observable problems and have been empirically shown to be more effective than using raw observations 𝑧 𝑡 , 𝑖 or histories 𝜏 𝑡 , 𝑖 (Hausknecht & Stone, 2015; Samvelyan et al., 2019; Vinyals et al., 2019).
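The role of $h_{t,i}$ as a learned, fixed-size summary of $\tau_{t,i}$ can be illustrated with a minimal Elman-style recurrent update; practical implementations use trained LSTM or GRU cells, so the random weights and dimensions below are purely illustrative:

```python
import math
import random

def rnn_step(h, x, W_h, W_x, b):
    """Minimal Elman-style update h' = tanh(W_h h + W_x x + b); a stand-in
    for the GRU/LSTM cells used in practice."""
    d = len(h)
    return [math.tanh(sum(W_h[j][k] * h[k] for k in range(d))
                      + sum(W_x[j][k] * x[k] for k in range(len(x)))
                      + b[j])
            for j in range(d)]

def encode_history(history, d=4, seed=0):
    """Fold a local history tau_{t,i} of [z; a] vectors into a fixed-size h_{t,i}."""
    rng = random.Random(seed)
    x_dim = len(history[0])
    W_h = [[rng.uniform(-0.5, 0.5) for _ in range(d)] for _ in range(d)]
    W_x = [[rng.uniform(-0.5, 0.5) for _ in range(x_dim)] for _ in range(d)]
    b = [0.0] * d
    h = [0.0] * d
    for x in history:
        h = rnn_step(h, x, W_h, W_x, b)
    return h

# The encoding is order-sensitive: different histories yield different memories.
h_a = encode_history([[1.0, 0.0], [0.0, 1.0]])
h_b = encode_history([[0.0, 1.0], [1.0, 0.0]])
```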

## 3 Related Work

Multi-Agent Reinforcement Learning

In recent years, MARL has achieved remarkable progress in challenging domains (Gupta et al., 2017; Vinyals et al., 2019). State-of-the-art MARL is based on CTDE to learn a centralized value function 𝑄 𝑡𝑜𝑡 for actor-critic learning (Lowe et al., 2017; Foerster et al., 2018; Yu et al., 2022) or factorization (Rashid et al., 2018, 2020; Wang et al., 2021). However, the majority of works assumes a simplified Dec-POMDP setting, where Ω is deterministic, and uses true states to approximate 𝑄 𝑀𝐷𝑃 * according to Eq. 1 instead of 𝑄 * (Eq. 3-5). Thus, state-based CTDE is possibly less effective in more general Dec-POMDP settings. Our approach addresses stochastic partial observability with a learned representation of multi-agent recurrence 𝐏 𝝅 ⁢ ( 𝝉 𝐭 | 𝑏 0 ) according to Eq. 5 instead of 𝑠 𝑡 .

Weaknesses of State-Based CTDE

Recent works investigated potential weaknesses of state-based CTDE for multi-agent actor-critic methods regarding bias and variance (Lyu et al., 2021, 2022). The experimental results show that state-based CTDE can surprisingly fail in very simple Dec-POMDP benchmarks that exhibit more stochasticity than SMAC. While these studies can be considered an important step towards general Dec-POMDPs, there is neither an approach which adequately addresses stochastic partial observability nor a benchmark to systematically evaluate such an approach yet. In this work, we focus on value-based MARL, where learning an accurate value function is important for meaningful factorization, and propose an attention-based recurrence approach to approximate value functions under stochastic partial observability. We also introduce a modified SMAC benchmark, which enables systematic evaluation under various stochasticity configurations.

Attention-Based CTDE

Attention has been used in CTDE to process information of potentially variable length 𝑁 , where joint observations 𝐳 𝐭 , joint actions 𝐚 𝐭 , or local utilities ⟨ 𝑄 𝑖 ⟩ 𝑖 ∈ 𝒟 are weighted and aggregated to provide a meaningful representation for value function approximation (Iqbal & Sha, 2019; Wang et al., 2021; Iqbal et al., 2021; Wen et al., 2022; Khan et al., 2022). Most works focus on Markov games without observation stochasticity, which are special cases of the Dec-POMDP setting. In this work, we focus on stochastic partial observability and apply self-attention to the memory representations ℎ 𝑡 , 𝑖 of all agents’ RNNs instead of the raw observations 𝑧 𝑡 , 𝑖 to approximate 𝑄 * for general Dec-POMDPs according to Eq. 3-5.

Figure 1: Illustration of the AERIAL setup. Left: Recurrent agent network structure with memory representations $h_{t-1,i}$ and $h_{t,i}$. Right: Value function factorization via factorization operator $\Psi$ using the joint memory representation $\mathbf{h_t} = \langle h_{t,i} \rangle_{i \in \mathcal{D}}$ of all agents' RNNs instead of true states $s_t$. All memory representations $h_{t,i}$ are detached from the computation graph to avoid additional differentiation (indicated by the dashed gray arrows) and passed through a simplified transformer before being used by $\Psi$ for value function factorization.

## 4 AERIAL

### 4.1 Limitation of State-Based CTDE

Most state-of-the-art works assume a simplified Dec-POMDP setting, where Ω is deterministic, and approximate 𝑄 𝑀𝐷𝑃 * according to Eq. 1 instead of 𝑄 * (Eq. 3-5).

If there are only deterministic observations and initial states $s_0$ such that $b_0(s_0) = 1$ and $b_0(s') = 0$ for $s' \neq s_0$, then multi-agent recurrence $\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0)$ as defined in Eq. 5 would only depend on the state transition probabilities $\mathcal{T}(s_{t+1} | s_t, \mathbf{a_t})$, which are purely state-based and ignore the decentralization of agents and observations (Oliehoek et al., 2008). In such scenarios, stochastic partial observability is very limited, especially if all $\pi_i$ are deterministic. We hypothesize that this is one reason for the empirical success of state-based CTDE in original SMAC, whose scenarios seemingly have these simplifying properties (Ellis et al., 2022).

In the following, we regard a small example, where state-based CTDE can fail at finding an optimal joint policy 𝝅 * .

Example

Dec-Tiger is a traditional and simple Dec-POMDP benchmark with $N = 2$ agents facing two doors (Nair et al., 2003). A tiger is randomly placed behind the left ($s_L$) or right door ($s_R$), representing the true state. Both agents are able to listen ($li$) or to open the left ($o_L$) or right door ($o_R$). The listening action $li$ produces a noisy observation of either hearing the tiger on the left ($z_L$) or on the right ($z_R$), which correctly indicates the tiger's position with an 85% chance, at a cost of -1 per listening agent. If both agents open the same door, the episode terminates with a reward of -50 if they open the tiger door and +20 otherwise. If both agents open different doors, the episode ends with a reward of -100. If only one agent opens a door while the other agent is listening, the episode terminates with -101 if the tiger door is opened and +9 otherwise.
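The reward structure above fits in a few lines; the action and state encodings below are chosen for illustration only:

```python
def dec_tiger_reward(tiger, a1, a2):
    """Joint reward of Dec-Tiger as described above; tiger in {"L", "R"},
    actions in {"li", "oL", "oR"}."""
    if a1 == "li" and a2 == "li":
        return -2                       # cost of -1 per listening agent
    if a1 == a2:                        # both open the same door
        return -50 if a1[1] == tiger else 20
    if "li" in (a1, a2):                # one listens, the other opens a door
        door = a2 if a1 == "li" else a1
        return -101 if door[1] == tiger else 9
    return -100                         # both open different doors
```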

Given a horizon of $T = 2$, the tiger being behind the right door ($s_R$), and both agents having listened in the first step, where agent 1 heard $z_L$ and agent 2 heard $z_R$: Assuming that both agents learned to perform the same actions, e.g., due to CTDE and parameter sharing (Tan, 1993; Gupta et al., 2017), $Q^*_{MDP}$ and $Q^*$ would estimate the following values:

| Joint action | $Q^*_{MDP}(s_R, \cdot)$ | $Q^*(\boldsymbol{\tau_t}, \cdot)$ |
|---|---|---|
| $\langle li, li \rangle$ | -2 | -2 |
| $\langle o_L, o_L \rangle$ | 20 | -15 |
| $\langle o_R, o_R \rangle$ | -50 | -15 |

Any policy $\boldsymbol{\pi}^*_{MDP}$ or decentralizable joint policy $\boldsymbol{\pi}$ w.r.t. IGM (Eq. 7) that maximizes $Q^*_{MDP}$ according to Eq. 2 would optimistically recommend $\langle o_L, o_L \rangle$ based on the true state $s_R$, regardless of what the agents observed. However, any joint policy $\boldsymbol{\pi}^*$ that maximizes the expectation of $Q^*$ according to Eq. 4 would consider agent-wise stochastic partial observability and recommend $\langle li, li \rangle$, which corresponds to the true optimal decision for $T = 2$ (Szer et al., 2005).
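The $Q^*$ entries for the opening actions follow from the posterior belief: after the contradictory observations, the likelihoods of $s_L$ and $s_R$ cancel, leaving a uniform belief, so opening either door yields $0.5 \cdot 20 + 0.5 \cdot (-50) = -15$ in expectation. A quick numerical check (encodings are illustrative):

```python
# Posterior over the tiger position after agent 1 heard zL and agent 2 heard zR:
# the likelihoods cancel (0.85*0.15 vs. 0.15*0.85), so the belief stays uniform.
prior = {"L": 0.5, "R": 0.5}
like = {"L": 0.85 * 0.15, "R": 0.15 * 0.85}
norm = sum(prior[s] * like[s] for s in prior)
belief = {s: prior[s] * like[s] / norm for s in prior}

# Expected terminal reward of jointly opening one door under this belief:
q_open_L = belief["R"] * 20 + belief["L"] * (-50)   # tiger behind L -> -50
q_open_R = belief["L"] * 20 + belief["R"] * (-50)
# Both equal -15, matching Q*(tau_t, <oL,oL>) = Q*(tau_t, <oR,oR>) = -15,
# while Q*_MDP conditions on the true state s_R and reports +20 for <oL,oL>.
```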

### 4.2 Attention-Based Embeddings of Recurrence

Preliminaries

We now introduce Attention-based Embeddings of Recurrence In multi-Agent Learning (AERIAL) to approximate optimal Dec-POMDP value functions 𝑄 * according to Eq. 3-5. Our setup uses a factorization operator Ψ like QMIX or QPLEX according to Eq. 6-7. All agents process their local histories 𝜏 𝑡 , 𝑖 via RNNs as motivated in Section 2.4 and schematically shown in Fig. 1 (left).

Unlike 𝑄 𝑀𝐷𝑃 * , the true optimal Dec-POMDP value function 𝑄 * considers more accurate closed-loop information about decentralized agent decisions through multi-agent recurrence 𝐏 𝝅 ⁢ ( 𝝉 𝐭 | 𝑏 0 ) according to Eq. 5. Simply replacing 𝑠 𝑡 with 𝝉 𝐭 as suggested in (Lyu et al., 2022) is not sufficient because the resulting value function would assume a centralized controller with access to the complete joint history 𝝉 𝐭 , in contrast to decentralized agents 𝑖 which can only access their respective local history 𝜏 𝑡 , 𝑖 (Oliehoek et al., 2008).

Exploiting Multi-Agent Recurrence

At first, we propose to naively exploit all individual recurrences by simply replacing the true state $s_t$ in CTDE with the joint memory representation $\mathbf{h_t} = \langle h_{t,i} \rangle_{i \in \mathcal{D}}$ of all agents' RNNs. Each memory representation $h_{t,i}$ implicitly encodes the individual recurrence $P_i^{\pi_i}(\tau_{t,i} | b_0)$ of agent $i$ according to Eq. 8. Therefore, $\mathbf{h_t}$ provides more accurate closed-loop information about decentralized agent decisions than $s_t$.

This approach, called AERIAL (no attention), can already be considered a sufficient solution if all individual recurrences $P_i^{\pi_i}(\tau_{t,i} | b_0)$ are statistically independent such that $\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0) = \prod_{i=1}^{N} P_i^{\pi_i}(\tau_{t,i} | b_0)$.

Attention-Based Recurrence

While AERIAL (no attention) offers a simple way to address agent-wise stochastic partial observability, the independence assumption of all individual recurrences 𝑃 𝑖 𝜋 𝑖 ⁢ ( 𝜏 𝑡 , 𝑖 | 𝑏 0 ) does not hold in practice due to correlations in observations and actions (Bernstein et al., 2005; Amato et al., 2007).

Given the Dec-Tiger example above, the individual recurrences according to Eq. 8 are $P_1^{\pi_1}(\tau_{t,1} | b_0) = P_2^{\pi_2}(\tau_{t,2} | b_0) = 0.5$ (Kaelbling et al., 1998). However, the actual multi-agent recurrence according to Eq. 5 is $\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0) = 0.15 \cdot 0.85 \neq P_1^{\pi_1}(\tau_{t,1} | b_0) \cdot P_2^{\pi_2}(\tau_{t,2} | b_0)$, indicating that individual recurrences are not statistically independent in general (Oliehoek & Amato, 2016).

Therefore, we process 𝐡 𝐭 by a simplified transformer along the agent axis to automatically consider the latent dependencies of all memory representations ℎ 𝑡 , 𝑖 ∈ 𝐡 𝐭 through self-attention. The resulting approach, called AERIAL, is depicted in Fig. 1 and Algorithm 1 in Appendix C.

Our transformer does not use positional encoding or masking, since we assume no particular ordering among agents. The joint memory representation $\mathbf{h_t}$ is passed through a single multi-head attention layer with the output of each attention head $c$ being defined by (Vaswani et al., 2017):

$$att_c(\mathbf{h_t}) = softmax\Big(\frac{W_q^c(\mathbf{h_t}) \, W_k^c(\mathbf{h_t})^\top}{\sqrt{d_{att}}}\Big) W_v^c(\mathbf{h_t}) \quad (9)$$

where $W_q^c$, $W_k^c$, and $W_v^c$ are multi-layer perceptrons (MLPs) with an output dimensionality of $d_{att}$. All outputs $att_c(\mathbf{h_t})$ are summed and passed through a series of MLP layers before being fed into the factorization operator $\Psi$, effectively replacing the true state $s_t$ with a learned representation of multi-agent recurrence $\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0)$ according to Eq. 5.
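A single attention head over the agent axis (Eq. 9) can be sketched with plain matrix operations; here linear projections stand in for the MLPs $W_q^c$, $W_k^c$, and $W_v^c$, and all sizes are toy assumptions:

```python
import numpy as np

def self_attention(H, Wq, Wk, Wv):
    """One attention head over the agent axis (Eq. 9): H is (N, d_h)."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    d_att = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_att)                  # (N, N) agent-to-agent scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (N, d_att)

rng = np.random.default_rng(0)
N, d_h, d_att = 3, 8, 4                                # 3 agents, toy sizes
H = rng.normal(size=(N, d_h))                          # joint memory representation h_t
Wq, Wk, Wv = (rng.normal(size=(d_h, d_att)) for _ in range(3))
out = self_attention(H, Wq, Wk, Wv)
```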

To avoid additional differentiation of $\mathbf{h_t}$ through $\Psi$ or Eq. 9, we detach $\mathbf{h_t}$ from the computation graph. Thus, we make sure that $\mathbf{h_t}$ is only learned through the agent RNNs.

### 4.3 Discussion of AERIAL

The strong focus on state-based CTDE in the last few years has led to the development of increasingly complex algorithms that largely neglect stochastic partial observability in general Dec-POMDPs (Lyu et al., 2021, 2022). In contrast, AERIAL offers a simple way to adjust factorization approaches by replacing the true state 𝑠 𝑡 with a learned representation of multi-agent recurrence 𝐏 𝝅 ⁢ ( 𝝉 𝐭 | 𝑏 0 ) to consider more accurate closed-loop information about decentralized agent decisions. The rest of the training scheme remains unchanged, which eases adjustment of existing approaches.

Since the naive independence assumption of individual memory representations ℎ 𝑡 , 𝑖 does not hold in practice – despite decentralization – we use a simplified transformer to consider the latent dependencies of all ℎ 𝑡 , 𝑖 ∈ 𝐡 𝐭 along the agent axis to learn an adequate representation of multi-agent recurrence 𝐏 𝝅 ⁢ ( 𝝉 𝐭 | 𝑏 0 ) according to Eq. 5.

AERIAL does not depend on true states and therefore requires less overall information than state-based CTDE, since we assume $\mathbf{h_t}$ to be available in all CTDE setups anyway (Foerster et al., 2018; Rashid et al., 2020). Note that AERIAL does not necessarily require RNNs to obtain $\mathbf{h_t}$, as hidden layers of MLPs or decision transformers can be used to approximate $\mathbf{h_t}$ as well (Son et al., 2019; Chen et al., 2021).

Figure 2: Left: Screenshots of two SMAC maps. Middle: PCA visualization of the joint observations in original SMAC within the first 5 steps of 1,000 episodes using a random policy with $K = 0$ initial random steps. Right: Analogous PCA visualization for MessySMAC with $K = 10$ initial random steps. For visual comparability, the observations are deterministic here.

## 5 MessySMAC

### 5.1 Limitation of SMAC as a Benchmark

StarCraft Multi-Agent Challenge (SMAC) provides a rich set of micromanagement tasks, where a team of learning agents has to fight against an enemy team, which acts according to handcrafted heuristics of the built-in StarCraft AI (Samvelyan et al., 2019). SMAC currently represents the de facto standard for MARL evaluation (Rashid et al., 2018, 2020; Wang et al., 2021). However, SMAC scenarios exhibit very limited stochastic partial observability due to deterministic observations and low variance in initial states, therefore only representing simplified special cases rather than general Dec-POMDP challenges (Lyu et al., 2022; Ellis et al., 2022). To assess the practicability of MARL, we need benchmarks with sufficient stochasticity, as the real world is generally messy and only observable through noisy sensors.

### 5.2 SMAC with Stochastic Partial Observability

MessySMAC is a modified version of SMAC with observation stochasticity w.r.t. $\Omega$, where all measured values of observation $z_{t,i}$ are negated with a probability of $\phi \in [0, 1)$, and initialization stochasticity w.r.t. $b_0$, where $K$ random steps are initially performed before officially starting an episode. During the initial phase, the agents can already be ambushed by the built-in AI, which further increases difficulty compared to the original SMAC maps if $K > 0$. MessySMAC represents a more general Dec-POMDP challenge which enables systematic evaluation under various stochasticity configurations according to $\phi$ and $K$.
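The observation-stochasticity mechanism can be sketched as a thin wrapper; this illustrates the described negation scheme and is not the official MessySMAC code:

```python
import random

def messy_observation(obs, phi, rng):
    """MessySMAC-style observation noise: each measured value is negated
    independently with probability phi (a sketch of the described mechanism)."""
    return [-v if rng.random() < phi else v for v in obs]

rng = random.Random(42)
clean = [0.3, -0.7, 1.0, 0.0]
noisy = messy_observation(clean, phi=0.15, rng=rng)
# With phi = 0, the observation is unchanged (original SMAC behavior).
identity = messy_observation(clean, phi=0.0, rng=rng)
```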

Fig. 2 shows the PCA visualization of joint observations in two maps of original SMAC ($K = 0$) and MessySMAC ($K = 10$) within the first 5 steps of 1,000 episodes using a random policy. In original SMAC, the initial observations of $s_0$ (dark purple) are very similar and can easily be distinguished from subsequent observations by merely regarding time steps. Therefore, open-loop control might already be sufficient to solve these scenarios satisfactorily, as hypothesized in (Ellis et al., 2022). However, distinguishing observations by time step is trickier in MessySMAC due to significantly higher entropy in $b_0$, indicating higher initialization stochasticity and a stronger requirement for closed-loop control, where agents need to explicitly consider their actual observations to make proper decisions.

### 5.3 Comparison with SMACv2

SMACv2 is an update to the original SMAC benchmark featuring initialization stochasticity w.r.t. positions and unit types, as well as observation restrictions (Ellis et al., 2022). SMACv2 addresses similar issues as MessySMAC, but MessySMAC additionally features observation stochasticity w.r.t. $\Omega$ according to the general Dec-POMDP formulation in Section 2.1. Unlike MessySMAC, SMACv2 does not support the original SMAC maps, thus not enabling direct comparability w.r.t. stochasticity configurations.

Therefore, SMACv2 can be viewed as an entirely new StarCraft II benchmark, while MessySMAC represents a SMAC extension, enabling systematic evaluation under various stochasticity configurations for the original SMAC maps.

## 6 Experiments

We use the state-based CTDE implementations of QPLEX, CW-QMIX, OW-QMIX, and QMIX from (Rashid et al., 2020) as state-of-the-art baselines with their default hyperparameters. We also integrate MAPPO from (Yu et al., 2022). For all experiments, we report the average performance and the 95% confidence interval over at least 20 runs.

AERIAL is implemented using QMIX as factorization operator $\Psi$ according to Fig. 1. We also experimented with QPLEX as an alternative, with no significant difference in performance. Thus, we stick with QMIX for efficiency due to fewer trainable parameters. The transformer of AERIAL has 4 heads, with $W_q^c$, $W_k^c$, and $W_v^c$ each having one hidden layer of $d_{att} = 64$ units with ReLU activation. The subsequent MLP layers have 64 units with ReLU activation.

For the ablation study, we implement AERIAL (no attention), which trains $\Psi$ directly on $\mathbf{h_t}$ without self-attention as described in Section 4.2, and AERIAL (raw history), which trains $\Psi$ on the raw joint history $\boldsymbol{\tau_t}$ concatenated with the true state $s_t$, as originally proposed for actor-critic methods (Lyu et al., 2022).

### 6.1 Dec-Tiger

Setting

We use the Dec-Tiger problem described in Section 4.1 and (Nair et al., 2003) as a simple proof-of-concept domain with $T = 4$ and $\gamma = 1$. We also provide the optimal value of 4.8 computed with MAA* (Szer et al., 2005).

Results

The results are shown in Fig. 3. AERIAL comes closest to the optimum, achieving an average return of about zero. AERIAL (no attention) performs second best with an average return of about -8, while all other approaches achieve an average return of about -15.

Figure 3: Average learning progress w.r.t. the return of AERIAL variants and state-of-the-art baselines in Dec-Tiger over 50 runs. Shaded areas show the 95% confidence interval.

Discussion

The results confirm the example from Section 4.1 and the findings of (Oliehoek et al., 2008; Lyu et al., 2022). All state-based CTDE approaches and AERIAL (raw history) converge to a one-step policy, where both agents optimistically open the same door regardless of any agent observation. AERIAL (no attention) converges to a local optimum most of the time, where both agents only listen for all $T = 4$ time steps. AERIAL performs best due to considering the latent dependencies of all memory representations $h_{t,i} \in \mathbf{h_t}$ via self-attention to learn an adequate representation of multi-agent recurrence $\mathbf{P}_{\boldsymbol{\pi}}(\boldsymbol{\tau_t} | b_0)$ according to Eq. 5.

### 6.2 Original SMAC

Setting

We evaluate AERIAL in original SMAC using the maps 3s5z and 10m_vs_11m, which are classified as easy, as well as the hard maps 2c_vs_64zg, 3s_vs_5z, and 5m_vs_6m, and the super hard map 3s5z_vs_3s6z (Samvelyan et al., 2019).

Results

The final average test win rates after 2 million steps of training are shown in Table 1. AERIAL is competitive to QPLEX and QMIX in the easy maps, while performing best in 3s_vs_5z and 5m_vs_6m. MAPPO performs best in 2c_vs_64zg and 3s5z_vs_3s6z with AERIAL being second best in the super hard map 3s5z_vs_3s6z.

Table 1: Average win rate of AERIAL and state-of-the-art baselines after 2 million time steps of training across 400 final test episodes for the original SMAC maps with the 95% confidence interval. The best results per map are highlighted in boldface.

| Map | AERIAL | QPLEX | CW-QMIX | OW-QMIX | QMIX | MAPPO (%) |
|---|---|---|---|---|---|---|
| 3s5z | **0.95 ± 0.01** | 0.94 ± 0.01 | 0.87 ± 0.02 | 0.91 ± 0.02 | **0.95 ± 0.01** | 68.7 ± 0.94 |
| 10m_vs_11m | **0.97 ± 0.01** | 0.90 ± 0.02 | 0.91 ± 0.02 | 0.96 ± 0.01 | 0.90 ± 0.02 | 77.3 ± 0.66 |
| 2c_vs_64zg | 0.52 ± 0.11 | 0.29 ± 0.1 | 0.38 ± 0.12 | 0.55 ± 0.13 | 0.59 ± 0.11 | **90.2 ± 0.24** |
| 3s_vs_5z | **0.96 ± 0.02** | 0.74 ± 0.11 | 0.18 ± 0.06 | 0.08 ± 0.04 | 0.81 ± 0.05 | 73.8 ± 0.44 |
| 5m_vs_6m | **0.77 ± 0.03** | 0.66 ± 0.04 | 0.41 ± 0.04 | 0.55 ± 0.06 | 0.67 ± 0.05 | 60.6 ± 1.13 |
| 3s5z_vs_3s6z | 0.18 ± 0.09 | 0.1 ± 0.03 | 0.0 ± 0.0 | 0.02 ± 0.01 | 0.02 ± 0.02 | **20.5 ± 2.91** |

Discussion

AERIAL is competitive to state-of-the-art baselines in original SMAC, indicating that replacing the true state 𝑠 𝑡 with the joint memory representation 𝐡 𝐭 does not notably harm performance. Despite outperforming most baselines in some maps, we do not claim significant outperformance here, since we regard most SMAC maps as widely solved by the community anyway (Ellis et al., 2022).

### 6.3 MessySMAC

Setting

We evaluate AERIAL in MessySMAC using the same maps as in Section 6.2. We set $\phi = 15\%$ and $K = 10$.

Results

The results are shown in Fig. 4. AERIAL performs best in all maps with AERIAL (no attention) being second best except in 2c_vs_64zg. In 3s5z_vs_3s6z, only AERIAL and AERIAL (no attention) progress notably. AERIAL (raw history) performs worst in all maps. MAPPO only progresses notably in 2c_vs_64zg.

Figure 4: Average learning progress w.r.t. the win rate of AERIAL variants and state-of-the-art baselines in MessySMAC for 2 million steps over 20 runs. Shaded areas show the 95% confidence interval. The legend at the top applies across all plots. Panels: (a) 3s5z, (b) 10m_vs_11m, (c) 2c_vs_64zg, (d) 3s_vs_5z, (e) 5m_vs_6m, (f) 3s5z_vs_3s6z.

Discussion

Similar to the Dec-Tiger experiment, the results confirm the benefit of exploiting more accurate closed-loop information in domains with stochastic partial observability. AERIAL consistently outperforms AERIAL (no attention), indicating that self-attention can correct for the naive independence assumption of all h_{t,i} ∈ h_t. MAPPO performs especially poorly in MessySMAC due to its misleading dependence on true states without any credit assignment, confirming the findings of Ellis et al. (2022).

6.4 Robustness against Stochastic Partial Observability

Setting

To evaluate the robustness of AERIAL and AERIAL (no attention) against various stochasticity configurations in MessySMAC, we manipulate Ω through the observation negation probability φ and b_0 through the number of initial random steps K as defined in Section 5.2. We compare the results with QMIX and QPLEX as the best performing state-of-the-art baselines in MessySMAC according to the results in Section 6.3. We present summarized plots, where the results are aggregated across all maps from Section 6.3. To prevent easy maps from dominating the average win rate, since all approaches achieve high values there, we normalize the values by the maximum win rate achieved in the respective map across all tested configurations of φ and K. Thus, we ensure an equal weighting regardless of the particular difficulty level. If not mentioned otherwise, we set φ = 15% and K = 10 as default parameters based on Section 6.3.
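A simplified sketch of this aggregation scheme: each raw win rate is divided by the best win rate achieved on the same map, and the normalized values are then averaged across maps with equal weight. The function names and the dictionary layout are illustrative, not the authors' evaluation code.

```python
def normalized_win_rates(win_rates):
    """Normalize each raw win rate by the maximum achieved on that map.
    win_rates: {map_name: {approach: raw_win_rate}} (illustrative layout)."""
    normalized = {}
    for game_map, per_approach in win_rates.items():
        best = max(per_approach.values())
        normalized[game_map] = {
            approach: (rate / best if best > 0 else 0.0)
            for approach, rate in per_approach.items()
        }
    return normalized


def cross_map_average(normalized, approach):
    # Equal weighting of all maps regardless of their difficulty level.
    values = [per_map[approach] for per_map in normalized.values()]
    return sum(values) / len(values)
```

Normalizing first ensures that an approach winning 95% vs. 90% on an easy map contributes about as much to the average as winning 20% vs. 10% on a hard one.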

Results

The results regarding observation stochasticity w.r.t. Ω and φ are shown in Fig. 5. Fig. 5(a) shows that the average win rates of all approaches decrease with increasing φ, with AERIAL consistently achieving the highest average win rate in all configurations. Fig. 5(b) shows that AERIAL performs best in most MessySMAC maps, especially when φ ≥ 15%. AERIAL (no attention) performs second best.

Figure 5: Evaluation of AERIAL, AERIAL (no attention), and the best MessySMAC baselines for different observation negation probabilities φ affecting observation stochasticity w.r.t. Ω (20 runs per configuration). (a) The average normalized test win rate across all 6 MessySMAC maps from Section 6.3. (b) The number of maps (out of 6) in which each approach performs best. The legend at the top applies across all plots.

The results regarding initialization stochasticity w.r.t. b_0 and K are shown in Fig. 6. Analogously to Fig. 5, Fig. 6(a) shows that the average (normalized) win rates of all approaches decrease with increasing K, with AERIAL consistently achieving the highest average win rate in all configurations. Fig. 6(b) shows that AERIAL performs best in most MessySMAC maps, especially when K ≥ 10. AERIAL (no attention) performs second best.

Figure 6: Evaluation of AERIAL, AERIAL (no attention), and the best MessySMAC baselines for different initial random steps K affecting initialization stochasticity w.r.t. b_0 (20 runs per configuration). (a) The average normalized test win rate across all 6 MessySMAC maps from Section 6.3. (b) The number of maps (out of 6) in which each approach performs best. The legend at the top applies across all plots.

Discussion

Our results systematically demonstrate the robustness of AERIAL and AERIAL (no attention) against various stochasticity configurations according to Ω and b_0. State-based CTDE is notably less effective in settings where observation and initialization stochasticity is high. As AERIAL consistently performs best in all maps when φ ≥ 15% or K ≥ 10, we conclude that providing an adequate learned representation of P_π(τ_t | b_0) according to Eq. 5, e.g., through h_t and self-attention, is more beneficial for CTDE than merely relying on true states when facing domains with high stochastic partial observability.

7Conclusion and Future Work

To tackle general multi-agent problems, which are messy and only observable through noisy sensors, we need adequate algorithms and benchmarks that sufficiently consider stochastic partial observability.

In this paper, we proposed AERIAL to approximate value functions under stochastic partial observability with a learned representation of multi-agent recurrence, considering more accurate closed-loop information about decentralized agent decisions than state-based CTDE.

We then introduced MessySMAC, a modified version of SMAC with stochastic observations and higher variance in initial states, to provide a more general and configurable Dec-POMDP benchmark regarding stochastic partial observability. We showed visually in Fig. 2 and experimentally in Section 6 that MessySMAC scenarios pose a greater challenge than their original SMAC counterparts due to observation and initialization stochasticity.

Compared to state-based CTDE, AERIAL offers a simple but effective approach to general Dec-POMDPs, being competitive in original SMAC and superior in Dec-Tiger and MessySMAC, which both exhibit observation and initialization stochasticity unlike original SMAC. Simply replacing the true state with memory representations can already improve performance in most scenarios, confirming the need for more accurate closed-loop information about decentralized agent decisions. Self-attention can correct for the naive independence assumption of agent-wise recurrence to further improve performance, especially when observation or initialization stochasticity is high.

We plan to further evaluate AERIAL in SMACv2 and mixed competitive-cooperative settings with multiple CTDE instances (Lowe et al., 2017; Phan et al., 2020).

Acknowledgements

This work was partially funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Institute for Cognitive Systems.

References

Amato, C., Bernstein, D. S., and Zilberstein, S. Optimizing Memory-Bounded Controllers for Decentralized POMDPs. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pp. 1–8, 2007.

Bernstein, D. S., Hansen, E. A., and Zilberstein, S. Bounded Policy Iteration for Decentralized POMDPs. In IJCAI, pp. 52–57, 2005.

Boutilier, C. Planning, Learning and Coordination in Multiagent Decision Processes. In Proceedings of the 6th Conference on Theoretical Aspects of Rationality and Knowledge, pp. 195–210. Morgan Kaufmann, 1996.

Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., Abbeel, P., Srinivas, A., and Mordatch, I. Decision Transformer: Reinforcement Learning via Sequence Modeling. In Advances in Neural Information Processing Systems, volume 34, pp. 15084–15097. Curran Associates, Inc., 2021.

Cho, K., van Merriënboer, B., Bahdanau, D., and Bengio, Y. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103–111, 2014.

Ellis, B., Moalla, S., Samvelyan, M., Sun, M., Mahajan, A., Foerster, J. N., and Whiteson, S. SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning. 2022.

Emery-Montemerlo, R., Gordon, G., Schneider, J., and Thrun, S. Approximate Solutions for Partially Observable Stochastic Games with Common Payoffs. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '04), pp. 136–143. IEEE Computer Society, 2004.

Foerster, J., Farquhar, G., Afouras, T., Nardelli, N., and Whiteson, S. Counterfactual Multi-Agent Policy Gradients. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), 2018.

Gupta, J. K., Egorov, M., and Kochenderfer, M. Cooperative Multi-Agent Control using Deep Reinforcement Learning. In Autonomous Agents and Multiagent Systems, pp. 66–83. Springer, 2017.

Hausknecht, M. and Stone, P. Deep Recurrent Q-Learning for Partially Observable MDPs. In 2015 AAAI Fall Symposium Series, 2015.

Hochreiter, S. and Schmidhuber, J. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.

Hu, H. and Foerster, J. N. Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning. In International Conference on Learning Representations, 2019.

Iqbal, S. and Sha, F. Actor-Attention-Critic for Multi-Agent Reinforcement Learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 2961–2970. PMLR, 2019.

Iqbal, S., De Witt, C. A. S., Peng, B., Boehmer, W., Whiteson, S., and Sha, F. Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 4596–4606. PMLR, 2021.

Kaelbling, L. P., Littman, M. L., and Cassandra, A. R. Planning and Acting in Partially Observable Stochastic Domains. Artificial Intelligence, 101(1–2):99–134, 1998.

Khan, M. J., Ahmed, S. H., and Sukthankar, G. Transformer-Based Value Function Decomposition for Cooperative Multi-Agent Reinforcement Learning in StarCraft. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 18(1):113–119, 2022.

Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Lyu, X., Xiao, Y., Daley, B., and Amato, C. Contrasting Centralized and Decentralized Critics in Multi-Agent Reinforcement Learning. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, pp. 844–852, 2021.

Lyu, X., Baisero, A., Xiao, Y., and Amato, C. A Deeper Understanding of State-Based Critics in Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9):9396–9404, 2022.

Nair, R., Tambe, M., Yokoo, M., Pynadath, D., and Marsella, S. Taming Decentralized POMDPs: Towards Efficient Policy Computation for Multiagent Settings. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI '03), pp. 705–711. Morgan Kaufmann, 2003.

Oliehoek, F. A. and Amato, C. A Concise Introduction to Decentralized POMDPs, volume 1. Springer, 2016.

Oliehoek, F. A., Spaan, M. T., and Vlassis, N. Optimal and Approximate Q-Value Functions for Decentralized POMDPs. Journal of Artificial Intelligence Research, 32:289–353, 2008.

Phan, T., Gabor, T., Sedlmeier, A., Ritz, F., Kempter, B., Klein, C., Sauer, H., Schmid, R., Wieghardt, J., Zeller, M., et al. Learning and Testing Resilience in Cooperative Multi-Agent Systems. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '20), pp. 1055–1063, 2020.

Phan, T., Ritz, F., Belzner, L., Altmann, P., Gabor, T., and Linnhoff-Popien, C. VAST: Value Function Factorization with Variable Agent Sub-Teams. In Advances in Neural Information Processing Systems, volume 34, pp. 24018–24032. Curran Associates, Inc., 2021.

Phan, T., Ritz, F., Nüßlein, J., Kölle, M., Gabor, T., and Linnhoff-Popien, C. Attention-Based Recurrency for Multi-Agent Reinforcement Learning under State Uncertainty. In Extended Abstracts of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS '23), pp. 2839–2841, 2023.

Rashid, T., Samvelyan, M., Schroeder, C., Farquhar, G., Foerster, J., and Whiteson, S. QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4295–4304. PMLR, 2018.

Rashid, T., Farquhar, G., Peng, B., and Whiteson, S. Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. In Advances in Neural Information Processing Systems, volume 33, pp. 10199–10210. Curran Associates, Inc., 2020.

Samvelyan, M., Rashid, T., Schroeder de Witt, C., Farquhar, G., Nardelli, N., Rudner, T. G., Hung, C.-M., Torr, P. H., Foerster, J., and Whiteson, S. The StarCraft Multi-Agent Challenge. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '19), pp. 2186–2188, 2019.

Son, K., Kim, D., Kang, W. J., Hostallero, D. E., and Yi, Y. QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5887–5896. PMLR, 2019.

Sunehag, P., Lever, G., Gruslys, A., Czarnecki, W. M., Zambaldi, V., Jaderberg, M., Lanctot, M., Sonnerat, N., Leibo, J. Z., Tuyls, K., and Graepel, T. Value-Decomposition Networks for Cooperative Multi-Agent Learning based on Team Reward. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '18), pp. 2085–2087, 2018.

Szer, D., Charpillet, F., and Zilberstein, S. MAA*: A Heuristic Search Algorithm for Solving Decentralized POMDPs. In UAI '05, pp. 576–583. AUAI Press, 2005.

Tan, M. Multi-Agent Reinforcement Learning: Independent versus Cooperative Agents. In Proceedings of the Tenth International Conference on Machine Learning (ICML '93), pp. 330–337. Morgan Kaufmann, 1993.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is All You Need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. Grandmaster Level in StarCraft II using Multi-Agent Reinforcement Learning. Nature, pp. 1–5, 2019.

Wang, J., Ren, Z., Liu, T., Yu, Y., and Zhang, C. QPLEX: Duplex Dueling Multi-Agent Q-Learning. In International Conference on Learning Representations, 2021.

Watkins, C. J. and Dayan, P. Q-Learning. Machine Learning, 8(3–4):279–292, 1992.

Wen, M., Kuba, J. G., Lin, R., Zhang, W., Wen, Y., Wang, J., and Yang, Y. Multi-Agent Reinforcement Learning is a Sequence Modeling Problem. arXiv preprint arXiv:2205.14953, 2022.

Whittlestone, J., Arulkumaran, K., and Crosby, M. The Societal Implications of Deep Reinforcement Learning. Journal of Artificial Intelligence Research, 70:1003–1030, 2021.

Yu, C., Velu, A., Vinitsky, E., Gao, J., Wang, Y., Bayen, A., and Wu, Y. The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games. In 36th Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.

Appendix A: Limitations and Societal Impacts

A.1 Limitations

AERIAL does not significantly outperform state-of-the-art baselines in easier domains without stochastic partial observability, as indicated by the original SMAC results in Table 1, implying that simplified Dec-POMDP settings might benefit from more specialized algorithms. The dependence on joint memory representations h_t = ⟨h_{t,i}⟩_{i∈D} might induce some bias w.r.t. agent behavior policies, which could limit performance in hard exploration domains and would therefore require additional mechanisms beyond the scope of this work. The full version of AERIAL requires additional compute due to the transformer component in Fig. 1, which can be compensated by using a more (parameter-)efficient value function factorization operator Ψ, e.g., QMIX instead of QPLEX.

A.2Potential Negative Societal Impacts

The goal of our work is to realize autonomous systems to solve complex tasks under stochastic partial observability as motivated in Section 1. We refer to (Whittlestone et al., 2021) for a general overview regarding societal implications of deep RL and completely focus on cooperative MARL settings in the following.

AERIAL is based on a centralized training regime to learn decentralized policies with a common objective. That objective might include bias of a central authority and could potentially harm opposing parties, e.g., via discrimination or misleading information. Since training is conducted in a laboratory or a simulation, the resulting system might exhibit unsafe or questionable behavior when being deployed in the real world due to poor generalization, e.g., leading to accidents or unfair decisions. The transformer component in Fig. 1 might require a significant amount of additional compute for tuning and training therefore increasing overall cost. The self-attention weights of Eq. 9 could be used to discriminate participating individuals in an unethical way, e.g., discarding less relevant groups of individuals according to the softmax output.

Similar to original SMAC, MessySMAC is based on team battles, indicating that any MARL algorithm mastering that challenge could be misused for real combat, e.g., in autonomous weapon systems to realize distributed and coordinated strategies. Since MessySMAC covers the aspect of stochastic partial observability, successfully evaluated algorithms could be potentially more effective and dangerous in real-world scenarios.

Appendix B: Dec-Tiger Example

Given the Dec-Tiger example from Section 4.1 with a horizon of T = 2, the tiger being behind the right door (s_R), and both agents having listened in the first step, where agent 1 heard z_L and agent 2 heard z_R: The final state-based values are defined by Q*_MDP(s_t, a_t) = R(s_t, a_t).

Due to both agents perceiving different observations, i.e., z_L and z_R respectively, the probability of being in state s_R is 50% according to the belief state, i.e., b(s_R | τ_t) = b(s_L | τ_t) = 1/2. Thus, the true optimal Dec-POMDP values for the final time step are defined by:

$$Q^{*}(\boldsymbol{\tau}_t, \mathbf{a}_t) = \sum_{s_t \in \mathcal{S}} b(s_t \mid \boldsymbol{\tau}_t)\,\mathcal{R}(s_t, \mathbf{a}_t) = \frac{1}{2}\left(Q^{*}_{\mathit{MDP}}(s_L, \mathbf{a}_t) + Q^{*}_{\mathit{MDP}}(s_R, \mathbf{a}_t)\right) \quad (10)$$

The values of Q*_MDP and Q* for the final time step t = 2 in the example are given in Table 2. Both agents can reduce the expected penalty by always performing the same action. Therefore, MARL is likely to converge to a joint policy that recommends the same actions for both agents, especially when synchronization techniques like parameter sharing are used (Tan, 1993; Gupta et al., 2017; Yu et al., 2022).
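The belief-weighted averaging in Eq. 10 can be checked with a few lines of code. The dictionary below holds the final-step rewards for a subset of joint actions from Table 2 (`li` = listen, `o_L`/`o_R` = open left/right door); the table layout itself is illustrative.

```python
# Final-step rewards R(s, a) for a subset of joint actions (Table 2).
Q_MDP = {
    ("li", "li"):   {"s_L": -2,   "s_R": -2},
    ("li", "o_L"):  {"s_L": -101, "s_R": 9},
    ("o_L", "o_L"): {"s_L": -50,  "s_R": 20},
    ("o_L", "o_R"): {"s_L": -100, "s_R": -100},
    ("o_R", "o_R"): {"s_L": 20,   "s_R": -50},
}


def q_star(joint_action, belief=0.5):
    """Belief-weighted Dec-POMDP value for the final time step (Eq. 10):
    Q*(tau_t, a_t) = b(s_L|tau_t) * R(s_L, a_t) + b(s_R|tau_t) * R(s_R, a_t)."""
    values = Q_MDP[joint_action]
    return belief * values["s_L"] + (1 - belief) * values["s_R"]
```

With the uniform belief, `q_star` reproduces the Q* column of Table 2, e.g. ½(−101 + 9) = −46 for ⟨li, o_L⟩ and ½(−50 + 20) = −15 for ⟨o_L, o_L⟩.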

Table 2: The values of Q*_MDP and Q* for the final time step t = 2 in the Dec-Tiger example from Section 4.1.

| a_t | Q*_MDP(s_L, a_t) | Q*_MDP(s_R, a_t) | Q*(τ_t, a_t) |
|---|---|---|---|
| ⟨li, li⟩ | **−2** | **−2** | **−2** |
| ⟨li, o_L⟩ | −101 | +9 | −46 |
| ⟨li, o_R⟩ | +9 | −101 | −46 |
| ⟨o_L, li⟩ | −101 | +9 | −46 |
| ⟨o_L, o_L⟩ | **−50** | **+20** | **−15** |
| ⟨o_L, o_R⟩ | −100 | −100 | −100 |
| ⟨o_R, li⟩ | +9 | −101 | −46 |
| ⟨o_R, o_L⟩ | −100 | −100 | −100 |
| ⟨o_R, o_R⟩ | **+20** | **−50** | **−15** |

Appendix C: Full Algorithm of AERIAL

The complete formulation of AERIAL is given in Algorithm 1. Note that AERIAL does not depend on true states s_t at all, since the experience samples e_t (Line 23) used for training do not record any states.

Algorithm 1: Attention-based Embeddings of Recurrence In multi-Agent Learning (AERIAL)

```
 1: Initialize parameters for ⟨Q_i⟩_{i∈D} and Ψ.
 2: for episode m ← 1, E do
 3:   Sample s_0, z_0, and τ_0 via b_0 and Ω
 4:   for time step t ← 0, T−1 do
 5:     for agent i ∈ D do
 6:       a_{t,i} ← π_i(τ_{t,i})              {Use argmax_{a_{t,i}∈A_i} Q_i(τ_{t,i}, a_{t,i})}
 7:       rand ∼ U(0, 1)                      {Sample from uniform distribution}
 8:       if rand ≤ ε then
 9:         Select random action a_{t,i} ∈ A_i  {Explore with ε-greedy}
10:       end if
11:     end for
12:     a_t ← ⟨a_{t,i}⟩_{i∈D}
13:     Execute joint action a_t
14:     s_{t+1} ∼ T(s_{t+1} | s_t, a_t)
15:     z_{t+1} ∼ Ω(z_{t+1} | a_t, s_{t+1})
16:     h_t ← ⟨h_{t,i}⟩_{i∈D}                 {Query memory representations of all agents}
17:     Detach h_t from computation graph     {Avoid additional differentiation through Ψ or Eq. 9}
18:     τ_{t+1} ← ⟨τ_t, a_t, z_{t+1}⟩         {Concatenate τ_t, a_t, and z_{t+1}}
19:     for attention head c ← 1, C do
20:       attention_c ← att_c(h_t)            {Process individual recurrences according to Eq. 9}
21:     end for
22:     rec_t ← MLP(Σ_{c=1}^{C} attention_c)  {See Section 4.2}
23:     e_t ← ⟨τ_t, a_t, r_t, z_{t+1}, rec_t⟩
24:     Store experience sample e_t
25:   end for
26:   Train Ψ and ⟨Q_i⟩_{i∈D} using all e_t   {See Fig. 1}
27: end for
```

Appendix D: Experiment Details

D.1 Computing Infrastructure

All training and test runs were performed in parallel on a computing cluster of fifteen x86_64 GNU/Linux (Ubuntu 18.04.5 LTS) machines with i7-8700 @ 3.2GHz CPU (8 cores) and 64 GB RAM. We did not use any GPU in our experiments.

D.2Hyperparameters and Neural Network Architectures

Our experiments are based on PyMARL and the code from (Rashid et al., 2020) under the Apache License 2.0. We use the default settings from the paper without further hyperparameter tuning, as well as the same neural network architectures for the agent RNNs, i.e., gated recurrent units (GRU) of (Cho et al., 2014) with 64 units, and the respective factorization operators Ψ as specified by default for each state-of-the-art baseline in Section 6. We set the loss weight α = 0.75 for CW-QMIX and OW-QMIX.

For MAPPO, we use the hyperparameters suggested in (Yu et al., 2022) for SMAC, where we set the clipping parameter to 0.1 and use an epoch count of 5. The parameter λ for generalized advantage estimation is set to 1. The centralized critic has two hidden layers of 128 units with ReLU activation, a single linear output, and conditions on agent-specific global states which concatenate the global state and the individual observation per agent. The policy network of MAPPO has a recurrent architecture similar to the local utility functions Q_i and additionally applies softmax to the output layer.

AERIAL is implemented using QMIX as factorization operator Ψ according to Fig. 1. We also experimented with QPLEX as an alternative, with no significant difference in performance. Thus, we stick with QMIX for computational efficiency due to fewer trainable parameters. The transformer has C = 4 heads c ∈ {1, …, C} with respective MLPs W_q^c, W_k^c, and W_v^c, each having one hidden layer of d_att = 64 units with ReLU activation. The three subsequent MLP layers of Line 22 in Algorithm 1 have 64 units with ReLU activation.
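The per-head computation over the joint memory representation h_t follows standard scaled dot-product self-attention (Vaswani et al., 2017). A minimal NumPy sketch, assuming single linear projections per head instead of the paper's MLPs and illustrative shapes:

```python
import numpy as np


def self_attention_head(H, Wq, Wk, Wv):
    """One self-attention head over H (shape [n_agents, d]):
    softmax(Q K^T / sqrt(d_k)) V with Q = H Wq, K = H Wk, V = H Wv."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Numerically stable row-wise softmax over the agent dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V


def multi_head(H, heads):
    # Sum the C head outputs, as done before the MLP in Line 22 of Algorithm 1.
    return sum(self_attention_head(H, Wq, Wk, Wv) for Wq, Wk, Wv in heads)
```

With zero query/key weights the attention distribution is uniform, so each agent's output becomes the mean over all agents' value projections, illustrating how the heads mix information across the per-agent recurrences h_{t,i}.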

All neural networks are trained using RMSProp with a learning rate of 0.0005.
