# Introduction

::: center
Overview
::: Our work targets knowledge reuse between agents in cooperative MARL, where none of the agents in the environment is an expert. In this section, we provide a story-level overview of our main idea; the motivating scenario is illustrated in Fig. [1](#fig:Overview){reference-type="ref" reference="fig:Overview"}. In our setting, all agents act in the environment and update their own policy parameters with local rewards from the environment. The actions executed by the agents are dictated by their own policy parameters. On this basis, we explore a novel knowledge transfer framework, PAT. In our framework, each agent has two acting modes: student mode and self-learning mode. Before an agent takes an action, its student actor-critic module decides its acting mode from its hidden state. This is not an ad-hoc design for a specific domain: agents should learn when to learn from other agents. In self-learning mode, an agent acts on its independently learned behavioral knowledge, represented by its behavior policy parameters; in this mode, its actions are independent of the other agents' behavioral knowledge. All agents in self-learning mode are trained individually, end to end, with the Deep Deterministic Policy Gradient [@lillicrap2015continuous] algorithm, using an actor network and a critic network. In student mode, when there are more than two agents in the system, a student agent receives multiple pieces of advice from other agents. This raises a new problem, teacher selection, because not all teachers' knowledge is useful to the student agent. Existing frameworks sidestep this problem and therefore have a key limitation when the number of agents is large, which also makes model transfer difficult. We apply a soft attention mechanism to select teachers' knowledge.
The Attention Teacher Selector solves this problem by selecting contextual information from the teachers' learning information and computing weights over the teachers' knowledge. Viewed from a different angle, our attentional module selectively transforms the teachers' learning information toward solving the student's problem. The attentional selection approach is effective in both multi-task and joint-task scenarios. We make a few assumptions about agents' identities to support our framework. When an agent chooses student mode at time step $t$, the other agents in the environment automatically become its teachers and provide their behavior policies and learning knowledge to the student agent. Our parallel setting means that an agent in student mode can simultaneously be a teacher of other agents. Agent $i$ may be unfamiliar with its own observation $m^i_t$ while, because of its past trajectory, being familiar with agent $j$'s observation $m^j_t$; in that case, agent $i$ can be agent $j$'s teacher. At such a time step, agent $i$ is in student mode yet also acts as a teacher, transferring its knowledge to another agent. The core idea is that agents have different learning experiences: a student agent may be confident in other states, so its behavioral knowledge can help other student agents. Our teacher selector module is designed to determine the appropriate teachers and to transform the teachers' local behavioral knowledge into an advising action for the student. Moreover, our attention mechanism quantifies the reliability of teachers, so our scenarios do not require good-enough agents (experts). In summary, because of their different learning experiences, agents in a cooperative team are good at different tasks or different parts of a joint task; knowledge transfer is a framework that helps an agent solve an unfamiliar task with experienced agents' learning knowledge. PAT's training and architecture details are presented in the next section.
# Method

This section introduces our knowledge transfer approach, with design details of the overall structure and all training protocols in our framework. In our framework, each agent has two actor-critic models and an attention mechanism supporting the two acting modes. Unlike ordinary individual agent learning, after receiving an observation from the environment, an agent in our framework must choose its acting mode before taking an action. At time step $t$, agent $i$ reprocesses the observation from the environment with a hidden LSTM (or RNN) unit, which integrates information from $i$'s observation history. The LSTM unit $l^i$ outputs the agent's observation encoding $m^i_t$, which represents the agent's hidden state. Here, $k$ is a scaling variable that sets the time period covered by the hidden state; we adjust $k$ for different types of games. $$\begin{equation} l^i : (o^i_{t-k}, a^i_{t-k}, ..., o^i_t) \rightarrow m^i_t \end{equation}$$ Next, agent $i$'s student actor network takes this step's memorized observation $m^i_t$ as input and outputs agent $i$'s acting mode. Considering the efficiency of information exchange and the communication cost, the student actor decides agent $i$'s confidence at time step $t$. If $i$ is sufficiently confident about $m^i_t$, the student actor chooses self-learning mode; otherwise, it chooses student mode and sends an advice request to the other agents. The student actor and student critic form a deep deterministic policy gradient model. The student actor network outputs the probability of choosing student mode; when this probability exceeds a threshold, the agent deterministically chooses student mode. The threshold is a variable that depends on the type of game. Together, the student actor and student critic form the acting-mode model that determines whether agent $i$ becomes a student and asks the teacher agents for advice.
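As an illustration, the mode decision can be sketched as follows. The tanh recurrent encoder, the logistic student actor, and all dimensions here are hypothetical toy stand-ins for the LSTM unit $l^i$ and the student actor network, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_history(obs_history, W, b):
    # Toy recurrent encoder standing in for the LSTM unit l^i:
    # fold the last k observations into one hidden state m_t.
    m = np.zeros(W.shape[0])
    for x in obs_history:
        m = np.tanh(W @ np.concatenate([x, m]) + b)
    return m

def student_actor(m, w, threshold=0.5):
    # Deterministic mode choice: sigmoid probability of student mode,
    # compared against a game-dependent threshold.
    p_student = 1.0 / (1.0 + np.exp(-w @ m))
    return ("student" if p_student > threshold else "self-learning"), p_student

obs_dim, hid_dim, k = 4, 8, 3               # illustrative sizes
W = rng.normal(scale=0.1, size=(hid_dim, obs_dim + hid_dim))
b = np.zeros(hid_dim)
history = [rng.normal(size=obs_dim) for _ in range(k)]

m_t = encode_history(history, W, b)          # m^i_t
mode, p = student_actor(m_t, rng.normal(size=hid_dim))
print(mode, float(p))
```

In the full framework, the threshold and the horizon $k$ are tuned per game, and the actor's parameters are learned rather than sampled.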
We train the student actor-critic with a student reward ${\widetilde{r}}^i_t$. $$\begin{equation} \widetilde{r}_{t}^{i}=V\left(m^i_t; \theta'^{i}_{t}\right)-V\left(m^i_t; \theta^i_{t}\right) \end{equation}$$ Here, $\theta'^{i}_{t}$ and $\theta^i_{t}$ are agent $i$'s policy parameters in student mode and self-learning mode, respectively. The student reward measures the gain in the agent's learning performance from student mode. Sharing the student actor-critic network parameters lets this module learn effectively in the environment and extend easily to other settings. In our experiments, the student actor-critic is trained with the trained Attention Teacher Selector. Agent $i$'s student critic is updated to minimize the student loss function, where $\mathcal{\widetilde{R}}$ is agent $i$'s student policy transition set: $$\begin{equation} \begin{split} \mathcal{L}\left(\theta^{\tilde{Q}}\right)=\mathbb{E}_{m_t, w_t, \widetilde{r}_{t}, m_{t+1}\sim\mathcal{\widetilde{R}}}\left[\left(\tilde{y}_t-\widetilde{Q}\left(m_t, w_t | \theta^{\tilde{Q}}\right)\right)^{2}\right],\\ \tilde{y}_t=\tilde{r}_t+\gamma \widetilde{Q}(m_{t+1}, w^{\prime} | \theta^{\widetilde{Q}^{\prime}})|_{w^{\prime}=\widetilde{\mu}^{\prime}\left(m_{t+1} | \theta^{\widetilde{\mu}^{\prime}}\right)} \end{split} \end{equation}$$ The student policy network is updated by gradient ascent with the following gradient: $$\begin{equation} {\nabla}_{\theta^{\widetilde{\mu}}}J=\mathbb{E}_{m_t, w_t \sim\mathcal{\widetilde{R}}}\left[\nabla_{w} \widetilde{Q}\left(m_t, w_t | {\theta}^{\widetilde{Q}}\right)|_{w_t=\widetilde{\mu}(m_t)} {\nabla}_{\theta^{\widetilde{\mu}}} \widetilde{\mu}\left(m_t | {\theta}^{\widetilde{\mu}}\right)\right] \end{equation}$$ Here, $\widetilde{\mu}$ is agent $i$'s student policy, parameterized by $\theta^{\widetilde{\mu}}$.

::: center
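The student reward is simply a difference of value estimates of the same hidden state under the two parameter sets. A minimal numeric sketch, with a hypothetical linear value function in place of the learned critic:

```python
import numpy as np

def value(m, theta):
    # Hypothetical linear value estimate V(m; theta) of the hidden state.
    return float(theta @ m)

def student_reward(m_t, theta_student, theta_self):
    # r~^i_t = V(m^i_t; theta') - V(m^i_t; theta):
    # the performance gain attributed to acting in student mode.
    return value(m_t, theta_student) - value(m_t, theta_self)

m = np.array([0.5, -1.0, 2.0])               # m^i_t (toy)
theta_student = np.array([0.2, 0.1, 0.3])    # theta'^i_t (toy)
theta_self = np.array([0.1, 0.1, 0.1])       # theta^i_t  (toy)
print(student_reward(m, theta_student, theta_self))  # positive: student mode helped
```

A positive reward drives the student actor toward requesting advice in similar hidden states; a negative one drives it back toward self-learning mode.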
PAT Architecture
::: Inspired by the similarity between source and target tasks in transfer learning, we use an attention mechanism to evaluate the task similarity between the student and the teachers, and the teachers' confidence in the student's state. Each agent's Attention Teacher Selector in student mode therefore selects advice from teachers based on their similarity and confidence. The main idea behind our knowledge transfer approach is to learn the student mode by selectively paying attention to policy advice from the other agents in the cooperative team. Fig. [3](#fig:Attention){reference-type="ref" reference="fig:Attention"} illustrates the main components of our attention mechanism. We now describe the Attention Teacher Selector in an agent's student mode. The Attention Teacher Selector (ATS) is a soft attention mechanism implemented as a differentiable query-key-value model [@graves2014neural; @oh2016control]. After the student actor of student agent $i \in \mathcal{G}$ computes the memorized observation at time step $t$ and chooses student mode, the ATS receives the encoded hidden state $m^i_t$. Then, from the other agents in the team acting as teachers, the ATS receives the teachers' encoded learning histories $h^j_t = l^j(o^j_1, a^j_1, ..., o^j_t)$ and encoded policy parameters $\theta^j$. The ATS computes a query $Q^i_t = W_Q m^i_t$ as the student query vector, a key $K^j_t = W_K h^j_t$ as the teacher key vector, and a value $V^j_t = W_V \theta^j$ as the teacher policy value vector, where $W_K, W_Q$ and $W_V$ are attentional learning parameters. After the ATS receives the key-value pairs $(K^j, V^j)$ from all teachers $j \in \mathcal{G}$, the attention weight $\alpha^{ij}$ is assigned by passing the teacher key vector and the student query vector into a softmax over the teachers: $$\begin{equation} \alpha^{ij} = softmax \left(\frac{Q^{i}K^{j}}{\sqrt{D_K}}\right) \end{equation}$$ Here, $D_K$ is the dimension of teacher $j$'s key vector; the scaling is used to counteract vanishing gradients (Vaswani et al. 2017).
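The weight computation above can be sketched in a few lines. The random projections and dimensions below are illustrative stand-ins for the learned parameters $W_Q, W_K$ and the encodings $m^i_t, h^j_t$:

```python
import numpy as np

rng = np.random.default_rng(1)

def attention_weights(m_i, teacher_histories, W_Q, W_K):
    # One query from the student, one key per teacher,
    # then a scaled-dot-product softmax over the teachers.
    q = W_Q @ m_i                                # Q^i = W_Q m^i_t
    keys = np.stack([W_K @ h for h in teacher_histories])  # K^j = W_K h^j_t
    scores = keys @ q / np.sqrt(keys.shape[1])   # Q^i K^j / sqrt(D_K)
    e = np.exp(scores - scores.max())            # numerically stable softmax
    return e / e.sum()                           # alpha^{ij}, sums to 1

d_m, d_k, n_teachers = 6, 4, 3                   # illustrative sizes
W_Q = rng.normal(size=(d_k, d_m))
W_K = rng.normal(size=(d_k, d_m))
m_i = rng.normal(size=d_m)                       # student hidden state
histories = [rng.normal(size=d_m) for _ in range(n_teachers)]

alpha = attention_weights(m_i, histories, W_Q, W_K)
print(alpha, alpha.sum())
```

Teachers whose learning history resembles the student's current hidden state receive larger weights; the $\sqrt{D_K}$ scaling keeps the softmax inputs in a well-conditioned range.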
The final policy advice is a weighted sum followed by a linear transformation: $$\begin{equation} v^i = W_T\sum_{j \neq i} \alpha^{ij} V^{j} \end{equation}$$ Here, $W_T$ is a learning parameter for policy-parameter decoding. Beyond the single attention head, we use simple multi-head attention, with one set of learning parameters $(W_K, W_Q, W_V)$ per head, to aggregate advice from different representation subspaces. In addition, attention-head dropout is applied to improve the effectiveness of our attention mechanism. Finally, student agent $i$ obtains its action at this time step with the policy parameters produced by the Attention Teacher Selector: $$\begin{equation} \widetilde{a}^i_t = v^i(m^i_t) \end{equation}$$ In our experiments, the attention parameters $(W_K, W_Q, W_V)$ are shared across all agents, because the knowledge transfer process is similar across all student-teacher pairs, while different observations induce different teacher weight vectors. This setting helps our approach learn more efficiently and makes our model easy to extend to different settings, such as a larger number of agents or a different environment. In this work, we consider scenarios where the other agents' learning experience is useful to a student agent. Feeding the student's observation information and the teachers' learning experience into our attention mechanism selects an action for the student agent based on the other agents' behavioral policies. This module is an end-to-end knowledge transfer method without any decentralized learning-parameter sharing.

::: center
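The aggregation step blends teacher value vectors by their attention weights and decodes the result. A minimal sketch with toy dimensions; $W_T$ is set to the identity purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def policy_advice(alpha, teacher_values, W_T):
    # v^i = W_T * sum_j alpha^{ij} V^j :
    # attention-weighted blend of teacher policy values,
    # followed by a linear decoding into the student's policy space.
    blended = sum(a * v for a, v in zip(alpha, teacher_values))
    return W_T @ blended

d_v = 5                                     # illustrative value dimension
alpha = np.array([0.7, 0.2, 0.1])           # weights from the ATS softmax
values = [rng.normal(size=d_v) for _ in range(3)]  # V^j per teacher (toy)
W_T = np.eye(d_v)                           # identity decoding for illustration

v_i = policy_advice(alpha, values, W_T)
print(v_i.shape)
```

With multiple heads, each head would run this aggregation with its own $(W_K, W_Q, W_V)$ and the per-head outputs would be combined before decoding.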
Attention based Knowledge Selection
::: If agent $i$'s student actor chooses self-learning mode, the student actor sends $i$'s encoded hidden state $m^i_t$ to the actor network. In self-learning mode, agents learn as ordinary individual agents. Each agent's policy in self-learning mode is trained independently with the DDPG [@lillicrap2015continuous] algorithm. Agent $i$'s critic network is updated by the TD error, where $\mathcal{R}$ is agent $i$'s transition set: $$\begin{equation} \begin{split} \mathcal{L}\left(\theta^{Q}\right)=\mathbb{E}_{m_t, a_t, r_t, m_{t+1} \sim \mathcal{R}}\left[\left(y_t-Q\left(m_t, a_t | \theta^{Q}\right)\right)^{2}\right],\\ y_t=r_t+\left.\gamma Q\left(m_{t+1}, a^{\prime} | \theta^{Q^{\prime}}\right)\right|_{a^{\prime}=\mu^{\prime}\left(m_{t+1} | \theta^{\mu^{\prime}}\right)} \end{split} \end{equation}$$ The policy gradient of agent $i$'s actor network can be derived as: $$\begin{equation} \nabla_{\theta^{\mu}} J=\mathbb{E}_{m_t, a_t \sim \mathcal{R}}\left[\nabla_{a} Q\left(m_t, a_t | \theta^{Q}\right)|_{a_t=\mu(m_t)} \nabla_{\theta^{\mu}} \mu\left(m_t | \theta^{\mu}\right)\right]. \end{equation}$$ In games with a discrete action space, we use in self-learning mode the modified discrete version of DDPG suggested by [@lowe2017multi] for the agents' actor-critic networks. Agents in self-learning mode update their actor networks using $$\begin{equation} \nabla_{\theta^{\mu}} J=\mathrm{E}_{m_t, a_t \sim \mathcal{R}}\left[\nabla_{a} Q(m_t, a_t) \nabla_{\theta^{\mu}} a_t\right] \end{equation}$$ Our framework is thus adapted to both continuous and discrete action spaces.
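The self-learning critic update minimizes a mean squared TD error against a target computed with the target actor and target critic. A numeric sketch with hypothetical linear networks; in practice $Q$, $Q'$, $\mu'$ are neural networks trained by gradient descent:

```python
import numpy as np

def td_target(r_t, m_next, gamma, target_critic, target_actor):
    # y_t = r_t + gamma * Q'(m_{t+1}, a') with a' = mu'(m_{t+1})
    a_next = target_actor(m_next)
    return r_t + gamma * target_critic(m_next, a_next)

def critic_loss(batch, gamma, critic, target_critic, target_actor):
    # Mean squared TD error over a sampled batch of transitions
    # (m_t, a_t, r_t, m_{t+1}) from the replay set R.
    errs = []
    for (m, a, r, m_next) in batch:
        y = td_target(r, m_next, gamma, target_critic, target_actor)
        errs.append((y - critic(m, a)) ** 2)
    return float(np.mean(errs))

# Toy linear critic/actor purely for illustration (not the learned networks).
critic = lambda m, a: float(m @ a)
target_critic = critic
target_actor = lambda m: 0.5 * m

m0, a0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
m1 = np.array([0.0, 2.0])
batch = [(m0, a0, 1.0, m1)]
print(critic_loss(batch, 0.9, critic, target_critic, target_actor))
```

The same target structure appears in the student critic's loss, with the mode decision $w_t$ in place of the environment action $a_t$ and the student reward in place of $r_t$.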