Introduction
We analyze a typical and straightforward object selection game, in which a speaking agent (Alice, the speaker) and a listening agent (Bob, the listener) must cooperate to accomplish a task. In each round of the game, we show Alice a target object $x$ drawn from an object space $\mathcal{X}$ and let her send a discrete-sequence message $\mathbf{m}$ to Bob. We then show Bob $c$ different objects, denoted $c_1,...,c_c\in\mathcal{X}$, one of which must be $x$. Using the message received from Alice, Bob must select the object that Alice refers to among the $c$ candidates. If Bob's selection $\bar{c}$ is correct, both Alice and Bob are rewarded. The objects are shuffled and the candidates are re-drawn at random in each round, so that the agents cannot identify objects by their order of presentation.
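The round structure described above can be sketched as follows. This is an illustrative simulation only: `speaker` and `listener` are hypothetical callables standing in for the trained agents, not the paper's actual implementations.

```python
import random

def play_round(speaker, listener, objects, num_candidates):
    """One round of the object-selection game (illustrative sketch).

    `speaker` maps a target object to a message; `listener` maps a
    message and a candidate list to the index of its chosen object.
    """
    target = random.choice(objects)
    message = speaker(target)  # Alice describes the target
    # Candidates: the target plus randomly drawn distractors
    distractors = random.sample(
        [o for o in objects if o != target], num_candidates - 1)
    candidates = distractors + [target]
    random.shuffle(candidates)  # avoid positional cues
    choice = listener(message, candidates)  # Bob picks an index
    return 1 if candidates[choice] == target else 0  # reward
```

With a perfect pair of agents (e.g., a speaker that names the object and a listener that looks the name up among the candidates), every round yields reward 1.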
In our game, each object in $\mathcal{X}$ has $N_a$ attributes (color and shape are common choices in the literature), and each attribute has $N_v$ possible values. To represent objects, similarly to the setting of [@kottur2017natural], we encode each attribute as a one-hot vector and concatenate the $N_a$ one-hot vectors into a single vector representing the object. The message delivered by Alice is a fixed-length discrete sequence $\mathbf{m}=(m_1,...,m_{N_L})$, in which each $m_i$ is drawn from a fixed-size vocabulary $V$ of meaningless symbols.
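The concatenated one-hot encoding can be written in a few lines; this is a minimal sketch of the representation described above, with `attribute_values` assumed to be the list of the object's $N_a$ attribute indices.

```python
import numpy as np

def encode_object(attribute_values, num_values):
    """Concatenate one one-hot vector per attribute: the result has
    length N_a * N_v, with exactly N_a entries set to 1."""
    vecs = []
    for v in attribute_values:
        one_hot = np.zeros(num_values)
        one_hot[v] = 1.0
        vecs.append(one_hot)
    return np.concatenate(vecs)
```

For example, with $N_a=2$ attributes and $N_v=4$ values, an object with attribute values $(2, 0)$ becomes an 8-dimensional vector with ones at positions 2 and 4.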
Our neural agents have separate modules for speaking and listening, which we name Alice and Bob. Their architectures, shown in Figure 1, are similar to those studied in [@ivan2017nips] and [@lazaridou2018iclr]. Alice first applies a multi-layer perceptron (MLP) to encode $x$ into an embedding, then feeds it to an encoding LSTM [@lstm]. The LSTM outputs pass through a softmax layer, which we use to generate the message symbols $m_1, m_2, \cdots$. Bob uses a decoding LSTM to read the message and an MLP to encode $c_1,...,c_c$ into embeddings. Bob then takes the dot product between the hidden states of the decoding LSTM and the embeddings to generate a score $s_c$ for each candidate. These scores are used to compute the cross-entropy loss when training Bob. When Alice and Bob are trained using reinforcement learning, we write $p_A(\mathbf{m}|x;\theta_A)$ and $p_B(\bar{c}|\mathbf{m}, c_1,...,c_c;\theta_B)$ for their respective policies, where $\theta_A$ and $\theta_B$ are the parameters of the two agents. When the agents are trained to play the game together, we use the REINFORCE algorithm [@REINFORCE] to maximize the expected reward under their policies, and add an entropy regularization term to encourage exploration during training, as explained in [@mnih2016asynchronous]. The gradients of the objective function $J(\theta_A,\theta_B)$ are: $$\begin{align} \nabla_{\theta_A}J &= \mathbb{E}\left[R(\bar{c},x)\nabla\log p_A(\mathbf{m}|x)\right]+\lambda_A\nabla H[p_A(\mathbf{m}|x)] \\ \nabla_{\theta_B}J &= \mathbb{E}\left[R(\bar{c},x)\nabla\log p_B(\bar{c}|\mathbf{m},c_1,...,c_c)\right]+\lambda_B\nabla H[p_B(\bar{c}|\mathbf{m},c_1,...,c_c)], \end{align}$$ where $R(\bar{c},x)=\mathbbm{1}(\bar{c},x)$ is the reward function, $H$ is the standard entropy function, and $\lambda_A, \lambda_B>0$ are hyperparameters. A formal definition of the agents can be found in Appendix C.
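In practice, the gradient estimators above are implemented by building a scalar surrogate whose gradient matches them. The sketch below shows this for a single categorical decision (the actual policies factorize over message symbols and involve LSTM parameters); `reinforce_surrogate` is a hypothetical helper, not the paper's training code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_surrogate(logits, action, reward, lam):
    """Scalar whose gradient w.r.t. `logits` matches the REINFORCE
    objective with entropy regularization:
        R * grad log p(action) + lam * grad H[p].
    Maximizing it encourages rewarded actions and exploration."""
    p = softmax(logits)
    log_p = np.log(p)
    entropy = -(p * log_p).sum()
    return reward * log_p[action] + lam * entropy
```

In an autodiff framework one would take gradients of this quantity (negated, as a loss) with respect to the policy parameters; here the arithmetic can be checked by hand, e.g. with uniform logits over 4 actions the surrogate is $\log(1/4) + \lambda \log 4$.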
Compositionality is a crucial feature of natural languages, allowing us to use small building blocks (e.g., words, phrases) to generate more complex structures (e.g., sentences), with the meaning of the larger structure being determined by the meaning of its parts [@using_language]. However, there is no consensus on how to quantitatively assess it. Besides a subjective human evaluation, topological similarity has been proposed as a possible quantitative measure [@comp_measure01].
To define topological similarity, we first define the language studied in this work as a mapping $\mathcal{L}(\cdot):\mathcal{X}\to\mathcal{M}$. We then measure the distances between pairs of objects, $\Delta_{\mathcal{X}}^{ij} = d_\mathcal{X} (x_i, x_j)$, where $d_\mathcal{X}(\cdot)$ is a distance in $\mathcal{X}$. Similarly, we compute the corresponding quantity for the associated messages $m_i = \mathcal{L}(x_i)$ in the message space $\mathcal{M}$: $\Delta_{\mathcal{M}}^{ij} = d_\mathcal{M} \left(m_i, m_j\right)$, where $d_\mathcal{M}(\cdot)$ is a distance in $\mathcal{M}$. The topological similarity $\rho$ is then defined as the correlation between these quantities across $\mathcal{X}$. Following the setup of [@lazaridou2018iclr] and [@li2019ease], we use the negative cosine similarity in the object space and the Levenshtein distance [@levenshtein1966binary] in the message space. We provide an example in Appendix B to give better intuition about this metric.
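The metric is straightforward to compute; the sketch below uses Pearson correlation over all object pairs for simplicity (rank correlation is also common in the literature) and a standard dynamic-programming Levenshtein distance.

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two symbol sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1,          # deletion
                          d[i, j - 1] + 1,          # insertion
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[-1, -1])

def topological_similarity(objects, messages):
    """rho: correlation between pairwise object distances (negative
    cosine similarity) and message distances (Levenshtein)."""
    dx, dm = [], []
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            xi = np.asarray(objects[i], dtype=float)
            xj = np.asarray(objects[j], dtype=float)
            cos = xi @ xj / (np.linalg.norm(xi) * np.linalg.norm(xj))
            dx.append(-cos)
            dm.append(levenshtein(messages[i], messages[j]))
    return np.corrcoef(dx, dm)[0, 1]
```

A perfectly compositional toy language, where each message symbol names one attribute of the concatenated one-hot object vector, attains $\rho = 1$.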
The idea of iterated learning requires that the agents in the current generation be only partially exposed to the language used in the previous generation. Even though this idea has proven effective in experiments with human participants, directly applying it to games played by neural agents is not trivial: for example, it is unclear where a preference for high-$\rho$ languages would come from in neural agents. Besides, we must carefully design an algorithm that simulates this "partial exposure" procedure, which is essential for the success of iterated learning.
As mentioned before, the learning agents' preference for high-$\rho$ languages is essential for the success of iterated learning. In language evolution, highly compositional languages are favored because they are structurally simpler and hence easier to learn [@carr2017cultural]. We believe that a similar phenomenon applies to communication between neural agents:
Hypothesis 1: High topological similarity improves the learning speed of the speaking neural agent.
We speculate that high-$\rho$ languages are easier for a neural agent to emulate than low-$\rho$ languages. Concretely, this means that Alice, when pre-trained with object-message pairs describing a high-$\rho$ language at a given generation, will more quickly learn to output the right message for each object. Intuitively, this is because the structured mapping described by a high-$\rho$ language is smoother and hence has a lower sample complexity, which makes its examples easier for the speaking agent to learn [@sample_complexity].
Hypothesis 2: High topological similarity allows the listening agent to successfully recognize more concepts using fewer samples.
We speculate that high-$\rho$ languages are also easier for a neural agent to interpret. This means that Bob, when pre-trained with message-object pairs corresponding to a high-$\rho$ language, will more quickly learn to choose the right object. Intuitively, the lower the topological similarity, the harder it is to infer unseen object-message pairs from seen examples. The more complex mapping of a low-$\rho$ language implies that more object-message pairs must be provided to describe it; the listening agent is therefore unable to generalize what it learns from one object-message pair of a low-$\rho$ language to other examples. Thus, Bob's general performance on any example will improve much faster when he is trained with pairs from a high-$\rho$ language than from a low-$\rho$ language. We provide experimental results in Section 4.1 to verify our hypotheses, and a detailed example in Appendix D to illustrate our reasoning.