| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:58:57.485413Z" |
| }, |
| "title": "Compositionality and Capacity in Emergent Languages", |
| "authors": [ |
| { |
| "first": "Abhinav", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "abhinavg@nyu.edu" |
| }, |
| { |
| "first": "Cinjon", |
| "middle": [], |
| "last": "Resnick", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "cinjon@nyu.edu" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Foerster", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "M" |
| ], |
| "last": "Dai", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "adai@google.com" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kyunghyun.cho@nyu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recent works have discussed the extent to which emergent languages can exhibit properties of natural languages, particularly compositionality. In this paper, we investigate the learning biases that affect the efficacy and compositionality in multi-agent communication, in addition to the communicative bandwidth. Our foremost contribution is to explore how the capacity of a neural network impacts its ability to learn a compositional language. We additionally introduce a set of evaluation metrics with which we analyze the learned languages. Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization. While we empirically see evidence for the bottom of this range, we curiously do not find evidence for the top part of the range and believe that this is an open question for the community.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recent works have discussed the extent to which emergent languages can exhibit properties of natural languages, particularly compositionality. In this paper, we investigate the learning biases that affect the efficacy and compositionality in multi-agent communication, in addition to the communicative bandwidth. Our foremost contribution is to explore how the capacity of a neural network impacts its ability to learn a compositional language. We additionally introduce a set of evaluation metrics with which we analyze the learned languages. Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization. While we empirically see evidence for the bottom of this range, we curiously do not find evidence for the top part of the range and believe that this is an open question for the community.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Compositional language learning in the context of multi-agent emergent communication has been extensively studied (Foerster et al., 2016; Lazaridou et al., 2017; Baroni, 2020) . These works have found that while most emergent languages do not tend to be compositional, they can be guided towards this attribute through artificial task-specific constraints (Harding Graesser et al., 2019; Lee et al., 2018; S\u0142owik et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 137, |
| "text": "(Foerster et al., 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 138, |
| "end": 161, |
| "text": "Lazaridou et al., 2017;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 162, |
| "end": 175, |
| "text": "Baroni, 2020)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 356, |
| "end": 387, |
| "text": "(Harding Graesser et al., 2019;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 388, |
| "end": 405, |
| "text": "Lee et al., 2018;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 406, |
| "end": 426, |
| "text": "S\u0142owik et al., 2020)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we focus on how a neural network, specifically a generative one, can learn a compositional language. Moreover, we ask how this can occur without task-specific constraints. To accomplish this, we first define what a language is and what we mean by compositionality. In tandem, we introduce precision and recall, two metrics that help us measure how well a generative model at large has learned a grammar from a finite set of training instances. We then use a variational autoencoder with a discrete sequence bottleneck to investigate how well the model learns a compositional language, in addition to what affects that learning. This allows us to derive residual entropy, a third metric that reliably measures compositionality in our particular environment. We use this metric to cross-validate precision and recall. (* These two authors contributed equally.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our paper is most similar to Kottur et al. (2017) , which showed that compositional language arose only when certain constraints on the agents are satisfied. While the constraints they examined were either making their models memoryless or having a minimal vocabulary in the language, we hypothesized about the importance for agents to have small capacity relative to the number of concepts to which they are exposed. Each of Verhoef et al. (2016) ; Kirby et al. (2015) ; Zaslavsky et al. (2018) examine the trade-off between expression and compression in both emergent and natural languages, in addition to how that trade-off affects the learners. We differ in that we target a specific aspect of the agent (capacity) and ask how that aspect biases the learning.", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 49, |
| "text": "Kottur et al. (2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 426, |
| "end": 447, |
| "text": "Verhoef et al. (2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 450, |
| "end": 469, |
| "text": "Kirby et al. (2015)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 472, |
| "end": 495, |
| "text": "Zaslavsky et al. (2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We consider the problem of learning an underlying language L* from a finite set of training strings randomly drawn from it: D = {s | s \u223c G*}, where G* is the minimal-length generator associated with L*. We assume |D| \u226a |L*|, and our goal is to use D to learn a language L that approximates L* as well as possible. We know that there exists an equivalent generator G for L, and so our problem becomes estimating a generator from this finite set rather than reconstructing the entire set of strings belonging to the original language L*. We cast the problem of estimating a generator G as density modeling, in which case the goal is to estimate a distribution p(s). Sampling from p(s) is equivalent to generating a string from the generator G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 1: The grid above shows five shapes and five colors. Agents with a non-compositional language can use this shared map to communicate \"Red Circle\" with only \u2308log\u2082 5\u00b2\u2309 = 5 bits. If they instead used a compositional language, it would require \u2308log\u2082 5\u2309 = 3 bits per concept, for a total of 6 bits to convey the string. On the other hand, the agent needs 25 memory slots to store the concepts in the former case but only 10 slots in the compositional case. This trade-off exemplifies the motivation for our investigation because it suggests that a key driver of compositionality in language is the capacity of an agent relative to the total number of objects in its environment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Evaluation metrics When the language is learned perfectly, any string sampled from the learned distribution p(s) must belong to L*. Also, any string in L* must be assigned a non-zero probability under p(s). Otherwise, the set of strings generated from this generator, implicitly defined via p(s), is not identical to the original language L*. This observation leads to two metrics for evaluating the quality of the estimated language with the distribution p(s), precision and recall:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\mathrm{Precision}(L^*, p) = \\frac{1}{|L|} \\sum_{s \\in L} I(s \\in L^*) \\quad (1) \\qquad \\mathrm{Recall}(L^*, p) = \\sum_{s \\in L^*} \\log p(s)", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where I(x) is the indicator function. These metrics are designed to fit any compositional structure, rather than being one-off evaluation approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our setup We simplify and assume that each of the characters in the string s \u2208 L* corresponds to an underlying concept. While the inputs are ordered according to the sequential concepts, our model encodes them using a bag-of-words (BoW) representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The speaker f_\u03b8 is parameterized using a recurrent policy which receives the sequence of concatenated one-hot input tokens of s and converts each of them to an embedding. It then runs an LSTM non-autoregressively for l timesteps, taking the flattened representation of the input embeddings as its input and linearly projecting each result to a probability distribution over {0, 1}. This results in a sequential Bernoulli distribution over l latent variables: f_\u03b8(z|s) = \u220f_{t=1}^{l} p(z_t|s; \u03b8). From this distribution, we can sample a latent string z = (z_1, . . . , z_l).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The listener g_\u03c6 receives z and uses a BoW representation to encode its tokens into its own embedding space. Taking the flattened representation of these embeddings as input, we run an LSTM for |N| time steps, each time outputting a probability distribution over the full alphabet \u03a3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "g_\u03c6(s|z) = \u220f_{j=1}^{|N|} p(s_j|z; \u03c6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To train the whole system end-to-end (Sukhbaatar et al., 2016; Mordatch and Abbeel, 2018) via backpropagation, we apply a continuous approximation to z t that depends on a learned temperature parameter \u03c4 . We use the 'straight-through' version of Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to convert the continuous distribution to a discrete distribution for each z t . The final sequence of one-hot vectors encoding z is our message, which is passed to the listener g \u03c6 .", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 62, |
| "text": "(Sukhbaatar et al., 2016;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 63, |
| "end": 89, |
| "text": "Mordatch and Abbeel, 2018)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 262, |
| "end": 281, |
| "text": "(Jang et al., 2017;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 282, |
| "end": 304, |
| "text": "Maddison et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The prior p_\u03bb encodes the message z using a BoW representation. It gives the probability of z according to the prior (binary) distribution for each z_t and is defined as: p_\u03bb(z) = \u220f_{t=1}^{l} p(z_t|\u03bb). This can be used both to compute the prior probability of a latent string and to efficiently sample from p_\u03bb using ancestral sampling. Penalizing the KL divergence between the speaker's distribution and the prior distribution encourages the emergent protocol to use latent strings that are as diverse as possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Hypotheses on compositionality Under this framework for language learning, we can make the following observations. If the length of the latent sequence l < log\u2082 |L*|, it is impossible for the model to avoid the failure case, because there will be |L*| \u2212 2^l strings in L* that cannot be generated from the trained model. Consequently, recall cannot be maximized. However, this may be difficult to check using the sample-based estimate, as the chance of sampling s \u2208 L* \\ supp(\u222b g_\u03c6(s|z) p_\u03bb(z) dz) decreases proportionally to the size of L*. This is especially true when the gap |L*| \u2212 2^l is narrow.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "When l \u2265 log\u2082 |L*|, there are three cases. The first is when there are not enough parameters \u03b8 to learn the underlying compositional grammar, in which case L* cannot be learned. The second case is when the number of parameters |\u03b8| is greater than that required to store all the training strings, i.e., |\u03b8| = O(l|D|). Here, it is highly likely for the model to overfit, as it can map each training string to a unique latent string without having to learn any of L*'s compositional structure. Lastly, when the number of parameters lies in between these two poles, we hypothesize that the model will capture the underlying compositional structure and exhibit systematic generalization (Bahdanau et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 683, |
| "end": 706, |
| "text": "(Bahdanau et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositional Language and Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Models and Learning The task is to communicate 6 concepts, each of which has 10 possible values, for a total dataset size of 10\u2076. We train the proposed VAE. We gradually decrease the number of LSTM units from the base model by a factor \u03b1 \u2208 (0, 1]. This is how we control the number of parameters (|\u03b8| and |\u03c6|). We obtain seven models from each of these by varying the length of the latent sequence l from {19, 20, 21, 22, 23, 24, 25}. These were chosen both because we wanted to show a range of bits and because we need at least 20 bits to cover the 10\u2076 strings in L* (\u2308log\u2082 10\u2076\u2309 = 20).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Evaluation: Residual Entropy Our setup allows us to design a metric by which we can check the compositionality of the learned language L by examining how the underlying concepts are described by a string. Let p be a sequence of partitions of {1, 2, . . . , l}. We define the degree of compositionality as the ratio between the variability of each concept C_i and the variability explained by a latent subsequence z[p_i] indexed by an associated partition p_i. More formally, the degree of compositionality given the partition sequence p is defined as a residual entropy", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "re(p, L, L*) = (1/|N|) \u2211_{i=1}^{|N|} H_{L*}(C_i|z[p_i]) / H_{L*}(C_i)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where there are |N| concepts by the definition of our language. When each term inside the summation is close to zero, it implies that a subsequence z[p_i] explains most of the variability of the specific concept C_i, and we consider this situation compositional. The residual entropy of a trained model is then the smallest re(p) over all possible sequences of partitions P, and spans from 0 (compositional) to 1 (non-compositional): re(L, L*) = min_{p\u2208P} re(p, L, L*). Fig. 3 shows the main findings of our research. In plot (a), we see the parameter counts at the threshold: below these values the model cannot solve the task, but above them it can. Further, observe the curve delineated by the lower-left corner of the shift from unsuccessful to successful models. This inverse relationship between bits and parameters shows that the more parameters in the model, the fewer bits it needs to solve the task. Note, however, that it could only solve the task with fewer bits by forming a non-compositional code, suggesting that higher-parameter models are able to do so while lower-parameter ones cannot. Observe further that all of our models above the minimum threshold (72,400 parameters) have the capacity to learn a compositional code. This is shown by the perfect training accuracy achieved by all of those models in plot (a) for 24 bits and by the perfect compositionality (zero entropy) in plot (b) for 24 bits. Together with the above, this validates that learning compositional codes requires less capacity than learning non-compositional codes. Plot (c) confirms our hypothesis that large models can memorize the entire dataset. The 24-bit model with 971,400 parameters achieves a train accuracy of 1.0 and a validation accuracy of 0.0. Cross-validating this with plots (d) and (g), we find that a member of the same parameter class is non-compositional and that there is one that achieves unusually low recall. We verified that these are all the same seed, which shows that the agents in this model are memorizing the dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 475, |
| "end": 481, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Plots (b) and (e) show that our compositionality metrics pass two sanity checks -high recall and perfect entropy can only be achieved with a channel that is sufficiently large (i.e. 24 bits) to allow for a compositional latent representation. Plot (f) shows that while the capacity does not affect the ability to learn a compositional language across the model range, it does change the learnability. Here we find that smaller models can fail to solve the task for any bandwidth, which coincides with literature suggesting a link between overparameterization and learnability (Li and Liang, 2018; Du et al., 2019) . Finally, as expected, we find that no model learns to solve the task with < 20 bits, validating that the minimum required number of bits for learning a language of size |L| is \u2308log\u2082 |L|\u2309. We also see that no model learns to solve it for 20 bits, which is likely due to optimization difficulties.", |
| "cite_spans": [ |
| { |
| "start": 576, |
| "end": 596, |
| "text": "(Li and Liang, 2018;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 597, |
| "end": 613, |
| "text": "Du et al., 2019)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We first confirm the effectiveness of training by observing that almost all the models achieve perfect precision (Fig. 2 (a) ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 113, |
| "end": 124, |
| "text": "(Fig. 2 (a)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "), implying that L \u2286 L*,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where L is the language learned by the model. This occurs even with our learning objective, which encourages the model to capture all training strings rather than to focus on only a few. A natural follow-up question is how large L* \\ L is. We measure this with recall in Fig. 2 (b) , which shows a clear phase transition according to the model capacity when l \u2265 22. This agrees with what we saw in Fig. 3 and is equivalent to saying |L* \\ L| \u2248 0 at a value that is close to our predicted boundary of l = \u2308log\u2082 10\u2076\u2309 = 20. We attribute this gap to the difficulty of learning a perfectly-parameterized neural network.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 276, |
| "end": 286, |
| "text": "Fig. 2 (b)", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 403, |
| "end": 409, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "These results clearly confirm the first part of our hypothesis -the latent sequence length must be at least as large as log\u2082 |L*|. They also confirm that there is a lower bound on the number of parameters above which this model can successfully learn the underlying language. We have not been able to verify the upper bound in our experiments, which may require either a more (computationally) extensive set of experiments with even more parameters or a better theoretical understanding of the inherent biases behind learning with this architecture, such as from recent work on overparameterized models (Belkin et al., 2019; Nakkiran et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 600, |
| "end": 621, |
| "text": "(Belkin et al., 2019;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 622, |
| "end": 644, |
| "text": "Nakkiran et al., 2020)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This paper opens the door for a vast amount of follow-up research. All our models were sufficiently large to represent the compositional structure of the language when given sufficient bandwidth. Furthermore, while large models did overfit, this was an exception rather than the rule. We hypothesize that this is due to the large number of examples in our language, which forces the model to generalize, but note that there are likely additional biases at play that warrant further investigation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Marco Baroni and Angeliki Lazaridou for their comments on an earlier version of the paper. We would also like to thank the anonymous reviewers for giving insightful feedback that in turn enhanced this work, particularly reviewer two for their thoroughness. Special thanks to Adam Roberts, Doug Eck, Mohammad Norouzi, and Jesse Engel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Systematic generalization: What is required and can it be learned", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Shikhar", |
| "middle": [], |
| "last": "Murty", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Noukhovitch", |
| "suffix": "" |
| }, |
| { |
| "first": "Thien", |
| "middle": [], |
| "last": "Huu Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Harm", |
| "middle": [], |
| "last": "De Vries", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. 2019. Systematic generaliza- tion: What is required and can it be learned? In International Conference on Learning Representa- tions.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Linguistic generalization and compositionality in modern artificial neural networks", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences", |
| "volume": "375", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1098/rstb.2019.0307" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni. 2020. Linguistic generalization and compositionality in modern artificial neural net- works. Philosophical Transactions of the Royal So- ciety B: Biological Sciences, 375:20190307.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Reconciling modern machinelearning practice and the classical bias-variance trade-off", |
| "authors": [ |
| { |
| "first": "Mikhail", |
| "middle": [], |
| "last": "Belkin", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Siyuan", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumik", |
| "middle": [], |
| "last": "Mandal", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "116", |
| "issue": "", |
| "pages": "15849--15854", |
| "other_ids": { |
| "DOI": [ |
| "10.1073/pnas.1903070116" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. 2019. Reconciling modern machine- learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849-15854.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Gradient descent provably optimizes over-parameterized neural networks", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [ |
| "S" |
| ], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiyu", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Barnabas", |
| "middle": [], |
| "last": "Poczos", |
| "suffix": "" |
| }, |
| { |
| "first": "Aarti", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. 2019. Gradient descent provably optimizes over-parameterized neural networks. In Interna- tional Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Learning to communicate with deep multi-agent reinforcement learning", |
| "authors": [ |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Foerster", |
| "suffix": "" |
| }, |
| { |
| "first": "Ioannis Alexandros", |
| "middle": [], |
| "last": "Assael", |
| "suffix": "" |
| }, |
| { |
| "first": "Nando", |
| "middle": [], |
| "last": "de Freitas", |
| "suffix": "" |
| }, |
| { |
| "first": "Shimon", |
| "middle": [], |
| "last": "Whiteson", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "29", |
| "issue": "", |
| "pages": "2137--2145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 29, pages 2137- 2145. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Emergent linguistic phenomena in multi-agent communication games", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Harding Graesser", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "EMNLP-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "3691--3701", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-1384" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Harding Graesser, Kyunghyun Cho, and Douwe Kiela. 2019. Emergent linguistic phenomena in multi-agent communication games. In EMNLP- IJCNLP, pages 3691-3701, Hong Kong, China. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Categorical reparameterization with gumbel-softmax", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Jang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shixiang", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Poole", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Cate- gorical reparameterization with gumbel-softmax. In International Conference on Learning Representa- tions.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Compression and communication in the cultural evolution of linguistic structure", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Kirby", |
| "suffix": "" |
| }, |
| { |
| "first": "Monica", |
| "middle": [], |
| "last": "Tamariz", |
| "suffix": "" |
| }, |
| { |
| "first": "Hannah", |
| "middle": [], |
| "last": "Cornish", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenny", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Cognition", |
| "volume": "141", |
| "issue": "", |
| "pages": "87--102", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.cognition.2015.03.016" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Kirby, Monica Tamariz, Hannah Cornish, and Kenny Smith. 2015. Compression and communica- tion in the cultural evolution of linguistic structure. Cognition, 141:87-102.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Natural language does not emerge 'naturally' in multi-agent dialog", |
| "authors": [ |
| { |
| "first": "Satwik", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [], |
| "last": "Moura", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2962--2967", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D17-1321" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satwik Kottur, Jos\u00e9 Moura, Stefan Lee, and Dhruv Ba- tra. 2017. Natural language does not emerge 'natu- rally' in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2962-2967. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Multi-Agent Cooperation and the Emergence of (Natural) Language", |
| "authors": [ |
| { |
| "first": "Angeliki", |
| "middle": [], |
| "last": "Lazaridou", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Peysakhovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-Agent Cooperation and the Emergence of (Natural) Language. In Interna- tional Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Emergent translation in multi-agent communication", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. 2018. Emergent translation in multi-agent communication. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", |
| "authors": [ |
| { |
| "first": "Yuanzhi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingyu", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Advances in Neural Information Processing Systems 31", |
| "volume": "", |
| "issue": "", |
| "pages": "8157--8166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuanzhi Li and Yingyu Liang. 2018. Learning over- parameterized neural networks via stochastic gradi- ent descent on structured data. In Advances in Neu- ral Information Processing Systems 31, pages 8157- 8166. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The concrete distribution: A continuous relaxation of discrete random variables", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [ |
| "J" |
| ], |
| "last": "Maddison", |
| "suffix": "" |
| }, |
| { |
| "first": "Andriy", |
| "middle": [], |
| "last": "Mnih", |
| "suffix": "" |
| }, |
| { |
| "first": "Yee Whye", |
| "middle": [], |
| "last": "Teh", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relax- ation of discrete random variables. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Emergence of grounded compositional language in multi-agent populations", |
| "authors": [ |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Mordatch", |
| "suffix": "" |
| }, |
| { |
| "first": "Pieter", |
| "middle": [], |
| "last": "Abbeel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In AAAI Conference on Artificial Intel- ligence.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Deep double descent: Where bigger models and more data hurt", |
| "authors": [ |
| { |
| "first": "Preetum", |
| "middle": [], |
| "last": "Nakkiran", |
| "suffix": "" |
| }, |
| { |
| "first": "Gal", |
| "middle": [], |
| "last": "Kaplun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yamini", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Tristan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Boaz", |
| "middle": [], |
| "last": "Barak", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. 2020. Deep double descent: Where bigger models and more data hurt. In International Conference on Learning Rep- resentations.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Exploring structural inductive biases in emergent communication. arXiv", |
| "authors": [ |
| { |
| "first": "Agnieszka", |
| "middle": [], |
| "last": "S\u0142owik", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhinav", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hamilton", |
| "suffix": "" |
| }, |
| { |
| "first": "Mateja", |
| "middle": [], |
| "last": "Jamnik", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [ |
| "B" |
| ], |
| "last": "Holden", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Pal", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agnieszka S\u0142owik, Abhinav Gupta, William L. Hamil- ton, Mateja Jamnik, Sean B. Holden, and Christo- pher Pal. 2020. Exploring structural inductive biases in emergent communication. arXiv, 2002.01335.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning multiagent communication with backpropagation", |
| "authors": [ |
| { |
| "first": "Sainbayar", |
| "middle": [], |
| "last": "Sukhbaatar", |
| "suffix": "" |
| }, |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Szlam", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "2244--2252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2016. Learning multiagent communication with backpropagation. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, NeurIPS, pages 2244-2252. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Iconicity and the emergence of combinatorial structure in language", |
| "authors": [ |
| { |
| "first": "Tessa", |
| "middle": [], |
| "last": "Verhoef", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Kirby", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "De Boer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Cognitive Science", |
| "volume": "40", |
| "issue": "8", |
| "pages": "1969--1994", |
| "other_ids": { |
| "DOI": [ |
| "10.1111/cogs.12326" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tessa Verhoef, Simon Kirby, and Bart de Boer. 2016. Iconicity and the emergence of combinatorial struc- ture in language. Cognitive Science, 40(8):1969- 1994.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Efficient compression in color naming and its evolution", |
| "authors": [ |
| { |
| "first": "Noga", |
| "middle": [], |
| "last": "Zaslavsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Kemp", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Regier", |
| "suffix": "" |
| }, |
| { |
| "first": "Naftali", |
| "middle": [], |
| "last": "Tishby", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "115", |
| "issue": "", |
| "pages": "7937--7942", |
| "other_ids": { |
| "DOI": [ |
| "10.1073/pnas.1800521115" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noga Zaslavsky, Charles Kemp, Terry Regier, and Naf- tali Tishby. 2018. Efficient compression in color naming and its evolution. Proceedings of the Na- tional Academy of Sciences, 115(31):7937-7942.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Histograms showing precision, recall (defined in \u00a7 2), and entropy (defined in \u00a7 3) over the test set. We show results for bits 19 to 25 and parameter range 72k to 1534k (details in \u00a7 3). Each bit/parameter combination is trained for 10 seeds over 200k steps.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Main results showing best and worst performances of the proposed metrics over 10 seeds. See Section 3.1 for detailed analysis. Panels (a) and (f) show the accuracy of the training data, (b) and (d) show entropy, (e) and (g) show recall over the test data, and (c) plots the max difference in accuracy between training and test.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| } |
| } |
| } |
| } |