| { |
| "paper_id": "Q18-1037", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:23.105962Z" |
| }, |
| "title": "Planning, Inference and Pragmatics in Sequential Language Games", |
| "authors": [ |
| { |
| "first": "Fereshte", |
| "middle": [], |
| "last": "Khani", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "fereshte@stanford.edu" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "ngoodman@stanford.edu" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "pliang@cs.stanford.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We study sequential language games in which two players, each with private information, communicate to achieve a common goal. In such games, a successful player must (i) infer the partner's private information from the partner's messages, (ii) generate messages that are most likely to help with the goal, and (iii) reason pragmatically about the partner's strategy. We propose a model that captures all three characteristics and demonstrate their importance in capturing human behavior on a new goal-oriented dataset we collected using crowdsourcing.", |
| "pdf_parse": { |
| "paper_id": "Q18-1037", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We study sequential language games in which two players, each with private information, communicate to achieve a common goal. In such games, a successful player must (i) infer the partner's private information from the partner's messages, (ii) generate messages that are most likely to help with the goal, and (iii) reason pragmatically about the partner's strategy. We propose a model that captures all three characteristics and demonstrate their importance in capturing human behavior on a new goal-oriented dataset we collected using crowdsourcing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Human communication is extraordinarily rich. People routinely choose what to say based on their goals (planning), figure out the state of the world based on what others say (inference), all while taking into account that others are strategizing agents too (pragmatics). All three aspects have been studied in both the linguistics and AI communities. For planning, Markov Decision Processes and their extensions can be used to compute utility-maximizing actions via forward-looking recurrences (e.g., Vogel et al. (2013a) ). For inference, model-theoretic semantics (Montague, 1973) provides a mechanism for utterances to constrain possible worlds, and this has been implemented recently in semantic parsing (Matuszek et al., 2012; Krishnamurthy and Kollar, 2013) . Finally, for pragmatics, the cooperative principle of Grice (1975) can be realized by models in which a speaker simulates a listener-e.g., Franke (2009) and Frank and Goodman (2012) . Planning: Let me first try square, which is just one possibility.", |
| "cite_spans": [ |
| { |
| "start": 500, |
| "end": 520, |
| "text": "Vogel et al. (2013a)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 565, |
| "end": 581, |
| "text": "(Montague, 1973)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 707, |
| "end": 730, |
| "text": "(Matuszek et al., 2012;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 731, |
| "end": 762, |
| "text": "Krishnamurthy and Kollar, 2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 819, |
| "end": 831, |
| "text": "Grice (1975)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 904, |
| "end": 917, |
| "text": "Franke (2009)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 922, |
| "end": 946, |
| "text": "Frank and Goodman (2012)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Inference: The square's letter must be B.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Pragmatics: The square's digit cannot be 2. Figure 1 : A game of InfoJigsaw played by two human players. One of the players (P letter ) only sees the letters, while the other one (P digit ) only sees the digits. Their goal is to identify the goal object, B2, by exchanging a few words. The clouds show the hypothesized role of planning, inference, and pragmatics in the players' choice of utterances. In this game, the bottom object is the goal (position (1, 3)).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 52, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There have been a few previous efforts in the language games literature to combine the three aspects. proposed a model of communication between a questioner and an answerer based on only one round of question answering. Vogel et al. (2013b) proposed a model of two agents playing a restricted version of the game from the Cards Corpus (Potts, 2012) , where the agents only communicate once. 1 In this work, we seek to capture all three aspects in a single, unified framework which allows for multiple rounds of communication.", |
| "cite_spans": [ |
| { |
| "start": 220, |
| "end": 240, |
| "text": "Vogel et al. (2013b)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 335, |
| "end": 348, |
| "text": "(Potts, 2012)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 391, |
| "end": 392, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Specifically, we study human communication in a sequential language game in which two players, each with private knowledge, try to achieve a common goal by talking. We created a particular sequential language game called InfoJigsaw (Figure 1 ). In InfoJigsaw, there is a set of objects with public properties (shape, color, position) and private properties (digit, letter). One player (P letter ) can only see the letters, while the other player (P digit ) can only see the digits. The two players wish to identify the goal object, which is uniquely defined by a letter and digit. To do this, the players take turns talking; to encourage strategic language, we allow at most two English words at a time. At any point, a player can end the game by choosing an object.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 232, |
| "end": 241, |
| "text": "(Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Even in this relatively constrained game, we can see the three aspects of communication at work. As Figure 1 shows, in the first turn, since P letter knows that the game is multi-turn, she simply says square; if the other player does not click on the square, she can try the bottom circle in the next turn (planning). In the second turn, P digit infers from square that the square's letter is probably B (inference). As the digit on the square is not a 2, she says circle. Finally, P letter infers that digits of circles are 2, and in addition she infers from circle that the digit on the square is not a 2 as otherwise, P digit would have clicked on it (pragmatics). Therefore, she correctly clicks on (1,3).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 108, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose a model that captures planning, inference, and pragmatics for sequential language games, which we call PIP. Planning recurrences look forward, inference recurrences look back, and pragmatics recurrences look to simpler interlocutors' model. The principal challenge is to integrate all three types in a coherent way; we present a \"two-dimensional\" system of recurrences to capture this. Our recurrences bottom out in very simple, literal semantics, (e.g., context-independent meaning of circle), and we rely on the structure of recurrences to endow words with their rich contextdependent meaning. As a result, our model is very parsimonious and only has four (hyper)parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As our interest is in modeling human communication in sequential language games, we evaluate PIP on its ability to predict how humans play In-foJigsaw. 2 We paired up workers on Amazon Mechanical Turk to play InfoJigsaw, and collected a total of 1680 games. Our findings are as follows: (i) PIP obtains higher log-likelihood than a baseline that chooses actions to convey maximum information in each round; (ii) PIP obtains higher loglikelihood than ablations that remove the pragmatic or the planning components, supporting their importance in communication; (iii) PIP is better than an ablation with a truncated inference component that forgets the distant past only for longer games, but worse for shorter games. The overall conclusion is that by combining a very simple, contextindependent literal semantics with an explicit model of planning, inference, and pragmatics, PIP obtains rich context-dependent meanings that correlate with human behavior.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 153, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In a sequential language game, there are two players who have a shared world state w. In addition, each player j \u2208 {+1, \u22121} has a private state s j . At each time step t = 1, 2, . . . , the active player j(t) = 2(t mod 2) \u2212 1 (which alternates) chooses an action (including speaking) a t based on its policy \u03c0 j(t) (a t | w, s j(t) , a 1:t\u22121 ). Importantly that player j(t) can see her own private state s j(t) , but not the partner's s \u2212j(t) . At the end of the game (defined by a terminating action), both players receive utility U (w, s +1 , s \u22121 , a 1:t ) \u2208 R. The utility consists of a penalty if players did not reach the goal and a reward if they reached the goal along with a penalty for each action. Because the players have a common utility function that depends on private information, they must communicate the part of their private information that is relevant for maximizing utility. In order to simplify notation, we use j to represent j(t) in the rest of the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "InfoJigsaw. In InfoJigsaw (see Figure 1 for an example), two players try to identify a goal object, but each only has partial information about its identity. Thus, in order to solve the task, they must communicate, piecing their information together like a jigsaw Figure 2 : Chat interface that Amazon Mechanical Turk (AMT) workers use to play InfoJigsaw (for readability, objects with the goal digit/letter are bolded).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 39, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 264, |
| "end": 272, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "puzzle. Figure 2 shows the interface that humans use to play the game.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 8, |
| "end": 16, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "More formally, the shared world state w includes the public properties of a set of objects: position on a m \u00d7 n grid, color (blue, yellow, green), and shape (square, diamond, circle). In addition, w contains the letter and digit of the goal object (e.g., B2). The private state of player P digit is a digit (e.g., 1,2,3) for each object, and the private state of player P letter is a letter (e.g., A,B,C) for each object. These states are s +1 , s \u22121 depending on which player goes first.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "On each turn t, a player j(t)'s action a t can be either (i) a message containing one or two English words 3 (e.g., circle), or (ii) a click on an object, specified by its position (e.g., (1,3)). A click action terminates the game. If the clicked object is the goal, a green square will appear around it which is visible to both players; if the clicked object is not the goal, a red square appears instead. To discourage random guessing, we prevent players from clicking in the first time step. Players do not see an explicit utility (U ); however, they are instructed to think strategically to choose messages that lead to clicking on the correct object while using a minimum number of messages. Players can see the number of correct clicks, wrong clicks, and number of the words they have sent to each other so far at the top right of the screen.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We would like to study how context-dependent meaning arises out of the interplay between a 3 If the words are not inside the English dictionary, the sender receives an error and the message is rejected. This prevents players from circumventing the game rules by connecting multiple words without spaces. Table 1 : Statistics for all 1680 games and the 1259 games in which each message contains at least one of the 12 most frequent words or \"yes\", or \"no\".", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 304, |
| "end": 311, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "context-independent literal semantics with contextsensitive planning, inference, and pragmatics. The simplicity of the InfoJigsaw game ensures that this interplay is not obscured by other challenges.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential Language Games", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We generated 10 InfoJigsaw scenarios as follows: For each one, we randomly choose m, n to be either 2 \u00d7 3 or 3 \u00d7 2 (which results in 64 possible private states). We randomly choose the properties of all objects and randomly designated one as the goal. We randomly choose either P letter or P digit to start the game first. Finally, to make the scenarios interesting, we keep a scenario if it satisfies: (i) Only the goal object (and no other objects) has the goal combination of the letter and digit; (ii) There exist at least two goal-consistent objects for each player and their sum of goal-consistent objects is at least m \u00d7 n; and (iii) all the goal consistent objects for each player do not share the same color, shape, or position (which means all the goal-consistent objects are not in left, right, top, bottom, or middle).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data collection", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We collected a dataset of InfoJigsaw games on Amazon Mechanical Turk using the framework in Hawkins 2015 each played all 10 scenarios in a random order. Out of 200 pairs, 32 pairs left the game prematurely which results in 168 pairs playing the total of 1680 games. Players performed 4967 actions (messages and clicks) total and obtained an average score (correct clicks) of 7.5 per game. The average score per scenario varied from 6.4 to 8.2. Interestingly, there is no significant difference in scores across the 10 scenarios, suggesting that players do not adapt and become more proficient with more game play (Figure 3c) . Figure 3 shows the statistics of the collected corpus. Figure 4 shows one of the games, along with the distribution of messages in the first time step of all games played on this scenario.", |
| "cite_spans": [ |
| { |
| "start": 613, |
| "end": 624, |
| "text": "(Figure 3c)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 627, |
| "end": 635, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 682, |
| "end": 690, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data collection", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To focus on the strategic aspects of InfoJigsaw, we filtered the dataset to reduce the words in the tail. Specifically, we keep a game if all its messages contain at least one of the 12 most frequent words (shown in Figure 3d ) or \"yes\" or \"no\". For example, in Figure 4 , the games containing messages such as what color, mid row, color are filtered because they don't contain any frequent words. Messages such as middle, either middle, middle maybe, middle objects are mapped to middle. 1259 of 1680 games survived. Table 1 compares the statistics between all games and the ones that were kept. Most games that were filtered out contained less frequent synonyms (e.g. round instead of circle). Some questions were filtered out too (e.g., what color). Filtered games are 1.15 times longer on average.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 216, |
| "end": 225, |
| "text": "Figure 3d", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 262, |
| "end": 270, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 518, |
| "end": 525, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data collection", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In order to understand the principles behind how humans perform planning, inference, and pragmatics, we aim to develop a parsimonious, interpretable model with few parameters rather than a highly expressive, data-driven model. Therefore, following the tradition of Rational Speech Acts (RSA) (Frank and Goodman, 2012; Goodman and Frank, 2016), we will define in this section a mapping from each word to its literal semantics, and rely on the PIP re- currences (which we will describe in Section 4) to provide context-dependence. One could also learn the literal semantics by backpropagating through these recurrences, which has been done for simpler RSA models (Monroe and Potts, 2015) ; or learn the literal semantics from data and then put RSA on top ; we leave this to future work. Suppose a player utters a single word circle. There are multiple possible context-dependent interpretations:", |
| "cite_spans": [ |
| { |
| "start": 661, |
| "end": 685, |
| "text": "(Monroe and Potts, 2015)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Are any circles goal-consistent?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 All the circles are goal-consistent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Some circles but no other objects are goal- We will show that most of these interpretations can arise from a simple fixed semantics: roughly \"some circles are goal consistent\". We will now define a simple literal semantics of message actions such as circle, which forms the base case of PIP. Recall that the shared world state w contains the goal (e.g., B2) and, assuming P letter goes first, the private state s \u22121 (s +1 ) of player P letter (P digit ) contains the letter (digit) of each object. For notational simplicity, let us define s \u22121 (s +1 ) to be a matrix corresponding to the spatial locations of the objects, where an entry is 1 if the corresponding object has the goal letter (digit) and 0 otherwise. Thus s j also represents the set of goal-consistent objects given the private knowledge of that player. Figure 5 shows the private states of the players.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 821, |
| "end": 829, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "s \u22121 0 1 1 s +1 1 0 1 Find B2 Find", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "P digit view square = s : s \u2227 0 1 0 = 0 0 0 top bottom = s : s \u2227 1 0 0 \u2228 0 0 1 = 0 0 0 top blue = s : s \u2227 1 0 0 \u2227 1 1 1 = 0 0 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We define two types of message actions: informative (e.g., blue, top) and verifying (e.g., yes, no). Informative messages have immediate meaning, while verifying messages depend on the previous utterance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Informative messages. Informative messages describe constraints on the speaker's private state (which the partner does not know). For a message a, define a to be the set of consistent private states. For example, bottom is all private states where there are goal-consistent objects in the bottom row.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Formally, for each word x that specifies some object property (e.g., blue, top), define v x to be an n\u00d7m matrix where an entry is 1 if the corresponding object has the property x, and 0 otherwise. Then, define the literal semantics of a single-word message x to be x def = {s : s \u2227 v x = 0}, where \u2227 denotes element-wise and and 0 denotes the zero matrix. That is, single-property messages can be glossed as \"some goal-consistent object has property x\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For a two-word message xy, we define the literal semantics depending on the relationship between x and y. If x and y are mutually exclusive, then we interpret xy as x or y (e.g., square circle); otherwise, we interpret xy as x and y (e.g., blue", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "top). Formally, xy def = {s : s \u2227 (v x \u2227 v y ) = 0} if v x \u2227 v y = 0 and {s : s \u2227 (v x \u2228 v y ) = 0} otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "See Figure 5 for some examples.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 4, |
| "end": 12, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Action sequences. We now define the literal semantics of an entire action sequence a 1:t j with respect to player j, which is the set of possible partner private states s \u2212j . Intuitively, we want to simply intersect the set of consistent private states of the informative messages, but we need to also handle verifying messages (yes and no), which are context-dependent. Formally, we say that private state s \u2212j \u2208 a 1:t j if the following holds: for all informative messages a i uttered by \u2212j, s \u2212j \u2208 a i ; and for all verifying messages a i uttered by \u2212j if", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "a i = yes then, s \u2212j \u2208 a i\u22121 ; and if a i = no then, s \u2212j \u2208 a i\u22121 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "4 The Planning-Inference-Pragmatics (PIP) Model", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Why does P digit in Figure 1 choose circle rather than top or click(1,2)? Intuitively, when a player chooses an action, she should take into account her previous actions, her partner's actions, and the effect of her actions on future turns. She should do all these while reasoning pragmatically that her partner is also a strategic player. At a high-level, PIP defines a system of recurrences revolving around three concepts, depicted in Figure 6 : player j's beliefs over the partner's pri- Figure 6 : PIP is defined via a system of recurrences that simultaneously captures planning, inference, and pragmatics. The arrows show the dependencies between beliefs p, expected utilities V , and policy \u03c0.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 20, |
| "end": 28, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 438, |
| "end": 446, |
| "text": "Figure 6", |
| "ref_id": null |
| }, |
| { |
| "start": 492, |
| "end": 500, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "vate state p k j (s \u2212j | s j , a 1:t )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": ", her expected utility of the game V k j (s +1 , s \u22121 , a 1:t ), and her policy \u03c0 k j (a t | s j , a 1:t\u22121 ). Here, t indexes the current time and k indexes the depth of pragmatic recursion, which will be explained later in Section 4.3. To simplify the notation, we have dropped w (shared world state) from the notation, since everything conditions on it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literal Semantics", |
| "sec_num": "3" |
| }, |
| { |
| "text": "From player j's point of view, the purpose of inference is to compute a distribution over the partner's private state s \u2212j given all actions thus far a 1:t . We first consider a \"level 0\" player, which simply assigns a uniform distribution over all states consistent with the literal semantics of a 1:t , which we defined in Section 3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p 0 j (s \u2212j | s j , a 1:t ) \u221d 1 s \u2212j \u2208 a 1:t j , 0 otherwise.", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, Figure 7 , shows the P letter 's belief about P digit 's private state after observing circle. Remember we show the private state of the players as a matrix where an entry is 1 if the corresponding object has the goal letter (digit) and 0 otherwise. A player's own private state s j can also constrain her beliefs about her partner's private state s \u2212j . For example, in InfoJigsaw, the active player knows there is a goal, and so we set p k j (s \u2212j | s j , a 1:t ) = 0 if s \u2212j \u2227 s j = 0. Figure 7 : P letter 's probability distribution over P digit 's private state after P digit says circle in the game shown in Figure 5 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 13, |
| "end": 21, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 502, |
| "end": 510, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 627, |
| "end": 635, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The purpose of planning is to compute a policy \u03c0 k j , which specifies a distribution over player j's actions a t given all past actions a 1:t\u22121 . To construct the policy, we first define an expected utility V k j via the following forward-looking recurrence: When the game is over (e.g., in InfoJigsaw, one player clicks on an object), the expected utility of the dialogue is simply its utility as defined by the game:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "V k j (s +1 , s \u22121 , a 1:t ) = U (s +1 , s \u22121 , a 1:t ). (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Otherwise, we compute the expected utility assuming that in the next turn, player j chooses action a t+1 with probability governed by her policy \u03c0 k j (a t+1 | s j , a 1:t ):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "V k j (s +1 , s \u22121 , a 1:t ) = a t+1 \u03c0 k j (a t+1 | s j , a 1:t ) V k \u2212j (s \u22121 , s +1 , a 1:t+1 ).", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Having defined the expected utility, we now define the policy. First, let D k j be the gain in expected utility V k \u2212j (s +1 , s \u22121 , a 1:t ) over a simple baseline policy that ends the game immediately, yielding utility U (s +1 , s \u22121 , a 1:t\u22121 ) (which is simply a penalty for not finding the correct goal and a penalty for each action). Of course, the partner's private state s \u2212j is unknown and must be marginalized out based on player j's beliefs; let E k j be the expected gain. Let the probability of an action a t be proportional to max(0, E k j ) \u03b1 , where \u03b1 \u2208 [0, \u221e) is a hyperparameter that controls the rationality of the agent (a larger \u03b1 means that the player chooses utility-maximizing actions more aggressively). Formally:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "D k j = V k \u2212j (s +1 , s \u22121 , a 1:t ) \u2212 U (s +1 , s \u22121 , a 1:t\u22121 ), E k j = s \u2212j p k j (s \u2212j | s j , a 1:t\u22121 )D k j , \u03c0 k j (a t | s j , a 1:t\u22121 ) \u221d max 0, E k j \u03b1 .", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In practice, we use a depth-limited recurrence, where the expected utility is computed assuming that the game will end in f turns and the last action is a click action (meaning that we only consider the action sequences with size \u2264 f and a clicking action as the last action). Figure 8 shows how P digit computes the expected gain (Eqn. 4) of saying circle. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 277, |
| "end": 285, |
| "text": "Figure 8", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Planning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The purpose of pragmatics is to take into account the partner's strategizing. We do this by constructing a level-k player that infers the partner's private state, following the tradition of Rational Speech Acts (RSA) (Frank and Goodman, 2012; Goodman and Frank, 2016). Recall that a level-0 player p^0_j (Section 4.1) puts a uniform distribution over all the semantically valid private states of the partner.",
| "cite_spans": [
| {
| "start": 217,
| "end": 242,
| "text": "(Frank and Goodman, 2012;",
| "ref_id": "BIBREF8"
| },
| {
| "start": 243,
| "end": 267,
| "text": "Goodman and Frank, 2016)",
| "ref_id": "BIBREF11"
| }
| ],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Pragmatics",
| "sec_num": "4.3"
| },
| {
| "text": "A level-k player assigns probability to the partner's private state proportional to the probability that a level-(k \u2212 1) player would have performed the last action a_t (Eqn. 5). Figure 9 shows an example of the pragmatic reasoning.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 223, |
| "end": 231, |
| "text": "Figure 9", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pragmatics", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "p^k_j(s_{-j} | s_j, a_{1:t}) \u221d \u03c0^{k-1}_{-j}(a_t | s_{-j}, a_{1:t-1}) p^k_j(s_{-j} | s_j, a_{1:t-2}). (5)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pragmatics", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In Section 4.2, we modeled the players as rational agents that choose actions leading to a higher gain in utility. In Section 4.3, we described how a player infers the partner's private state, taking into account that her partner is acting cooperatively. The phenomena that emerge from the combination of the two are the topic of this section.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A closer look at the meaning of actions", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We first define the belief marginals B_j of player j to be the marginal probabilities that each object is goal-consistent under the hypothesized partner's private state s_{-j} \u2208 R^{m\u00d7n}, conditioned on the actions a_{1:t}:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A closer look at the meaning of actions", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "B_j(s_j, a_{1:t}) = \u2211_{s_{-j}} p^k_j(s_{-j} | s_j, a_{1:t}) s_{-j}.",
| "eq_num": "(6)" |
| } |
| ], |
| "section": "A closer look at the meaning of actions", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "At time t = 0 (before any actions), the belief marginals of both players are m \u00d7 n matrices with 0.5 in all entries. The change in a belief marginal after observing an action a_t gives a sense of the effective (context-dependent) meaning of that action. We first explain how pragmatics (k > 0 in Eqn. 5) leads to rich action meanings. Figure 10 : Belief marginals of P_digit (Eqn. 6) after observing sequences of actions for different pragmatic depths k. (b) Without pragmatics (k = 0), P_digit thinks both objects on the right have the same probability of being goal-consistent. With pragmatics (k = 1), P_digit thinks that the object in the bottom right is more likely to be goal-consistent.",
| "cite_spans": [],
| "ref_spans": [
| {
| "start": 408,
| "end": 417,
| "text": "Figure 10",
| "ref_id": null
| }
| ],
| "eq_spans": [],
| "section": "A closer look at the meaning of actions",
| "sec_num": "4.4"
| },
| {
| "text": "When a player observes her partner's action a_t, she assumes this action was chosen because it results in a higher utility than the alternatives. In other words, she infers that her partner's private state cannot be one in which a_t does not lead to high utility. For example, saying circle instead of top circle or bottom circle implies that there is more than one goal-consistent circle. The pragmatic depth k governs the extent to which this type of reasoning is applied. Recall from Section 4.2 that a player chooses an action conditioned on all previous actions, and the other player accounts for this context-dependence. As an example, Figure 10(d) shows how right changes its meaning when it follows bottom.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 564, |
| "end": 576, |
| "text": "Figure 10(d)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A closer look at the meaning of actions", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We a priori set the reward of clicking on the goal to be +100 and of clicking on the wrong object to be \u2212100. We set the rationality parameter \u03b1 = 10 and the action cost to be \u221250 based on the data. The larger the action cost, the fewer messages will be used before selecting an object. Formally, after k actions: Utility = \u221250k + 100 if the goal object is clicked, and \u221250k \u2212 100 otherwise.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We smoothed all policies by adding 0.01 to the probability of each action and re-normalizing. By default, we set the pragmatic depth (Eqn. 4) to k = 1. When computing the expected utility (Eqn. 3) of the game, we use a lookahead of f = 2. Inference looks back b time steps (i.e., (Eqn. 1) and (Eqn. 5) are based on a_{t-b+1:t} rather than a_{1:t}); we set b = \u221e by default.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We implemented two baseline policies. Random policy: for player j, the random policy randomly chooses one of the semantically valid (Section 3) actions with respect to s_j or clicks on a goal-consistent object. Formally, the random policy places a uniform distribution over:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "{a : s_j \u2208 \u27e6a\u27e7} \u222a {click(u, v) : (s_j)_{u,v} = 1}.",
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Greedy Policy: assigns higher probability to the actions that convey more information about the player's private state. We heuristically set the probability of generating an action proportional to how much it shrinks the set of semantically valid states. Formally, for the message actions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03c0^{msg}_j(a_t | a_{1:t-1}, s_j) \u221d |\u27e6a_{1:t-1}\u27e7_{-j}| \u2212 |\u27e6a_{1:t}\u27e7_{-j}|",
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For the clicking actions, we compute the belief state as explained in Section 4.4. Recall that B_{u,v} is the marginal probability that the object in row u and column v is goal-consistent in the partner's private state. Formally, for clicking actions:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03c0^{click}_j(click(u, v) | a_{1:t}, s_j) \u221d min((s_j)_{u,v}, B_j(s_j, a_{1:t})_{u,v}).",
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Finally, the greedy policy chooses a click action with probability \u03b3 and a message action with probability 1 \u2212 \u03b3. So that \u03b3 increases as the player gets more confident about the position of the goal, we set \u03b3 to be the probability of the most probable position of the goal. Figure 11 compares the two baselines with PIP on the task of predicting human behavior as measured by log-likelihood. 4 To estimate the best possible (i.e., ceiling) performance, we compute the entropy of the actions on the first time step based on approximately 100 data points per scenario. For each policy, we rank the actions by their probability in decreasing order (actions with the same probability are randomly ordered), and then compute the average ranking across actions according to the different policies; see Figure 13 for the results.",
| "cite_spans": [ |
| { |
| "start": 392, |
| "end": 393, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 274, |
| "end": 283, |
| "text": "Figure 11", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 796, |
| "end": 805, |
| "text": "Figure 13", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u03b3 = max_{u,v} \u03c0^{click}_j(click(u, v) | a_{1:t}, s_j). (7)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To assess the different components (planning, inference, pragmatics) of PIP, we run PIP, ablating one component at a time from the default setting of k = 1, f = 2, and b = \u221e (see Figure 12 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 179, |
| "end": 188, |
| "text": "Figure 12", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Pragmatics. Let PIP -prag be PIP but with a pragmatic depth (Eqn. 4) of k = 0 rather than k = 1, which means that PIP -prag only draws inferences based on the literal semantics of messages. PIP -prag loses 0.21 in average log-likelihood per action, highlighting the importance of pragmatics in modeling human behavior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Planning. Let PIP -plan be PIP, but looking ahead only f = 1 step when computing the expected utility (Eqn. 3) rather than f = 2. With a shorter future horizon, PIP -plan tries to give as much information as possible at each turn, whereas human players tend to give information about their state incremen-tally. PIP -plan cannot capture this behavior and allocates low probability to these kinds of dialogue. PIP -plan has an average log-likelihood which is 0.05 lower than that of PIP, highlighting the importance of planning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Inference. Let PIP -infer be PIP, but looking only at the last utterance (b = 1) rather than the full history (b = \u221e). The results here are more nuanced. Although PIP -infer actually performs better than PIP averaged over all games, we find that PIP -infer is worse than PIP by an average log-likelihood of 0.15 when predicting messages after time step 3, highlighting the importance of inference, but only in long games. It is likely that the additional noise involved in the inference process decreases performance when backward-looking inference is not actually needed.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Our work touches on ideas in game theory, pragmatic modeling, dialogue modeling, and learning communicative agents, which we highlight below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Game theory. According to game theory terminology (Shoham and Leyton-Brown, 2008), Info-Jigsaw is a non-cooperative (there is no offline optimization of the players' policies before the game starts), common-payoff (the players have the same utility), incomplete-information (the players have private states) game with sequential actions. One game-theoretic concept related to our model is rationalizability (Bernheim, 1984; Pearce, 1984). A strategy is rationalizable if it is justifiable to play against a completely rational player. Another related concept is epistemic game theory (Dekel and Siniscalchi, 2015; Perea, 2012), which studies the behavioral implications of rationality and mutual beliefs in games.",
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 81, |
| "text": "(Shoham and Leyton-Brown, 2008)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 417, |
| "end": 433, |
| "text": "(Bernheim, 1984;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 434, |
| "end": 447, |
| "text": "Pearce, 1984)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 589, |
| "end": 618, |
| "text": "(Dekel and Siniscalchi, 2015;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 619, |
| "end": 631, |
| "text": "Perea, 2012)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "It is important to note that we are not interested in notions of global optima or equilibria; rather, we are interested in modeling human behavior. Restricting communication to a very restricted natural language has been studied in the context of language games (Wittgenstein, 1953; Lewis, 2008; Nowak et al., 1999; Franke, 2009; Huttegger et al., 2010).",
| "cite_spans": [ |
| { |
| "start": 254, |
| "end": 274, |
| "text": "(Wittgenstein, 1953;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 275, |
| "end": 287, |
| "text": "Lewis, 2008;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 288, |
| "end": 307, |
| "text": "Nowak et al., 1999;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 308, |
| "end": 321, |
| "text": "Franke, 2009;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 322, |
| "end": 345, |
| "text": "Huttegger et al., 2010)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Figure 12(c), top: parameter setup (columns: PIP, PIP -prag, PIP -plan, PIP -infer). k (pragmatics): 1, 0, 1, 1. f (planning): 2, 2, 1, 2. b (inference): \u221e, \u221e, \u221e, 1. Bottom: expected ranking of human messages according to the different ablations. rank (all): 17.1, 19.3, 17.2, 16.9. rank \u2265 3: 10.4, 10.8, 11.6, 13.1.",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Related Work and Discussion",
| "sec_num": "6"
| },
| {
| "text": "Figure 12 : Performance on ablations of PIP. Average log-likelihood per message; the whiskers show 90% confidence intervals. PIP outperforms the ablations of planning and pragmatics over all rounds. Looking only one step backward performs better in the first few rounds but worse after round 3. Pragmatics. Our pragmatic model follows the tradition of Rational Speech Acts (Section 4.3), which defines recurrences capturing how one agent reasons about another. Similar ideas were explored in the precursor work of Golland et al. (2010), and much work has ensued (Smith et al., 2013; Qing and Franke, 2014; Monroe and Potts, 2015; Ullman et al., 2016). Most of this work is restricted to production and comprehension of a single utterance. Hawkins et al. (2015) extend these ideas to two utterances (a question and an answer). Vogel et al. (2013b) integrate planning with pragmatics using decentralized partially observable Markov decision processes (DEC-POMDPs). In their task, two bots must find and co-locate with a specific card. In contrast to Info-Jigsaw, their task can be completed without communication; their agents communicate only once, to share the card location. They also only study artificial agents playing together and were not concerned with modeling human behavior.",
| "cite_spans": [ |
| { |
| "start": 609, |
| "end": 630, |
| "text": "Golland et al. (2010)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 658, |
| "end": 678, |
| "text": "(Smith et al., 2013;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 679, |
| "end": 701, |
| "text": "Qing and Franke, 2014;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 702, |
| "end": 725, |
| "text": "Monroe and Potts, 2015;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 726, |
| "end": 746, |
| "text": "Ullman et al., 2016;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 923, |
| "end": 943, |
| "text": "Vogel et al. (2013b)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 173, |
| "text": "Figure 12", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Learning to communicate. There is a rich literature on multi-agent reinforcement learning (Busoniu et al., 2008). Some works assume the world is completely visible to all agents and cooperate without communication (Lauer and Riedmiller, 2000; Littman, 2001); others assume a predefined convention for communication (Zhang and Lesser, 2013; Tan, 1993). There is also work that learns the convention itself (Foerster et al., 2016; Sukhbaatar et al., 2016; Lazaridou et al., 2017; Mordatch and Abbeel, 2018). Lazaridou et al. (2017) put humans in the loop to make the communication more human-interpretable. In comparison to these works, we seek to predict human behavior instead of modeling artificial agents that communicate with each other.",
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 112, |
| "text": "(Busoniu et al., 2008)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 241, |
| "end": 269, |
| "text": "(Lauer and Riedmiller, 2000;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 270, |
| "end": 284, |
| "text": "Littman, 2001)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 343, |
| "end": 367, |
| "text": "(Zhang and Lesser, 2013;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 368, |
| "end": 378, |
| "text": "Tan, 1993)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 439, |
| "end": 462, |
| "text": "(Foerster et al., 2016;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 463, |
| "end": 487, |
| "text": "Sukhbaatar et al., 2016;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 488, |
| "end": 511, |
| "text": "Lazaridou et al., 2017;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 512, |
| "end": 538, |
| "text": "Mordatch and Abbeel, 2018)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 541, |
| "end": 564, |
| "text": "Lazaridou et al. (2017)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Dialogue. There is also a lot of work in computational linguistics and NLP on modeling dialogue. Allen and Perrault (1980) provides a model that infers the intention/plan of the other agent and uses this plan to generate a response. Clark and Brennan (1991) explains how two players update their common ground (mutual knowledge, mutual beliefs, and mutual assumptions) in order to coordinate. Recent work in task-oriented dialogue uses POMDPs and end-to-end neural networks (Young, 2000; Young et al., 2013; Wen et al., 2017; He et al., 2017). In this work, instead of learning from a large corpus, we predict human behavior without learning, albeit in a much more strategic, stylized setting (two words per utterance).",
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 122, |
| "text": "Allen and Perrault (1980)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 475, |
| "end": 488, |
| "text": "(Young, 2000;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 489, |
| "end": 508, |
| "text": "Young et al., 2013;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 509, |
| "end": 526, |
| "text": "Wen et al., 2017;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 527, |
| "end": 543, |
| "text": "He et al., 2017)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this paper, we started with the observation that humans use language in a very contextual way, driven by their goals. We identified three salient aspects (planning, inference, pragmatics) and proposed a unified model, PIP, that captures all three aspects simultaneously. Our main result is that a very simple, context-independent literal semantics can give rise, via the recurrences, to rich phenomena. We study these phenomena in a new game, Info-Jigsaw, and show that PIP is able to capture human behavior.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "All code, data, and experiments for this paper are available on the CodaLab platform at https:// worksheets.codalab.org/worksheets/ 0x052129c7afa9498481185b553d23f0f9/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reproducibility", |
| "sec_num": null |
| }, |
| { |
| "text": "Specifically, two agents must both co-locate with a specific card. The agent which finds the card sooner shares the card location information with the other agent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "One could in principle solve for an optimal communication strategy for InfoJigsaw, but this would likely result in a solution far from human communication.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We bootstrap the data 1000 times and show 90% confidence intervals.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers and the action editor for their helpful comments. We also thank Will Monroe for providing valuable feedback on early drafts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Analyzing intention in utterances", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "F" |
| ], |
| "last": "Allen", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Raymond" |
| ], |
| "last": "Perrault", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Artificial Intelligence", |
| "volume": "15", |
| "issue": "3", |
| "pages": "143--178", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James F. Allen and C. Raymond Perrault. 1980. Ana- lyzing intention in utterances. Artificial Intelligence, 15(3):143-178.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Reasoning about pragmatics with neural listeners and speakers", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Andreas", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1173--1182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Empirical Methods in Natural Language Processing (EMNLP), pages 1173-1182.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Learning to compose neural networks for question answering", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Andreas", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Darrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "1545--1554", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural net- works for question answering. In Association for Computational Linguistics (ACL), pages 1545-1554.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Rationalizable strategic behavior", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Bernheim", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Econometrica: Journal of the Econometric Society", |
| "volume": "", |
| "issue": "", |
| "pages": "1007--1028", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Douglas Bernheim. 1984. Rationalizable strategic be- havior. Econometrica: Journal of the Econometric So- ciety, pages 1007-1028.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A comprehensive survey of multiagent reinforcement learning", |
| "authors": [ |
| { |
| "first": "Lucian", |
| "middle": [], |
| "last": "Busoniu", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Babuska", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [ |
| "De" |
| ], |
| "last": "Schutter", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "IEEE Trans. Systems, Man, and Cybernetics, Part C", |
| "volume": "38", |
| "issue": "2", |
| "pages": "156--172", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucian Busoniu, Robert Babuska, and Bart De Schutter. 2008. A comprehensive survey of multiagent rein- forcement learning. IEEE Trans. Systems, Man, and Cybernetics, Part C, 38(2):156-172.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Grounding in Communication", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Herbert", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "E" |
| ], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brennan", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert H. Clark and Susan E. Brennan. 1991. Ground- ing in Communication. Perspectives on Socially Shared Cognition.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Handbook of Game Theory with Economic Applications", |
| "authors": [ |
| { |
| "first": "Eddie", |
| "middle": [], |
| "last": "Dekel", |
| "suffix": "" |
| }, |
| { |
| "first": "Marciano", |
| "middle": [], |
| "last": "Siniscalchi", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eddie Dekel and Marciano Siniscalchi. 2015. Epistemic game theory, volume 4. Handbook of Game Theory with Economic Applications.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Learning to communicate with deep multi-agent reinforcement learning", |
| "authors": [ |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Foerster", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannis", |
| "middle": [ |
| "M" |
| ], |
| "last": "Assael", |
| "suffix": "" |
| }, |
| { |
| "first": "Shimon", |
| "middle": [], |
| "last": "Nando De Freitas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Whiteson", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "2137--2145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jakob Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communi- cate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), pages 2137-2145.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Predicting pragmatic reasoning in language games", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Michael", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "D" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Science", |
| "volume": "336", |
| "issue": "", |
| "pages": "998--998", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael C. Frank and Noah D. Goodman. 2012. Predict- ing pragmatic reasoning in language games. Science, 336:998-998.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Signal to act: Game theory in pragmatics. Institute for Logic, Language and Computation", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Franke", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Franke. 2009. Signal to act: Game theory in pragmatics. Institute for Logic, Language and Com- putation.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A game-theoretic approach to generating spatial descriptions", |
| "authors": [ |
| { |
| "first": "Dave", |
| "middle": [], |
| "last": "Golland", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "410--419", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dave Golland, Percy Liang, and Dan Klein. 2010. A game-theoretic approach to generating spatial descrip- tions. In Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 410-419.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Pragmatic language interpretation as probabilistic inference", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "C" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Trends in Cognitive Sciences", |
| "volume": "20", |
| "issue": "11", |
| "pages": "818--829", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah D. Goodman and Michael C. Frank. 2016. Prag- matic language interpretation as probabilistic infer- ence. Trends in Cognitive Sciences, 20(11):818-829.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Logic and conversation", |
| "authors": [ |
| { |
| "first": "Herbert", |
| "middle": [ |
| "P" |
| ], |
| "last": "Grice", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Syntax and Semantics", |
| "volume": "3", |
| "issue": "", |
| "pages": "41--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert P. Grice. 1975. Logic and conversation. Syntax and Semantics, 3:41-58.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Why do you ask? Good questions provoke informative answers", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "X", |
| "D" |
| ], |
| "last": "Hawkins", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stuhlm\u00fcller", |
| "suffix": "" |
| }, |
| { |
| "first": "Judith", |
| "middle": [], |
| "last": "Degen", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Thirty-Seventh Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert X. D. Hawkins, Andreas Stuhlm\u00fcller, Judith De- gen, and Noah D. Goodman. 2015. Why do you ask? Good questions provoke informative answers. In Pro- ceedings of the Thirty-Seventh Annual Conference of the Cognitive Science Society.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Conducting real-time multiplayer experiments on the web", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "X", |
| "D" |
| ], |
| "last": "Hawkins", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Behavior Research Methods", |
| "volume": "47", |
| "issue": "4", |
| "pages": "966--976", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert X. D. Hawkins. 2015. Conducting real-time mul- tiplayer experiments on the web. Behavior Research Methods, 47(4):966-976.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings", |
| "authors": [ |
| { |
| "first": "He", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Anusha", |
| "middle": [], |
| "last": "Balakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihail", |
| "middle": [], |
| "last": "Eric", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "1766--1776", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dia- logue agents with dynamic knowledge graph embed- dings. In Association for Computational Linguistics (ACL), pages 1766-1776.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Evolutionary dynamics of Lewis signaling games: Signaling systems vs. partial pooling", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [ |
| "M" |
| ], |
| "last": "Huttegger", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Skyrms", |
| "suffix": "" |
| }, |
| { |
| "first": "Rory", |
| "middle": [], |
| "last": "Smead", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [ |
| "J S" |
| ], |
| "last": "Zollman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Synthese", |
| "volume": "172", |
| "issue": "1", |
| "pages": "177--191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon M. Huttegger, Brian Skyrms, Rory Smead, and Kevin J.S. Zollman. 2010. Evolutionary dynamics of Lewis signaling games: Signaling systems vs. partial pooling. Synthese, 172(1):177-191.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Jointly learning to parse and perceive: Connecting natural language to the physical world", |
| "authors": [ |
| { |
| "first": "Jayant", |
| "middle": [], |
| "last": "Krishnamurthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Kollar", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics (TACL)", |
| "volume": "1", |
| "issue": "", |
| "pages": "193--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural lan- guage to the physical world. Transactions of the Asso- ciation for Computational Linguistics (TACL), 1:193- 206.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "An algorithm for distributed reinforcement learning in cooperative multi-agent systems", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Lauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedmiller", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "535--542", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Lauer and Martin Riedmiller. 2000. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In International Conference on Machine Learning (ICML), pages 535-542.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Multi-agent cooperation and the emergence of (natural) language", |
| "authors": [ |
| { |
| "first": "Angeliki", |
| "middle": [], |
| "last": "Lazaridou", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Peysakhovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-agent cooperation and the emer- gence of (natural) language. In International Confer- ence on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Convention: A philosophical study", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Lewis. 2008. Convention: A philosophical study. John Wiley & Sons.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Value-function reinforcement learning in Markov games", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [ |
| "L" |
| ], |
| "last": "Littman", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Cognitive Systems Research", |
| "volume": "2", |
| "issue": "1", |
| "pages": "55--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael L. Littman. 2001. Value-function reinforce- ment learning in Markov games. Cognitive Systems Research, 2(1):55-66.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A joint model of language and perception for grounded attribute learning", |
| "authors": [ |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Matuszek", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Fitzgerald", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Liefeng", |
| "middle": [], |
| "last": "Bo", |
| "suffix": "" |
| }, |
| { |
| "first": "Dieter", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "1671--1678", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cynthia Matuszek, Nicholas FitzGerald, Luke Zettle- moyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded at- tribute learning. In International Conference on Ma- chine Learning (ICML), pages 1671-1678.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Learning in the Rational Speech Acts model", |
| "authors": [ |
| { |
| "first": "Will", |
| "middle": [], |
| "last": "Monroe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 20th Amsterdam Colloquium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Will Monroe and Christopher Potts. 2015. Learning in the Rational Speech Acts model. In Proceedings of 20th Amsterdam Colloquium.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The proper treatment of quantification in ordinary English", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Montague", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Approaches to Natural Language", |
| "volume": "", |
| "issue": "", |
| "pages": "221--242", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Montague. 1973. The proper treatment of quan- tification in ordinary English. In Approaches to Natu- ral Language, pages 221-242.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Emergence of grounded compositional language in multi-agent populations", |
| "authors": [ |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Mordatch", |
| "suffix": "" |
| }, |
| { |
| "first": "Pieter", |
| "middle": [], |
| "last": "Abbeel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent pop- ulations. In Association for the Advancement of Artifi- cial Intelligence (AAAI).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "The evolutionary language game", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [ |
| "A" |
| ], |
| "last": "Nowak", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [ |
| "B" |
| ], |
| "last": "Plotkin", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "C" |
| ], |
| "last": "Krakauer", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of Theoretical Biology", |
| "volume": "200", |
| "issue": "2", |
| "pages": "147--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin A. Nowak, Joshua B. Plotkin, and David C. Krakauer. 1999. The evolutionary language game. Journal of Theoretical Biology, 200(2):147-162.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Rationalizable strategic behavior and the problem of perfection", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "G" |
| ], |
| "last": "Pearce", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Econometrica: Journal of the Econometric Society", |
| "volume": "", |
| "issue": "", |
| "pages": "1029--1050", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David G. Pearce. 1984. Rationalizable strategic behavior and the problem of perfection. Econometrica: Journal of the Econometric Society, pages 1029-1050.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Epistemic game theory: reasoning and choice", |
| "authors": [ |
| { |
| "first": "Andr\u00e9s", |
| "middle": [], |
| "last": "Perea", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andr\u00e9s Perea. 2012. Epistemic game theory: reasoning and choice. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Goal-driven answers in the Cards dialogue corpus", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 30th West Coast Conference on Formal Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher Potts. 2012. Goal-driven answers in the Cards dialogue corpus. In Proceedings of the 30th West Coast Conference on Formal Linguistics, pages 1-20.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Gradable adjectives, vagueness, and optimal language use: A speaker-oriented model", |
| "authors": [ |
| { |
| "first": "Ciyang", |
| "middle": [], |
| "last": "Qing", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Franke", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Semantics and Linguistic Theory", |
| "volume": "24", |
| "issue": "", |
| "pages": "23--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ciyang Qing and Michael Franke. 2014. Gradable adjectives, vagueness, and optimal language use: A speaker-oriented model. In Semantics and Linguistic Theory, volume 24, pages 23-41.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Shoham", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Leyton-Brown", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Shoham and Kevin Leyton-Brown. 2008. Multi- agent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Learning and using language via recursive pragmatic reasoning about other agents", |
| "authors": [ |
| { |
| "first": "Nathaniel", |
| "middle": [ |
| "J" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "C" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "3039--3047", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nathaniel J. Smith, Noah D. Goodman, and Michael C. Frank. 2013. Learning and using language via re- cursive pragmatic reasoning about other agents. In Advances in Neural Information Processing Systems (NIPS), pages 3039-3047.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Learning multiagent communication with backpropagation", |
| "authors": [ |
| { |
| "first": "Sainbayar", |
| "middle": [], |
| "last": "Sukhbaatar", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "2244--2252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sainbayar Sukhbaatar, Rob Fergus, et al. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems (NIPS), pages 2244-2252.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Multi-agent reinforcement learning: Independent vs. cooperative agents", |
| "authors": [ |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "330--337", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ming Tan. 1993. Multi-agent reinforcement learning: Independent vs. cooperative agents. In International Conference on Machine Learning (ICML), pages 330- 337.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "The pragmatics of spatial language", |
| "authors": [ |
| { |
| "first": "Tomer", |
| "middle": [ |
| "D" |
| ], |
| "last": "Ullman", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 38th Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomer D. Ullman, Yang Xu, and Noah D. Goodman. 2016. The pragmatics of spatial language. In Pro- ceedings of the 38th Annual Conference of the Cogni- tive Science Society.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Emergence of Gricean maxims from multi-agent decision theory", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Bodoia", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "North American Association for Computational Linguistics (NAACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "1072--1081", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Vogel, Max Bodoia, Christopher Potts, and Daniel Jurafsky. 2013a. Emergence of Gricean maxims from multi-agent decision theory. In North American Asso- ciation for Computational Linguistics (NAACL), pages 1072-1081.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Implicatures and nested beliefs in approximate decentralized-POMDPs", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "74--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Vogel, Christopher Potts, and Dan Jurafsky. 2013b. Implicatures and nested beliefs in approximate decentralized-POMDPs. In Association for Computa- tional Linguistics (ACL), pages 74-80.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "A network-based end-to-end trainable task-oriented dialogue system", |
| "authors": [ |
| { |
| "first": "Tsung-Hsien", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| }, |
| { |
| "first": "Milica", |
| "middle": [], |
| "last": "Gasic", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Mrksic", |
| "suffix": "" |
| }, |
| { |
| "first": "Lina", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rojas-Barahona", |
| "suffix": "" |
| }, |
| { |
| "first": "Pei-Hao", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Ultes", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vandyke", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "European Association for Computational Linguistics (EACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "438--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In European Association for Computational Linguistics (EACL), pages 438-449.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Philosophical Investigations", |
| "authors": [ |
| { |
| "first": "Ludwig", |
| "middle": [], |
| "last": "Wittgenstein", |
| "suffix": "" |
| } |
| ], |
| "year": 1953, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ludwig Wittgenstein. 1953. Philosophical Investiga- tions. Blackwell, Oxford.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "POMDP-based statistical spoken dialog systems: A review", |
| "authors": [ |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "Milica", |
| "middle": [], |
| "last": "Ga\u0161i\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Blaise", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "D" |
| ], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the IEEE", |
| "volume": "", |
| "issue": "5", |
| "pages": "1160--1179", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steve Young, Milica Ga\u0161i\u0107, Blaise Thomson, and Ja- son D. Williams. 2013. POMDP-based statistical spo- ken dialog systems: A review. In Proceedings of the IEEE, number 5, pages 1160-1179.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Probabilistic methods in spokendialogue systems", |
| "authors": [ |
| { |
| "first": "Steve", |
| "middle": [ |
| "J" |
| ], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences", |
| "volume": "358", |
| "issue": "1769", |
| "pages": "1389--1402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steve J. Young. 2000. Probabilistic methods in spoken- dialogue systems. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 358(1769):1389-1402.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Coordinating multi-agent reinforcement learning with limited communication", |
| "authors": [ |
| { |
| "first": "Chongjie", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Lesser", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1101--1108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chongjie Zhang and Victor Lesser. 2013. Coordinating multi-agent reinforcement learning with limited com- munication. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems, pages 1101-1108.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Average score over multiple rounds of game play, which interestingly remains constant." |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Statistics of the collected corpus." |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Private state of the players and meaning of two action sequences. ...consistent. \u2022 Most of the circles are goal-consistent. \u2022 At least one circle is goal-consistent." |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Planning reasoning for the game in Figure 1 (reproduced here in the bottom right). (a) In order to calculate the expected gain (E) of generating circle, for every state s, P digit computes the probability of s being P letter's private state. (b) She then computes the expected utility (V) if she generates circle assuming P letter's private state is s." |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Pragmatic reasoning for the game in Figure 1 (reproduced here in the upper right) at time step 3. Players reason recursively about each other's beliefs: the level-0 player puts a uniform distribution p 0 j over all the states in which at least one circle is goal-consistent, independent of the shared world state and previous actions. The level-1 player assigns probability to the partner's private states s \u2212j proportional to the probability that she would have performed the last action given that state s \u2212j. For example, if a given grid were P digit's private state, then saying bottom would be more probable (given the shared world state)." |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Average log-likelihood across messages. (a) Performance of PIP and baselines on all time steps. (b) Performance of PIP and baselines on only the first time step along with the ceiling given by the entropy of the human data. The error bars show 90% confidence intervals." |
| }, |
| "FIGREF8": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Rational speech acts. The pragmatic component of PIP is based on the Rational Speech Acts framework (Frank and Goodman, 2012; Golland et al., 2010)," |
| }, |
| "FIGREF9": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Expected ranking of the human messages according to different policies. Error bars show 90% confidence intervals." |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Bottom: one of the games played by Turkers. Top: the distribution of utterances in the first message. Players choose to explain their private state in different ways. Some use more general messages (e.g., square diamond), while others use more specific ones (e.g., blue square). The top diagram shows the 20 most frequent messages in the first round (72% of all messages).", |
| "content": "<table><tr><td/><td>20</td><td/><td/><td/><td/><td/><td/></tr><tr><td>frequency</td><td>10 15</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>5</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>blue square</td><td>middle row square diamond bottom right not top</td><td>color middle yellow square</td><td>middle left square</td><td>not circle squares</td><td>middle two what color yellow circle diamond square</td><td>either middle maybe middle</td><td>mid row middle objects</td></tr><tr><td/><td/><td colspan=\"2\">Find A1</td><td/><td colspan=\"2\">Find A1</td><td/></tr><tr><td/><td/><td>A ?</td><td>B ?</td><td/><td>? 2</td><td>? 2</td><td/></tr><tr><td/><td/><td>B ?</td><td>B ?</td><td/><td>? 1</td><td>? 1</td><td/></tr><tr><td/><td/><td>A ?</td><td>A ?</td><td/><td>? 3</td><td>? 1</td><td/></tr><tr><td/><td/><td colspan=\"2\">P letter view</td><td/><td colspan=\"2\">P digit view</td><td/></tr><tr><td/><td/><td colspan=\"3\">P digit : middle</td><td/><td/><td/></tr><tr><td/><td/><td colspan=\"5\">P letter : yellow circle</td><td/></tr><tr><td/><td/><td colspan=\"5\">P digit : bottom right</td><td/></tr><tr><td/><td/><td colspan=\"5\">P letter : click (1,2)</td><td/></tr><tr><td colspan=\"2\">Figure 4:</td><td/><td/><td/><td/><td/><td/></tr></table>" |
| } |
| } |
| } |
| } |