| { |
| "paper_id": "D10-1040", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:52:37.419466Z" |
| }, |
| "title": "A Game-Theoretic Approach to Generating Spatial Descriptions", |
| "authors": [ |
| { |
| "first": "Dave", |
| "middle": [], |
| "last": "Golland", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UC", |
| "location": { |
| "postCode": "94720", |
| "settlement": "Berkeley Berkeley", |
| "region": "CA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UC", |
| "location": { |
| "postCode": "94720", |
| "settlement": "Berkeley Berkeley", |
| "region": "CA" |
| } |
| }, |
| "email": "pliang@cs.berkeley.edu" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UC", |
| "location": { |
| "postCode": "94720", |
| "settlement": "Berkeley Berkeley", |
| "region": "CA" |
| } |
| }, |
| "email": "klein@cs.berkeley.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Language is sensitive to both semantic and pragmatic effects. To capture both effects, we model language use as a cooperative game between two players: a speaker, who generates an utterance, and a listener, who responds with an action. Specifically, we consider the task of generating spatial references to objects, wherein the listener must accurately identify an object described by the speaker. We show that a speaker model that acts optimally with respect to an explicit, embedded listener model substantially outperforms one that is trained to directly generate spatial descriptions.", |
| "pdf_parse": { |
| "paper_id": "D10-1040", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Language is sensitive to both semantic and pragmatic effects. To capture both effects, we model language use as a cooperative game between two players: a speaker, who generates an utterance, and a listener, who responds with an action. Specifically, we consider the task of generating spatial references to objects, wherein the listener must accurately identify an object described by the speaker. We show that a speaker model that acts optimally with respect to an explicit, embedded listener model substantially outperforms one that is trained to directly generate spatial descriptions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Language is about successful communication between a speaker and a listener. For example, if the goal is to reference the target object O1 in Figure 1 , a speaker might choose one of the following two utterances:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 150, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(a) right of O2 (b) on O3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although both utterances are semantically correct, (a) is ambiguous between O1 and O3, whereas (b) unambiguously identifies O1 as the target object, and should therefore be preferred over (a). In this paper, we present a game-theoretic model that captures this communication-oriented aspect of language interpretation and generation. Successful communication can be broken down into semantics and pragmatics. Most computational Figure 1 : An example of a 3D model of a room. The speaker's goal is to reference the target object O1 by describing its spatial relationship to other object(s). The listener's goal is to guess the object given the speaker's description.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 428, |
| "end": 436, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "work on interpreting language focuses on compositional semantics (Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Piantadosi et al., 2008) , which is concerned with verifying the truth of a sentence. However, what is missing from this truthoriented view is the pragmatic aspect of languagethat language is used to accomplish an end goal, as exemplified by speech acts (Austin, 1962) . Indeed, although both utterances (a) and (b) are semantically valid, only (b) is pragmatically felicitous: (a) is ambiguous and therefore violates the Gricean maxim of manner (Grice, 1975) . To capture this maxim, we develop a model of pragmatics based on game theory, in the spirit of J\u00e4ger (2008) but extended to the stochastic setting. We show that Gricean maxims fall out naturally as consequences of the model. An effective way to empirically explore the pragmatic aspects of language is to work in the grounded setting, where the basic idea is to map language to some representation of the non-linguistic world (Yu and Ballard, 2004; Feldman and Narayanan, 2004; Fleischman and Roy, 2007; Chen and Mooney, 2008; Frank et al., 2009; Liang et al., 2009) . Along similar lines, past work has also focused on interpreting natural language instructions (Branavan et al., 2009; Eisenstein et al., 2009; Kollar et al., 2010) , which takes into account the goal of the communication. This work differs from ours in that it does not clarify the formal relationship between pragmatics and the interpretation task. Pragmatics has also been studied in the context of dialog systems. For instance, DeVault and Stone (2007) present a model of collaborative language between multiple agents that takes into account contextual ambiguities.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 96, |
| "text": "(Zettlemoyer and Collins, 2005;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 97, |
| "end": 119, |
| "text": "Wong and Mooney, 2007;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 120, |
| "end": 144, |
| "text": "Piantadosi et al., 2008)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 374, |
| "end": 388, |
| "text": "(Austin, 1962)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 566, |
| "end": 579, |
| "text": "(Grice, 1975)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 677, |
| "end": 689, |
| "text": "J\u00e4ger (2008)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1008, |
| "end": 1030, |
| "text": "(Yu and Ballard, 2004;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1031, |
| "end": 1059, |
| "text": "Feldman and Narayanan, 2004;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1060, |
| "end": 1085, |
| "text": "Fleischman and Roy, 2007;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1086, |
| "end": 1108, |
| "text": "Chen and Mooney, 2008;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1109, |
| "end": 1128, |
| "text": "Frank et al., 2009;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1129, |
| "end": 1148, |
| "text": "Liang et al., 2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1245, |
| "end": 1268, |
| "text": "(Branavan et al., 2009;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1269, |
| "end": 1293, |
| "text": "Eisenstein et al., 2009;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1294, |
| "end": 1314, |
| "text": "Kollar et al., 2010)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1582, |
| "end": 1606, |
| "text": "DeVault and Stone (2007)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We present our pragmatic model in a grounded setting where a speaker must describe a target object to a listener via spatial description (such as in the example given above). Though we use some of the techniques from work on the semantics of spatial descriptions (Regier and Carlson, 2001; Gorniak and Roy, 2004; Tellex and Roy, 2009) , we empirically demonstrate that having a model of pragmatics enables more successful communication.", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 289, |
| "text": "(Regier and Carlson, 2001;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 290, |
| "end": 312, |
| "text": "Gorniak and Roy, 2004;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 313, |
| "end": 334, |
| "text": "Tellex and Roy, 2009)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To model Grice's cooperative principle (Grice, 1975) , we formulate the interaction between a speaker S and a listener L as a cooperative game, that is, one in which S and L share the same utility function. For simplicity, we focus on the production and interpretation of single utterances, where the speaker and listener have access to a shared context. To simplify notation, we suppress writing the dependence on the context. The Communication Game 1. In order to communicate a target o to L, S produces an utterance w chosen according to a strategy p S (w | o).", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 52, |
| "text": "(Grice, 1975)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language as a Game", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2. L interprets w and responds with a guess g according to a strategy p L (g | w).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language as a Game", |
| "sec_num": "2" |
| }, |
| { |
| "text": "U (o, g). o w g U speaker listener p s (w | o) p l (g | w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "target utterance guess utility Figure 1 . For each instance, the target o, utterance w, guess g, and the resulting utility U are shown in their respective positions. A utility of 1 is awarded only when the guess matches the target.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 39, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "cally in Figure 2 . Figure 3 shows several instances of the communication game being played for the scenario in Figure 1 . Grice's maxim of manner encourages utterances to be unambiguous, which motivates the following utility, which we call (communicative) success:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 17, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 20, |
| "end": 28, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 112, |
| "end": 120, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "U (o, g) def = I[o = g],", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "where the indicator function I[o = g] is 1 if o = g and 0 otherwise. Hence, a utility-maximizing speaker will attempt to produce unambiguous utterances because they increase the probability that the listener will correctly guess the target.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Given a speaker strategy p S (w | o), a listener strategy p L (g | w), and a prior distribution over targets p(o), the expected utility obtained by S and L is as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "EU(S, L) = o,w,g p(o)p S (w|o)p L (g|w)U (o, g) = o,w p(o)p S (w|o)p L (o|w).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "( 2)3 From Reflex Speaker to Rational Speaker", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Having formalized the language game, we now explore various speaker and listener strategies. First, let us consider literal strategies. A literal speaker (denoted S:LITERAL) chooses uniformly from the set of utterances consistent with a target object, i.e., the ones which are semantically valid; 1 a literal listener (denoted L:LITERAL) guesses an object consistent with the utterance uniformly at random. In the running example (Figure 1) , where the target object is O1, there are two semantically valid utterances:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 430, |
| "end": 440, |
| "text": "(Figure 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(a) right of O2 (b) on O3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "S:LITERAL selects (a) or (b) each with probability 1 2 . If S:LITERAL chooses (a), L:LITERAL will guess the target object O1 correctly with probability 1 2 ; if S:LITERAL chooses (b), L:LITERAL will guess correctly with probability 1. Therefore, the expected utility EU(S:LITERAL, L:LITERAL) = 3 4 . We say S:LITERAL is an example of a reflex speaker because it chooses an utterance without taking the listener into account. A general reflex speaker is depicted in Figure 4(a) , where each edge represents a potential utterance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 465, |
| "end": 476, |
| "text": "Figure 4(a)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Suppose we now have a model of some listener L. Motivated by game theory, we would optimize the expected utility (2) given p L (g | w). We call the resulting speaker S(L) the rational speaker with respect to listener L. Solving for this strategy yields:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p S(L) (w | o) = I[w = w * ], where w * = argmax w p L (o | w ).", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "1 Semantic validity is approximated by a set of heuristic rules (e.g. left is all positions with smaller x-coordinates).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "S w 1 o w 2 w 3 S(L) w 1 o L g 1 w 2 g 2 g 3 w 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(a) Reflex speaker (b) Rational speaker Intuitively, S(L) chooses an utterance, w * , such that, if listener L were to interpret w * , the probability of L guessing the target would be maximized. 2 The rational speaker is depicted in Figure 4 (b), where, as before, each edge at the first level represents a possible choice for the speaker, but there is now a second layer representing the response of the listener.", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 197, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 234, |
| "end": 242, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "To see how an embedded model of the listener improves communication, again consider our running example in Figure 1 . A speaker can describe the target object O1 using either w 1 = on O3 or w 2 = right of O2. Suppose the embedded listener is L:LITERAL, which chooses uniformly from the set of objects consistent with the given utterance. In this scenario, p L:LITERAL (O1 | w 1 ) = 1 because w 1 unambiguously describes the target object, but p L:LITERAL (O1 | w 2 ) = 1 2 . The rational speaker S(L:LITERAL) would therefore choose w 1 , achieving a utility of 1, which is an improvement over the reflex speaker S:LITERAL's utility of 3 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 107, |
| "end": 115, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "S and L collectively get a utility of", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In the previous section, we showed that a literal strategy, one that considers only semantically valid choices, can be used to directly construct a reflex speaker S:LITERAL or an embedded listener in a rational speaker S(L:LITERAL). This section focuses on an orthogonal direction: improving literal strategies with learning. Specifically, we construct learned strategies from log-linear models trained on human annotations. These learned strategies can then be used to construct reflex and rational speaker variants-S:LEARNED and S(L:LEARNED), respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From Literal Speaker to Learned Speaker", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We train the speaker, S:LEARNED, (similarly, listener, L:LEARNED) on training examples which comprise the utterances produced by the human annotators (see Section 6.1 for details on how this data was collected). Each example consists of a 3D model of a room in a house that specifies the 3D positions of each object and the coordinates of a 3D camera. When training the speaker, each example is a pair (o, w), where o is the input target object and w is the output utterance. When training the listener, each example is (w, g), where w is the input utterance and g is the output guessed object. For now, an utterance w consists of two parts:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 A spatial preposition w.r (e.g., right of) from a set of possible prepositions. 3 \u2022 A reference object w.o (e.g., O3) from the set of objects in the room.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We consider more complex utterances in Section 5. Both S:LEARNED and L:LEARNED are parametrized by log-linear models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "p S:LEARNED (w|o; \u03b8 S ) \u221d exp{\u03b8 S \u03c6(o, w)} (4) p L:LEARNED (g|w; \u03b8 L ) \u221d exp{\u03b8 L \u03c6(g, w)} (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where \u03c6(\u2022, \u2022) is the feature vector (see below), \u03b8 S and \u03b8 L are the parameter vectors for speaker and listener. Note that the speaker and listener use the same set of features, but they have different parameters. Furthermore, the first normalization sums over possible utterances w while the second normalization sums over possible objects g in the scene. The two parameter vectors are trained to optimize the loglikelihood of the training data under the respective models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Features We now describe the features \u03c6(o, w). These features draw inspiration from Landau and Jackendoff (1993) and Tellex and Roy (2009) .", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 112, |
| "text": "Landau and Jackendoff (1993)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 117, |
| "end": 138, |
| "text": "Tellex and Roy (2009)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Each object o in the 3D scene is represented by its bounding box, which is the smallest rectangular prism containing o. The following are functions of the camera, target (or guessed object) o, and the reference object w.o in the utterance. The full set of features is obtained by conjoining these functions with indicator functions of the form I[w.r = r], where r ranges over the set of valid prepositions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Proximity functions measure the distance between o and w.o. This is implemented as the minimum over all the pairwise Euclidean distances between the corners of the bounding boxes. We also have indicator functions for whether o is the closest object, among the top 5 closest objects, and among the top 10 closest objects to w.o.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Topological functions measure containment between o and w.o: vol(o \u2229 w.o)/vol(o) and vol(o \u2229 w.o)/vol(w.o). To simplify volume computation, we approximate each object by a bounding box that is aligned with the camera axes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Projection functions measure the relative position of the bounding boxes with respect to one another. Specifically, let v be the vector from the center of w.o to the center of o. There is a function for the projection of v onto each of the axes defined by the camera orientation (see Figure 5) . Additionally, there is a set of indicator functions that capture the relative magnitude of these projections. For example, there is a indicator function denoting whether the projection of v onto the camera's x-axis is the largest of all three projections. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 286, |
| "end": 295, |
| "text": "Figure 5)", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training a Log-Linear Speaker/Listener", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "So far, we have only considered speakers and listeners that deal with utterances consisting of one preposition and one reference object. We now extend these strategies to handle more complex utterances. Specifically, we consider utterances that conform to the following grammar: 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "[noun]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "N \u2192 something | O1 | O2 | \u2022 \u2022 \u2022 [relation] R \u2192 in front of | on | \u2022 \u2022 \u2022 [conjunction] NP \u2192 N RP * [relativization] RP \u2192 R NP", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This grammar captures two phenomena of language use, conjunction and relativization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Conjunction is useful when one spatial relation is insufficient to disambiguate the target object. For example, in Figure 1 , right of O2 could refer to the vase or the table, but using the conjunction right of O2 and on O3 narrows down the target object to just the vase.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 125, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 The main purpose of relativization is to refer to objects without a precise nominal descriptor. With complex utterances, it is possible to chain relative prepositional phrases, for example, using on something right of O2 to refer to the vase. 4 Naturally, we disallow direct reference to the target object.", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 246, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Given an utterance w, we define its complexity |w| as the number of applications of the relativization rule, RP \u2192 R NP, used to produce w. We had only considered utterances of complexity 1 in previous sections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Complex Utterances", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To illustrate the types of utterances available under the grammar, again consider the scene in Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 95, |
| "end": 103, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Utterances of complexity 2 can be generated either using the relativization rule exclusively, or both the conjunction and relativization rules. The relativization rule can be used to generate the following utterances:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 on something that is right of O2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 right of something that is left of O3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Applying the conjunction rule leads to the following utterances:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 right of O2 and on O3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 right of O2 and under O1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 left of O1 and left of O3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Note that we inserted the words that is after each N and the word and between every adjacent pair of RPs generated via the conjunction rule. This is to help a human listener interpret an utterance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example Utterances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Suppose we have a rational speaker S(L) defined in terms of an embedded listener L which operates over utterances of complexity 1. We first extend L to interpret arbitrary utterances of our grammar. The rational speaker (defined in (2)) automatically inherits this extension.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extending the Rational Speaker", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Compositional semantics allows us to define the interpretation of complex utterances in terms of simpler ones. Specifically, each node in the parse tree has a denotation, which is computed recursively in terms of the node's children via a set of simple rules. Usually, denotations are represented as lambda-calculus functions, but for us, they will be distributions over objects in the scene. As a base case for interpreting utterances of complexity 1, we can use either L:LITERAL or L:LEARNED (defined in Sections 3 and 4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extending the Rational Speaker", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Given a subtree w rooted at u \u2208 {N, NP, RP}, we define the denotation of w, w , to be a distribution over the objects in the scene in which the utterance was generated. The listener strategy p L (g|w) = w is recursively as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extending the Rational Speaker", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 If w is rooted at N with a single child x, then w is the uniform distribution over N (x), the set of objects consistent with the word x. \u2022 If w is rooted at NP, we recursively compute the distributions over objects g for each child tree, multiply the probabilities, and renormalize (Hinton, 1999). \u2022 If w is rooted at RP with relation r, we recursively compute the distribution over objects g for the child NP tree. We then appeal to the base case to produce a distribution over objects g which are related to g via relation r.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extending the Rational Speaker", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "This strategy is defined formally as follows: Figure 6 shows an example of this bottomup denotation computation for the utterance on something right of O2 with respect to the scene in Figure 1 . The denotation starts with the lowest NP node O2 , which places all the mass on O2 in the scene. Moving up the tree, we compute the denotation of the RP, right of O2 , using the RP case of (6), which results in a distribution that places equal mass on O1 and O3. 5 The denotation of the N node something is a flat distribution over all the objects in the scene. Continuing up the tree, the denotation of the NP is computed by taking a product of the object distributions, and turns out to be exactly the same split distribution as its RP child. Finally, the denotation at the root is computed by applying the base case to on and the resulting distribution from the previous step. 5 It is worth mentioning that this split distribution between O1 and O3 represents the ambiguity mentioned in Section 3 when discussing the shortcomings of S:LITERAL. Generation So far, we have defined the listener strategy p L (g | w). Given target o, the rational speaker S(L) with respect to this listener needs to compute argmax w p L (o | w) as dictated by (3). This maximization is performed by enumerating all utterances of bounded complexity.", |
| "cite_spans": [ |
| { |
| "start": 875, |
| "end": 876, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 46, |
| "end": 54, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 184, |
| "end": 192, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extending the Rational Speaker", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p L (g | w) \u221d \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 I[g \u2208 N (x)] w = (N x) k j=1 p L (g | w j ) w = (NP w 1 . . . w k ) g p L (g | (r, g ))p L (g | w ) w = (RP (R r) w )", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Extending the Rational Speaker", |
| "sec_num": "5.2" |
| }, |
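The three cases of equation (6) can be sketched as a small recursive interpreter. This is not the authors' code: the scene (`OBJECTS`), lexicon, and relation table (`RELATED`) below are made-up stand-ins, utterances are nested tuples mirroring the grammar, and the relation is stored directly in the RP tuple for brevity rather than under a separate (R r) node.

```python
from collections import defaultdict

# Hypothetical scene data: object IDs, a lexicon mapping words to the
# sets N(x) of consistent objects, and a relation table mapping
# (relation, reference object) -> objects standing in that relation.
OBJECTS = ["O1", "O2", "O3"]
LEXICON = {"O1": {"O1"}, "O2": {"O2"}, "O3": {"O3"},
           "something": set(OBJECTS)}
RELATED = {("right of", "O2"): {"O1", "O3"}}

def normalize(scores):
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()} if total else scores

def denote(w):
    """Bottom-up denotation p_L(g | w), one branch per case of (6)."""
    tag = w[0]
    if tag == "N":                       # (N x): uniform over N(x)
        return normalize({g: 1.0 for g in LEXICON[w[1]]})
    if tag == "NP":                      # (NP w1 ... wk): product of children
        scores = {g: 1.0 for g in OBJECTS}
        for child in w[1:]:
            d = denote(child)
            scores = {g: scores[g] * d.get(g, 0.0) for g in scores}
        return normalize(scores)
    if tag == "RP":                      # (RP r w'): sum over reference g'
        r, child = w[1], w[2]
        scores = defaultdict(float)
        for g_prime, p in denote(child).items():
            targets = RELATED.get((r, g_prime), set())
            for g in targets:            # base case: uniform over related objects
                scores[g] += p / len(targets)
        return normalize(dict(scores))
    raise ValueError(tag)

# "something right of O2": product of N(something) and the RP denotation.
utterance = ("NP", ("N", "something"), ("RP", "right of", ("NP", ("N", "O2"))))
print(denote(utterance))  # mass split evenly between O1 and O3
```

As in the worked example for Figure 6, the NP denotation ends up identical to that of its RP child, since the wildcard something contributes only a flat distribution.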
| { |
| "text": "One shortcoming of the previous approach for extending a listener is that it falsely assumes that a listener can reliably interpret a simple utterance just as well as it can a complex utterance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Listener Confusion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We now describe a more realistic speaker which is robust to listener confusion. Let \u03b1 \u2208 [0, 1] be a focus parameter which determines the confusion level. Suppose we have a listener L. When presented with an utterance w, for each application of the relativization rule, we have a 1 \u2212 \u03b1 probability of losing focus. If we stay focused for the entire utterance (with probability \u03b1 |w| ), then we interpret the utterance according to p L . Otherwise (with probability 1 \u2212 \u03b1 |w| ), we guess an object at random according to p rnd (g | w). We then use (3) to define the rational speaker S(L) with respect the following \"confused listener\" strategy:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Listener Confusion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "p L (g | w) = \u03b1 |w| p L (g | w) + (1 \u2212 \u03b1 |w| )p rnd (g | w).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Listener Confusion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "(7) As \u03b1 \u2192 0, the confused listener is more likely to make a random guess, and thus there is a stronger penalty against using more complex utterances. As \u03b1 \u2192 1, the confused listener converges to p L and the penalty for using complex utterances vanishes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Listener Confusion", |
| "sec_num": "5.3" |
| }, |
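A minimal sketch of the confused-listener mixture (7) and the rational speaker built on top of it via (3). The candidate utterances, their listener distributions, and their complexities below are illustrative assumptions, not learned models; the point is only how the focus parameter alpha trades off precision against complexity.

```python
# Mixture (7): with probability alpha^|w| the listener stays focused and
# follows p_L; otherwise it guesses uniformly at random over the scene.
N_OBJECTS = 3
P_RND = 1.0 / N_OBJECTS

def confused_listener(p_listener, complexity, alpha):
    stay_focused = alpha ** complexity
    return {g: stay_focused * p + (1 - stay_focused) * P_RND
            for g, p in p_listener.items()}

def rational_speaker(target, candidates, alpha):
    """argmax_w of the confused listener's probability of the target,
    enumerating a fixed set of bounded-complexity candidates."""
    def score(candidate):
        text, complexity, p_listener = candidate
        return confused_listener(p_listener, complexity, alpha)[target]
    return max(candidates, key=score)[0]

# Two hypothetical candidates for target O3: precise-but-complex vs
# vague-but-simple. Each entry: (utterance, complexity, p_L(g | w)).
candidates = [
    ("something right of O2 on the table", 2, {"O1": 0.1, "O2": 0.0, "O3": 0.9}),
    ("right of O2",                        1, {"O1": 0.5, "O2": 0.0, "O3": 0.5}),
]
print(rational_speaker("O3", candidates, alpha=0.9))  # complex utterance wins
print(rational_speaker("O3", candidates, alpha=0.2))  # low alpha: simple wins
```

This reproduces the qualitative behavior described above: as alpha shrinks, the penalty against complex utterances grows and the speaker falls back on shorter descriptions.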
| { |
| "text": "Notice that the rational speaker as defined so far does not make full use of our grammar. Specifically, the rational speaker will never use the \"wildcard\" noun something nor the relativization rule in the grammar because an NP headed by the wildcard something can always be replaced by the object ID to obtain a higher utility. For instance, in Figure 6 , the NP spanning something right of O2 can be replaced by O3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 345, |
| "end": 353, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Taboo Setting", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "However, it is not realistic to assume that all objects can be referenced directly. To simulate scenarios where some objects cannot be referenced directly (and to fully exercise our grammar), we introduce the taboo setting. In this setting, we remove from the lexicon some fraction of the object IDs which are closest to the target object. Since the tabooed objects cannot be referenced directly, a speaker must resort to use of the wildcard something and relativization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Taboo Setting", |
| "sec_num": "5.4" |
| }, |
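The tabooing procedure can be sketched as a simple lexicon filter. The object positions, the one-dimensional distance measure, and the helper name `taboo_lexicon` are all illustrative assumptions; the paper's scenes are 3D rooms, but the idea is the same: ban direct reference to the object IDs nearest the target.

```python
def taboo_lexicon(lexicon, positions, target, fraction):
    """Return (pruned lexicon, banned IDs): the closest `fraction` of
    non-target object IDs are removed from the lexicon."""
    others = [o for o in positions if o != target]
    others.sort(key=lambda o: abs(positions[o] - positions[target]))
    banned = set(others[: int(len(others) * fraction)])
    pruned = {word: objs for word, objs in lexicon.items()
              if word not in banned}
    return pruned, banned

# Hypothetical scene: O3 sits closest to the target O1.
lexicon = {"O1": {"O1"}, "O2": {"O2"}, "O3": {"O3"},
           "something": {"O1", "O2", "O3"}}
positions = {"O1": 0.0, "O2": 2.0, "O3": 0.5}
pruned, banned = taboo_lexicon(lexicon, positions, "O1", fraction=0.5)
print(banned)  # O3 is tabooed, so it must be described indirectly
```

With O3 banned, a speaker wanting to use O3 as a reference object is forced into the wildcard plus relativization, e.g. something right of O2, exactly as in the Figure 7 example.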
| { |
| "text": "For example, in Figure 7 , we enable tabooing around the target O1. This prevents the speaker from referring directly to O3, so the speaker is forced to describe O3 via the relativization rule, for example, producing something right of O2. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 24, |
| "text": "Figure 7", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Taboo Setting", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We now present our empirical results, showing that rational speakers, who have embedded models of lis- Figure 8 : Mechanical Turk speaker task: Given the target object (e.g., O1), a human speaker must choose an utterance to describe the object (e.g., right of O2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 103, |
| "end": 111, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "teners, can communicate more successfully than reflex speakers, who do not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We collected 43 scenes (rooms) from the Google Sketchup 3D Warehouse, each containing an average of 22 objects (household items and pieces of furniture arranged in a natural configuration). For each object o in a scene, we create a scenario, which represents an instance of the communication game with o as the target object. There are a total of 2,860 scenarios, which we split evenly into a training set (denoted TR) and a test set (denoted TS).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We created the following two Amazon Mechanical Turk tasks, which enable humans to play the language game on the scenarios: Speaker Task In this task, human annotators play the role of speakers in the language game. They are prompted with a target object o and asked to each produce an utterance w (by selecting a preposition w.r from a dropdown list and clicking on a reference object w.o) that best informs a listener of the identity of the target object.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "For each training scenario o, we asked three speakers to produce an utterance w. The three resulting (o, w) pairs are used to train the learned reflex speaker (S:LITERAL). These pairs were also used to train the learned reflex listener (L:LITERAL), where the target o is treated as the guessed object. See Section 4.1 for the details of the training procedure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Listener Task In this task, human annotators play the role of listeners. Given an utterance generated by a speaker (human or not), the human listener must guess the target object that the speaker saw by clicking on an object. The purpose of the listener task is to evaluate speakers, as described in the next section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Utility (Communicative Success) We primarily evaluate a speaker by its ability to communicate successfully with a human listener. For each test scenario, we asked three listeners to guess the object. We use p L:HUMAN (g | w) to denote the distribution over guessed objects g given prompt w. For example, if two of the three listeners guessed O1, then p L:HUMAN (O1 | w) = 2 3 . The expected utility (2) is then computed by averaging the utility (communicative success) over the test scenarios TS:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "SUCCESS(S) = EU(S, L:HUMAN) (8) = 1 |TS| o\u2208TS w p S (w|o)p L:HUMAN (o|w).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.2" |
| }, |
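The success metric (8) is just an average, over test scenarios, of the probability that the human listener recovers the target. A minimal sketch, where the speaker distribution and the empirical human-listener distribution are made-up stand-ins for p_S and p_L:HUMAN:

```python
def success(test_scenarios, p_speaker, p_human_listener):
    """Average over scenarios o of sum_w p_S(w|o) * p_L:HUMAN(o|w)."""
    total = 0.0
    for o in test_scenarios:
        total += sum(p_w * p_human_listener.get((w, o), 0.0)
                     for w, p_w in p_speaker[o].items())
    return total / len(test_scenarios)

# One scenario: the speaker deterministically says "right of O2";
# 2 of the 3 human listeners guessed the target O1.
p_speaker = {"O1": {"right of O2": 1.0}}
p_human = {("right of O2", "O1"): 2 / 3}
print(success(["O1"], p_speaker, p_human))  # 2/3
```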
| { |
| "text": "Exact Match As a secondary evaluation metric, we also measure the ability of our speaker to exactly match an utterance produced by a human speaker. Note that since there are many ways of describing an object, exact match is neither necessary nor sufficient for successful communication.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We asked three human speakers to each produce an utterance w given a target o. We use p S:HUMAN (w | o) to denote this distribution; for example, p S:HUMAN (right of O2 | o) = 1 3 if exactly one of the three speakers uttered right of O2. We then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Success Exact Match S:LITERAL [reflex] 4.62% 1.11% S(L:LITERAL) [rational] 33.65% 2.91% S:LEARNED [reflex] 38.36% 5.44% S(L:LEARNED) [rational] 52.63%", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 38, |
| "text": "[reflex]", |
| "ref_id": null |
| }, |
| { |
| "start": 64, |
| "end": 74, |
| "text": "[rational]", |
| "ref_id": null |
| }, |
| { |
| "start": 98, |
| "end": 106, |
| "text": "[reflex]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speaker", |
| "sec_num": null |
| }, |
| { |
| "text": "14.03%", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speaker", |
| "sec_num": null |
| }, |
| { |
| "text": "S:HUMAN 41.41% 19.95% Table 1 : Comparison of various speakers on communicative success and exact match, where only utterances of complexity 1 are allowed. The rational speakers (with respect to both the literal listener L:LITERAL and the learned listener L:LEARNED) perform better than their reflex counterparts. While the human speaker (composed of three people) has higher exact match (it is better at mimicking itself), the rational speaker S(L:LEARNED) actually achieves higher communicative success than the human listener.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 22, |
| "end": 29, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Speaker", |
| "sec_num": null |
| }, |
| { |
| "text": "define the exact match of a speaker S as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speaker", |
| "sec_num": null |
| }, |
| { |
| "text": "MATCH(S) = 1 |TS| o\u2208TS w p S:HUMAN (w | o)p S (w | o). (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speaker", |
| "sec_num": null |
| }, |
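The exact-match metric (9) is the expected agreement between the model speaker's utterance distribution and the empirical human-speaker distribution. A minimal sketch with made-up stand-in distributions (the utterance "near O3" is purely hypothetical):

```python
def exact_match(test_scenarios, p_human_speaker, p_speaker):
    """Average over scenarios o of sum_w p_S:HUMAN(w|o) * p_S(w|o)."""
    total = 0.0
    for o in test_scenarios:
        total += sum(p * p_speaker[o].get(w, 0.0)
                     for w, p in p_human_speaker[o].items())
    return total / len(test_scenarios)

# One of three human speakers said "right of O2"; the model always says it.
p_human = {"O1": {"right of O2": 1 / 3, "near O3": 2 / 3}}
p_model = {"O1": {"right of O2": 1.0}}
print(exact_match(["O1"], p_human, p_model))  # 1/3
```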
| { |
| "text": "We first evaluate speakers in the setting where only utterances of complexity 1 are allowed. Table 1 shows the results on both success and exact match. First, our main result is that the two rational speakers S(L:LITERAL) and S(L:LEARNED), which each model a listener explicitly, perform significantly better than the corresponding reflex speakers, both in terms of success and exact match.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 100, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reflex versus Rational Speakers", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Second, it is natural that the speakers that involve learning (S:LITERAL and S(L:LITERAL)) outperform the speakers that only consider the literal meaning of utterances (S:LEARNED and S(L:LEARNED)), as the former models capture subtler preferences using features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reflex versus Rational Speakers", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Finally, we see that in terms of exact match, the human speaker S:HUMAN performs the best (this is not surprising because human exact match is essentially the inter-annotator agreement), but in terms of communicative success, S(L:LEARNED) achieves a higher success rate than S:HUMAN, suggesting that the game-theoretic modeling undertaken by the rational speakers is effective for communication, which is ultimate goal of language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reflex versus Rational Speakers", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Note that exact match is low even for the \"human speaker\", since there are often many equally good ways to evoke an object. At the same time, the success rates for all speakers are rather low, reflecting the fundamental difficulty of the setting: sometimes it is impossible to unambiguously evoke the target object via short utterances. In the next section, we show that we can improve the success rate by allowing the speakers to generate more complex utterances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reflex versus Rational Speakers", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We now evaluate the rational speaker S(L:LEARNED) when it is allowed to generate utterances of complexity 1 or 2. Recall from Section 5.3 that the speaker depends on a focus parameter \u03b1, which governs the embedded listener's ability to interpret the utterance. We divided the test set (TS) in two halves: TSDEV, which we used to tune the value of \u03b1 and TSFINAL, which we used to evaluate success rates. Figure 10 shows the communicative success as a function of \u03b1 on TSDEV. When \u03b1 is small, the embedded listener is confused more easily by more complex utterances; therefore the speaker tends to choose mostly utterances of complexity 1. As \u03b1 increases, the utterances increase in complexity, as does the success rate. However, when \u03b1 approaches 1, the utterances are too complex and the success rate decreases. The dependence between \u03b1 and average utterance complexity is shown in Figure 11 . Table 2 shows the success rates on TSFINAL for \u03b1 \u2192 0 (all utterances have complexity 1), \u03b1 = 1 (all utterances have complexity 2), and \u03b1 tuned to maximize the success rate based on TSDEV. Setting \u03b1 in this manner allows us to effectively balance complexity and ambiguity, resulting in an improvement in the success rate. 12.98% 0.81 Table 2 : Communicative success (on TSFINAL) of the rational speaker S(L:LEARNED) for various values of \u03b1 across different taboo amounts. When the taboo amount is small, small values of \u03b1 lead to higher success rates. As the taboo amount increases, larger values of \u03b1 (resulting in more complex utterances) are better.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 403, |
| "end": 412, |
| "text": "Figure 10", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 882, |
| "end": 891, |
| "text": "Figure 11", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 894, |
| "end": 901, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1227, |
| "end": 1234, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generating More Complex Utterances", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Starting with the view that the purpose of language is successful communication, we developed a gametheoretic model in which a rational speaker generates utterances by explicitly taking the listener into account. On the task of generating spatial descriptions, we showed the rational speaker substantially outperforms a baseline reflex speaker that does not have an embedded model. Our results therefore suggest that a model of the pragmatics of communication is an important factor to consider for generation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "If there are ties, any distribution over the utterances having the same utility is optimal.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We chose 10 prepositions commonly used by people to describe objects in a preliminary data gathering experiment. This list includes multi-word units, which function equivalently to prepositions, such as left of.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Acknowledgements This work was supported by the National Science Foundation through a Graduate Research Fellowship to the first two authors. We also would like to acknowledge Surya Murali, the designer of the 3D Google Sketchup models, and thank the anonymous reviewers for their comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "How to do Things with Words: The William James Lectures delivered at Harvard University in 1955", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "L" |
| ], |
| "last": "Austin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Oxford", |
| "suffix": "" |
| }, |
| { |
| "first": "Uk", |
| "middle": [ |
| "S" |
| ], |
| "last": "Clarendon", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Branavan", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 1962, |
| "venue": "Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), Singapore. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. L. Austin. 1962. How to do Things with Words: The William James Lectures delivered at Harvard Univer- sity in 1955. Oxford, Clarendon, UK. S. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instruc- tions to actions. In Association for Computational Lin- guistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), Singapore. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Learning to sportscast: A test of grounded language acquisition", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "L" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "128--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In International Conference on Machine Learning (ICML), pages 128-135. Omnipress.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Managing ambiguities across utterances in dialogue", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Devault", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Stone", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David DeVault and Matthew Stone. 2007. Managing ambiguities across utterances in dialogue.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Reading to learn: Constructing features from semantic abstracts", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Goldwasser", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Eisenstein, J. Clarke, D. Goldwasser, and D. Roth. 2009. Reading to learn: Constructing features from semantic abstracts. In Empirical Methods in Natural Language Processing (EMNLP), Singapore.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Embodied meaning in a neural theory of language", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Brain and Language", |
| "volume": "89", |
| "issue": "", |
| "pages": "385--392", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Feldman and S. Narayanan. 2004. Embodied meaning in a neural theory of language. Brain and Language, 89:385-392.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Representing intentions in a cognitive model of language acquisition: Effects of phrase structure on situated verb learning", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Fleischman", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Fleischman and D. Roy. 2007. Representing inten- tions in a cognitive model of language acquisition: Ef- fects of phrase structure on situated verb learning. In Association for the Advancement of Artificial Intelli- gence (AAAI), Cambridge, MA. MIT Press.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Using speakers' referential intentions to model early cross-situational word learning", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "C" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "B" |
| ], |
| "last": "Tenenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Psychological Science", |
| "volume": "20", |
| "issue": "5", |
| "pages": "578--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. C. Frank, N. D. Goodman, and J. B. Tenenbaum. 2009. Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20(5):578-585.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Grounded semantic composition for visual scenes", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Gorniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Deb", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "In Journal of Artificial Intelligence Research", |
| "volume": "21", |
| "issue": "", |
| "pages": "429--470", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Gorniak and Deb Roy. 2004. Grounded semantic composition for visual scenes. In Journal of Artificial Intelligence Research, volume 21, pages 429-470.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Syntax and Semantics; Logic and Conversation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "P" |
| ], |
| "last": "Grice", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Speech Acts", |
| "volume": "3", |
| "issue": "", |
| "pages": "41--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. P. Grice. 1975. Syntax and Semantics; Logic and Conversation. 3:Speech Acts:41-58.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Products of experts", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "International Conference on Artificial Neural Networks (ICANN)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Hinton. 1999. Products of experts. In International Conference on Artificial Neural Networks (ICANN).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Game theory in semantics and pragmatics", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "J\u00e4ger", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. J\u00e4ger. 2008. Game theory in semantics and pragmat- ics. Technical report, University of T\u00fcbingen.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Toward understanding natural language directions", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kollar", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human-Robot Interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "259--266", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Kollar, S. Tellex, D. Roy, and N. Roy. 2010. Toward understanding natural language directions. In Human- Robot Interaction, pages 259-266.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "what\" and \"where\" in spatial language and spatial cognition", |
| "authors": [ |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Landau", |
| "suffix": "" |
| }, |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Jackendoff", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Behavioral and Brain Sciences", |
| "volume": "16", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barbara Landau and Ray Jackendoff. 1993. \"what\" and \"where\" in spatial language and spatial cognition. Behavioral and Brain Sciences, 16(2spatial preposi- tions analysis, cross linguistic conceptual similarities;", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Learning semantic correspondences with less supervision", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), Singapore. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2009. Learning se- mantic correspondences with less supervision. In As- sociation for Computational Linguistics and Interna- tional Joint Conference on Natural Language Process- ing (ACL-IJCNLP), Singapore. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A Bayesian model of the acquisition of compositional semantics", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "T" |
| ], |
| "last": "Piantadosi", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "D" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ellis", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "B" |
| ], |
| "last": "Tenenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. T. Piantadosi, N. D. Goodman, B. A. Ellis, and J. B. Tenenbaum. 2008. A Bayesian model of the acquisi- tion of compositional semantics. In Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Journal of experimental psychology. general; grounding spatial language in perception: an empirical and computational investigation", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Regier", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "A" |
| ], |
| "last": "Carlson", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "130", |
| "issue": "", |
| "pages": "273--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T Regier and LA Carlson. 2001. Journal of experimen- tal psychology. general; grounding spatial language in perception: an empirical and computational investiga- tion. 130(2):273-298.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Grounding spatial prepositions for video search", |
| "authors": [ |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "Deb", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ICMI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefanie Tellex and Deb Roy. 2009. Grounding spatial prepositions for video search. In ICMI.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Learning synchronous grammars for semantic parsing with lambda calculus", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "W" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "960--967", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. W. Wong and R. J. Mooney. 2007. Learning syn- chronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguis- tics (ACL), pages 960-967, Prague, Czech Republic. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "On the integration of grounding language and learning objects", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "H" |
| ], |
| "last": "Ballard", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "488--493", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Yu and D. H. Ballard. 2004. On the integration of grounding language and learning objects. In Asso- ciation for the Advancement of Artificial Intelligence (AAAI), pages 488-493, Cambridge, MA. MIT Press.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Uncertainty in Artificial Intelligence (UAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "658--666", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658-666.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "Diagram representing the communication game. A target, o, is given to the speaker, who generates an utterance w. Based on this utterance, the listener generates a guess g. If o = g, then both the listener and speaker get a utility of 1; otherwise they get a utility of 0.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": "(a) A reflex speaker (S) directly selects an utterance based only on the target object. Each edge represents a different choice of utterance. (b) A rational speaker (S(L)) selects an utterance based on an embedded model of the listener (L). Each edge in the first layer represents a different choice the speaker can make, and each edge in the second layer represents a response of the listener.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "text": "The projection features are computed by projecting a vector v extending from the center of the reference object to the center of the target object onto the camera axes f x and f y .", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "text": "The listener model maps an utterance to a distribution over objects in the room. Each internal NP or RP node is a distribution over objects in the room.", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "text": "With tabooing enabled around O1, O3 can no longer be referred to directly (represented by an X).", |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "num": null, |
| "uris": null, |
| "text": "Mechanical Turk listener task: a human listener is prompted with an utterance generated by a speaker (e.g., right of O2), and asked to click on an object (shown by the red arrow).", |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "num": null, |
| "uris": null, |
| "text": "Communicative success as a function of focus parameter \u03b1 without tabooing on TSDEV. The optimal value of \u03b1 is obtained at 0.79.", |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "num": null, |
| "uris": null, |
| "text": "Average utterance complexity as a function of the focus parameter \u03b1 on TSDEV. Higher values of \u03b1 yield more complex utterances.", |
| "type_str": "figure" |
| } |
| } |
| } |
| } |