| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T01:13:31.705308Z" |
| }, |
| "title": "Reference and coreference in situated dialogue", |
| "authors": [ |
| { |
| "first": "Sharid", |
| "middle": [], |
| "last": "Lo\u00e1iciga", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Potsdam", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Dobnik", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Gothenburg", |
| "location": { |
| "country": "Sweden" |
| } |
| }, |
| "email": "simon.dobnik@gu.se" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Potsdam", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "david.schlangen@uni-potsdam.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In recent years, a large number of corpora have been developed for vision and language tasks. We argue that there is still significant room for corpora that increase the complexity of both visual and linguistic domains and which capture different varieties of perceptual and conversational contexts. Working with two corpora approaching this goal, we present a linguistic perspective on some of the challenges in creating and extending resources combining language and vision while preserving continuity with the existing best practices in the area of coreference annotation.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In recent years, a large number of corpora have been developed for vision and language tasks. We argue that there is still significant room for corpora that increase the complexity of both visual and linguistic domains and which capture different varieties of perceptual and conversational contexts. Working with two corpora approaching this goal, we present a linguistic perspective on some of the challenges in creating and extending resources combining language and vision while preserving continuity with the existing best practices in the area of coreference annotation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "With the ease of combining representations from different modalities provided by neural networks, text and vision are coming together. There is a growing body of resources addressing a setting in which the visual context can be exploited to support a textual task, for example visual coreference resolution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Several corpora have been developed in the domain of vision and language (V&L), for example corpora of image captions (Lin et al., 2014; Young et al., 2014; , images and paragraph descriptions (Krause et al., 2017) , visual question answering (Antol et al., 2015) , visual dialogue (Das et al., 2017) and embodied question answering (Das et al., 2018) . Through these, V&L research has progressively moved from sentence descriptions to descriptions involving utterances and conversations, thereby adding complexity to their semantic representations. In parallel to the corpora, V&L systems have been developed, but these are of course limited by the complexity of the task for which the dataset has been collected. The end goal of the current research is to move to a more complex linguistic setting involving multiparty dialogue and visual representations that go beyond individual images.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 136, |
| "text": "(Lin et al., 2014;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 137, |
| "end": 156, |
| "text": "Young et al., 2014;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 193, |
| "end": 214, |
| "text": "(Krause et al., 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 243, |
| "end": 263, |
| "text": "(Antol et al., 2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 282, |
| "end": 300, |
| "text": "(Das et al., 2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 333, |
| "end": 351, |
| "text": "(Das et al., 2018)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Situated reference resolution involves grounding linguistic expressions in perceptual representations (Harnad, 1990) . Coreference resolution, traditionally a textual task, involves linking linguistic expressions referring to the same discourse entities (Stede, 2012) . While challenging, the task is shaped by the familiar nature of written texts: linear, planned and structured; this in turn determines the coreference mechanisms and devices found in them. In resources combining V&L, however, the textual part is often a dialogue or a set of question-answer pairs. As a result, the coreference devices differ considerably from those found in texts and are closer to those of actual conversations, in which people create references to entities on the fly. This of course comes with its own challenges, but some relations are also made easier, since they can be grounded in the image.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 116, |
| "text": "(Harnad, 1990)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 254, |
| "end": 267, |
| "text": "(Stede, 2012)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As V&L come together, there is therefore an increased need for extending resources for the task of visual coreference resolution. This means engaging with the challenges along two axes:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Dialogue: built by two speakers who each have their own mental state and cognitive process but who are communicating through referring expressions which are projected in the same conversation. \u2022 Shared physical context: simultaneous access to an image or other perceptual context which enables non-linear references to it. Instead, the reference is guided by visual attention. We present a linguistic perspective on these challenges by analysing a pilot annotation of two situated dialogue corpora: the Cups corpus (Dobnik et al., 2020) and the Tell-me-more corpus (Ilinykh et al., 2019) , shown below in Figure 1 and example (1) respectively. Starting from the annotation scheme for several textual coreference datasets (Artstein and Poesio, 2006; Pradhan et al., 2007; Uryupina et al., 2019) , this exercise proved useful for pinpointing the ways in which the purely textual document scenario differs from the domain of embodied interaction.", |
| "cite_spans": [ |
| { |
| "start": 517, |
| "end": 538, |
| "text": "(Dobnik et al., 2020)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 567, |
| "end": 589, |
| "text": "(Ilinykh et al., 2019)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 723, |
| "end": 750, |
| "text": "(Artstein and Poesio, 2006;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 751, |
| "end": 772, |
| "text": "Pradhan et al., 2007;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 773, |
| "end": 795, |
| "text": "Uryupina et al., 2019)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 607, |
| "end": 615, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The first corpus contains a conversation between two participants over an almost identical visual scene involving a table and cups where participants have different locations (Figure 1 ). Some cups have been removed from each participant's view, and they are instructed to converse over a computer terminal in order to find the cups that each does not see. The Tell-me-more corpus consists of images accompanied by a short text of five complete sentences, collected by asking participants to describe the image to a friend, successively adding details. The genre of these texts is therefore mixed: it lies between standard text (as found in news, for example) and dialogue data, reflecting the features of conversations rather than written conventions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 175, |
| "end": 184, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "These corpora are complementary: Cups gives us accurate visual ground-truth information with free, unrestricted dialogue, while Tell-me-more offers a richer, unrestricted image with short, task-constrained (pseudo-)dialogues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we discuss a number of cases from these corpora that challenge both standard language-grounding annotation and standard coreference annotation. This work thus points towards the future work required to create (co)reference annotation schemes that can handle situated dialogue.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Pointing to the inability of NLP tools to handle the textual part of situated dialogue, early work described the need to ground the dialogue in the image in a manner informed by linguistics (Byron, 2003) .", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 208, |
| "text": "(Byron, 2003)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As content develops in a text, entities are introduced and re-mentioned, establishing discourse referents. The context is provided by the document and no extra-linguistic reference is needed for resolving the reference to an entity (Karttunen, 1969) . In situated dialogue, on the other hand, the visual modality brings the extra-linguistic context as a source of referents. Here, resolving references to entities can thus be achieved either by looking at the picture or by reading the discourse. Recording both strategies separately is crucial if we want to understand and model them soundly, in keeping with theories of cognitive processing (cf. (Kelleher et al., 2005) ). Extending the coreference annotation paradigm is thus the best bet, although not a lot of work exists in this direction. Textual coreference Annotated data for the coreference resolution task has mainly focused on news texts and concrete nouns, excluding reference to events and other coreferential relations such as bridging, deixis, and ambiguous items well documented in the linguistic literature but deemed infrequent or too difficult to process (Poesio, 2016). In contrast, there is a growing body of literature interested in phenomena beyond the nominal case (Kolhatkar et al., 2018; Nedoluzhko and Lapshinova-Koltunski, 2016) , resulting in new, although still small in size, annotated corpora (Lapshinova-Koltunski et al., 2018; Zeldes, 2017; Uryupina et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 232, |
| "end": 249, |
| "text": "(Karttunen, 1969)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 648, |
| "end": 671, |
| "text": "(Kelleher et al., 2005)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1205, |
| "end": 1229, |
| "text": "(Kolhatkar et al., 2018;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1230, |
| "end": 1272, |
| "text": "Nedoluzhko and Lapshinova-Koltunski, 2016)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1341, |
| "end": 1376, |
| "text": "(Lapshinova-Koltunski et al., 2018;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1377, |
| "end": 1390, |
| "text": "Zeldes, 2017;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1391, |
| "end": 1413, |
| "text": "Uryupina et al., 2020)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Visual coreference Coreference work based on the popular VisDial dataset (Das et al., 2017) targets only a limited set of referential expressions, partly because it relies on automatic tools (Kottur et al., 2018; Yu et al., 2019) , which are known to be problematic with this genre. With a focus on grounded human interaction, there are corpora whose textual part comprises question-answer pairs (Antol et al., 2015; Goyal et al., 2017) . Those, however, are short in nature, with few opportunities for re-mention of the different objects in the image and hence for coreference. Last, corpora designed for navigation and location (Stoia et al., 2008; Thomason et al., 2019) , which focus on a different kind of task and description, might be good candidates to be explored and extended in a similar fashion to our corpora.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 91, |
| "text": "(Das et al., 2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 191, |
| "end": 212, |
| "text": "(Kottur et al., 2018;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 213, |
| "end": 229, |
| "text": "Yu et al., 2019)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 396, |
| "end": 416, |
| "text": "(Antol et al., 2015;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 417, |
| "end": 436, |
| "text": "Goyal et al., 2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 630, |
| "end": 650, |
| "text": "(Stoia et al., 2008;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 651, |
| "end": 673, |
| "text": "Thomason et al., 2019)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Referring expressions generation The goal in this area is to generate expressions over several turns of conversation in a natural and non-repetitive way, following principles of communicative discourse as for example in the recent PhotoBook dataset (Takmaz et al., 2020) . Our work is complementary to such undertakings as it focuses on the interpretative rather than generative part.", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 270, |
| "text": "(Takmaz et al., 2020)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The notion of coreference chain-the sequence of mentions pointing to the same entity in a text-is central in coreference resolution. Built on top of the document as a unit, this notion relies on and in turn informs theories about accessibility hierarchy and salience of entities (Ariel, 1988 (Ariel, , 2004 Grosz et al., 1995) . In dialogue, however, references crisscross between the speakers; one step further, in situated dialogue, references crisscross between the speakers and the objects in the image. In this section we review the challenges that arise in the annotation of anaphoric phenomena in data of this genre.", |
| "cite_spans": [ |
| { |
| "start": 277, |
| "end": 289, |
| "text": "(Ariel, 1988", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 290, |
| "end": 304, |
| "text": "(Ariel, , 2004", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 305, |
| "end": 324, |
| "text": "Grosz et al., 1995)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Understanding reference in situated dialogue", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In spoken discourse, people try their best to ground their references so as to make sure they understand each other. To do so, they rely on the mechanisms of attention (Lavie et al., 2004) . Although most concrete references can be grounded in the image easily, there are also some difficult cases. References can be made to portions of the image that have no bounding box, such as base of the tub in example (1).", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 184, |
| "text": "(Lavie et al., 2004)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1. This is a picture of a bathtub.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2. The tub is white.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "3. The wall and base of the tub are brown. 4. The door appears to be glass. 5. There is a handrail on the side wall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the previous example the difficulty arises because the object detector failed to recognise the target object. However, referring expressions are referential to different degrees: in \"Where are your blue ones?\", is the speaker referring to a particular subset of blue cups, all the blue cups in the scene, blue cups in general, or not referring to any particular set of objects? The distinction is sometimes not clear. Last, as the image determines the scope of referentiality, typical semantic properties are frequently used to refer back to the objects in the image: colour, shape, size. These can be genuinely referential (a form of ellipsis) or used in an attributive manner. Compare, for example, white in the second sentence of (1) with (2) below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(2) P1: closest to me, from left to right red, blue, white, red P2: ok, on your side I only see red, blue, white", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounding and referentiality", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Contrary to a Gricean-based analysis of spoken discourse, coherence-based theories of discourse do not traditionally take the cognitive state of the speaker as a necessary element of text interpretation (Bender and Lascarides, 2019) . In situated dialogue, however, although the image can be treated as the ground truth of the situation, the speaker's cognitive state has to be considered to disambiguate their utterances: the hearer builds a model of the beliefs, desires and intentions associated with the utterance. This is exemplified in the following excerpt from Cups, where neither participant sees one of the two nearby red cups, but each misses a different one. They mistakenly believe that there is only one missing red cup, and this dis-alignment of their beliefs gradually leads to increasingly diverging cognitive states.", |
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 232, |
| "text": "(Bender and Lascarides, 2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speakers' cognitive state", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We observe a common strategy of grouping things in order to refer to them collectively. This raises the question: What is the level of specification needed in a coreference annotation? One could think about this in linguistic terms, for instance mass nouns or sets; alternatively, in computer vision, there is the distinction between things and stuff (Caesar et al., 2018) .", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 372, |
| "text": "(Caesar et al., 2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Level of specification", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In (4) below, is the reference to the curtains a case of a set composed of individual instances, or is it a mass noun? Note that the curtains is a type of stuff in Caesar et al.'s work. (4) 1. I see a picture of an entertainment room. 2. there is a round table in the foreground and a fussball table in the middle of the room, as well as a pool table further back. 3. there is a sitting area with chairs facing a television set. 4. the room has several windows with green curtains. 5. the floors are made of a brown tile.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 184, |
| "text": "Caesar et al.'s work. (4)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Level of specification", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In (5) from Cups, on the other hand, the speakers refer to rows of objects even though these are not arranged in strict geometric lines. Hence, which objects are included is contextually defined and not always clear.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Level of specification", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "P2: ok, so your next row P2: you said there 's a takeaway cup somewhere marooned all alone P1: Okay. So we have that row I described with the now found red cup. Then a takeaway cup that is between that row and the next. It's very much in the middle of the two rows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(5)", |
| "sec_num": null |
| }, |
| { |
| "text": "Different referring expressions have different properties and behaviour, an idea behind theories of salience and accessibility. They are based on the observation that some forms are used to introduce entities and others to refer back to them: some entities are discourse-new and some are discourse-old. In situated dialogue, the image provides an additional context and source of referents, but it does not follow that the status of subsequent mentions is old. In the example below, the fact that the discourse starts with It is licensed by the image, and this source of reference should be accounted for differently in the annotation than a genuine discourse-old case such as the it in sentence 2. (6) 1. It s a well-lit kitchen with stainded [sic] wooden cupboards . 2. There's a microwave mounted over the stove, which has a red tea kettle on it. 3. The appliances are black and stainless steel in the kitchen. 4. The countertops look like they 're black granite. 5. The window has sunlight streaming in and it 's very brightly light.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information status", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "V&L resources provide a unique opportunity to explore the notion of discourse entity in a grounded context. Extending coreference annotation to this domain is essential to understand the relationship between reference and coreference. The same mechanisms that humans adopt to resolve coreference in the textual domain should underlie results in the V&L domain. Indeed, reference is underspecified in both modalities; any kind of information extraction from these domains will benefit from mechanisms that resolve this underspecification: capturing coreference opens a door to capturing coherence. Furthermore, a rich annotation scheme leads to the development of corpora that allow the training of data-driven systems for the V&L domain and social robotics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "VQA: Visual Question Answering", |
| "authors": [ |
| { |
| "first": "Stanislaw", |
| "middle": [], |
| "last": "Antol", |
| "suffix": "" |
| }, |
| { |
| "first": "Aishwarya", |
| "middle": [], |
| "last": "Agrawal", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiasen", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Lawrence" |
| ], |
| "last": "Zitnick", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question An- swering. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Referring and accessibility", |
| "authors": [ |
| { |
| "first": "Mira", |
| "middle": [], |
| "last": "Ariel", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Journal of Linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "65--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mira Ariel. 1988. Referring and accessibility. Journal of Linguistics, 24(1):65-87.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Accessibility marking: Discourse functions, discourse profiles, and processing cues", |
| "authors": [ |
| { |
| "first": "Mira", |
| "middle": [], |
| "last": "Ariel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Discourse Processes", |
| "volume": "37", |
| "issue": "", |
| "pages": "91--116", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mira Ariel. 2004. Accessibility marking: Discourse functions, discourse profiles, and processing cues. Discourse Processes, 37(2):91-116.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Arrau annotation manual (trains dialogues)", |
| "authors": [ |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ron Artstein and Massimo Poesio. 2006. Arrau anno- tation manual (trains dialogues).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [ |
| "M" |
| ], |
| "last": "Bender", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Synthesis Lectures on Human Language Technologies", |
| "volume": "12", |
| "issue": "3", |
| "pages": "1--268", |
| "other_ids": { |
| "DOI": [ |
| "10.2200/S00935ED1V02Y201907HLT043" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily M. Bender and Alex Lascarides. 2019. Linguis- tic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics. Synthesis Lectures on Human Language Technolo- gies, 12(3):1-268.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Understanding referring expressions in situated language: some challenges for real-world agents", |
| "authors": [ |
| { |
| "first": "Donna", |
| "middle": [ |
| "K" |
| ], |
| "last": "Byron", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the First International Workshop on Language Understanding and Agents for Real World Interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "39--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Donna K Byron. 2003. Understanding referring ex- pressions in situated language some challenges for real-world agents. In Proceedings of the First Inter- national Workshop on Language Understanding and Agents for Real World Interaction, pages 39-47.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Coco-stuff: Thing and stuff classes in context", |
| "authors": [ |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Caesar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jasper", |
| "middle": [], |
| "last": "Uijlings", |
| "suffix": "" |
| }, |
| { |
| "first": "Vittorio", |
| "middle": [], |
| "last": "Ferrari", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. 2018. Coco-stuff: Thing and stuff classes in con- text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Embodied question answering", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Samyak", |
| "middle": [], |
| "last": "Datta", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgia", |
| "middle": [], |
| "last": "Gkioxari", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops", |
| "volume": "", |
| "issue": "", |
| "pages": "2054--2063", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Das, Samyak Datta, Georgia Gkioxari, Ste- fan Lee, Devi Parikh, and Dhruv Batra. 2018. Em- bodied question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2054-2063.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Visual dialog", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Satwik", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "Khushi", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Avi", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Deshraj", |
| "middle": [], |
| "last": "Yadav", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "M", |
| "F" |
| ], |
| "last": "Moura", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "326--335", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326-335.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Local alignment of frame of reference assignment in English and Swedish dialogue", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Dobnik", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Kelleher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Howes", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Spatial Cognition XII: Proceedings of the 12th International Conference, Spatial Cognition 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "251--267", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Dobnik, John D. Kelleher, and Christine Howes. 2020. Local alignment of frame of reference assign- ment in English and Swedish dialogue. In Spatial Cognition XII: Proceedings of the 12th International Conference, Spatial Cognition 2020, Riga, Latvia, pages 251-267, Cham, Switzerland. Springer Inter- national Publishing.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", |
| "authors": [ |
| { |
| "first": "Yash", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Tejas", |
| "middle": [], |
| "last": "Khot", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Summers-Stay", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In Confer- ence on Computer Vision and Pattern Recognition (CVPR).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Centering: A framework for modelling the local coherence of discourse", |
| "authors": [ |
| { |
| "first": "Barbara", |
| "middle": [ |
| "J" |
| ], |
| "last": "Grosz", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [ |
| "K" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Weinstein", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "21", |
| "issue": "2", |
| "pages": "203--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modelling the local coherence of discourse. Computational Linguistics, 21(2):203-225.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The symbol grounding problem", |
| "authors": [ |
| { |
| "first": "Stevan", |
| "middle": [], |
| "last": "Harnad", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Physica D", |
| "volume": "42", |
| "issue": "1-3", |
| "pages": "335--346", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stevan Harnad. 1990. The symbol grounding problem. Physica D, 42(1-3):335-346.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Tell me more: A dataset of visual scene description sequences", |
| "authors": [ |
| { |
| "first": "Nikolai", |
| "middle": [], |
| "last": "Ilinykh", |
| "suffix": "" |
| }, |
| { |
| "first": "Sina", |
| "middle": [], |
| "last": "Zarrie\u00df", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 12th International Conference on Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "152--157", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-8621" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikolai Ilinykh, Sina Zarrie\u00df, and David Schlangen. 2019. Tell me more: A dataset of visual scene de- scription sequences. In Proceedings of the 12th In- ternational Conference on Natural Language Gener- ation, pages 152-157, Tokyo, Japan. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Discourse referents", |
| "authors": [ |
| { |
| "first": "Lauri", |
| "middle": [], |
| "last": "Karttunen", |
| "suffix": "" |
| } |
| ], |
| "year": 1969, |
| "venue": "International Conference on Computational Linguistics COLING 1969: Preprint No. 70", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauri Karttunen. 1969. Discourse referents. In Inter- national Conference on Computational Linguistics COLING 1969: Preprint No. 70, S\u00e5nga S\u00e4by, Swe- den.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Dynamically structuring updating and interrelating representations of visual and linguistic discourse", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Kelleher", |
| "suffix": "" |
| }, |
| { |
| "first": "Fintan", |
| "middle": [ |
| "J" |
| ], |
| "last": "Costello", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Artificial Intelligence", |
| "volume": "167", |
| "issue": "", |
| "pages": "62--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John D. Kelleher, Fintan J. Costello, and Josef van Gen- abith. 2005. Dynamically structuring updating and interrelating representations of visual and linguistic discourse. Artificial Intelligence, 167:62-102.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Anaphora with nonnominal antecedents in computational linguistics: a survey", |
| "authors": [ |
| { |
| "first": "Varada", |
| "middle": [], |
| "last": "Kolhatkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roussel", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Zinsmeister", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Computational Linguistics", |
| "volume": "44", |
| "issue": "3", |
| "pages": "547--612", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Varada Kolhatkar, Adam Roussel, Stefanie Dipper, and Heike Zinsmeister. 2018. Anaphora with non- nominal antecedents in computational linguistics: a survey. Computational Linguistics, 44(3):547-612.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Visual coreference resolution in visual dialog using neural module networks", |
| "authors": [ |
| { |
| "first": "Satwik", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "M", |
| "F" |
| ], |
| "last": "Moura", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "The European Conference on Computer Vision (ECCV)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satwik Kottur, Jos\u00e9 M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual corefer- ence resolution in visual dialog using neural mod- ule networks. In The European Conference on Com- puter Vision (ECCV).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A hierarchical approach for generating descriptive image paragraphs", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Krause", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Ranjay", |
| "middle": [], |
| "last": "Krishna", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "3337--3345", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for gen- erating descriptive image paragraphs. In 2017 IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 3337-3345.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Visual Genome: Connecting language and vision using crowdsourced dense image annotations", |
| "authors": [ |
| { |
| "first": "Ranjay", |
| "middle": [], |
| "last": "Krishna", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuke", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Groth", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Hata", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Kravitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannis", |
| "middle": [], |
| "last": "Kalantidis", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Jia", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "A" |
| ], |
| "last": "Shamma", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Bernstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Journal of Computer Vision", |
| "volume": "123", |
| "issue": "1", |
| "pages": "32--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. Interna- tional Journal of Computer Vision, 123(1):32-73.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "ParCorFull: a parallel corpus annotated with full coreference", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Lapshinova-Koltunski", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Hardmeier", |
| "suffix": "" |
| }, |
| { |
| "first": "Pauline", |
| "middle": [], |
| "last": "Krielke", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of 11th Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "423--428", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Lapshinova-Koltunski, Christian Hardmeier, and Pauline Krielke. 2018. ParCorFull: a parallel corpus annotated with full coreference. In Proceedings of 11th Language Resources and Evaluation Conference, pages 423-428, Miyazaki, Japan. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Load theory of selective attention and cognitive control", |
| "authors": [ |
| { |
| "first": "Nilli", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandra", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [ |
| "W" |
| ], |
| "last": "De Fockert", |
| "suffix": "" |
| }, |
| { |
| "first": "Essi", |
| "middle": [], |
| "last": "Viding", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Journal of Experimental Psychology: General", |
| "volume": "133", |
| "issue": "3", |
| "pages": "339--354", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nilli Lavie, Aleksandra Hirst, Jan W de Fockert, and Essi Viding. 2004. Load theory of selective atten- tion and cognitive control. Journal of Experimental Psychology: General, 133(3):339-354.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Microsoft COCO: Common objects in context", |
| "authors": [ |
| { |
| "first": "Tsung-Yi", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Maire", |
| "suffix": "" |
| }, |
| { |
| "first": "Serge", |
| "middle": [], |
| "last": "Belongie", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Hays", |
| "suffix": "" |
| }, |
| { |
| "first": "Pietro", |
| "middle": [], |
| "last": "Perona", |
| "suffix": "" |
| }, |
| { |
| "first": "Deva", |
| "middle": [], |
| "last": "Ramanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Doll\u00e1r", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Lawrence" |
| ], |
| "last": "Zitnick", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computer Vision -ECCV 2014", |
| "volume": "", |
| "issue": "", |
| "pages": "740--755", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Abstract coreference in a multilingual perspective: a view on Czech and German", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Nedoluzhko", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Lapshinova-Koltunski", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Workshop on Coreference Resolution Beyond OntoNotes, COR-BON 2016", |
| "volume": "", |
| "issue": "", |
| "pages": "47--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anna Nedoluzhko and Ekaterina Lapshinova-Koltunski. 2016. Abstract coreference in a multilingual perspective: a view on Czech and German. In Proceedings of the Workshop on Coreference Resolution Beyond OntoNotes, CORBON 2016, pages 47-52, Ann Arbor, Michigan. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Linguistic and cognitive evidence about anaphora", |
| "authors": [ |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Anaphora Resolution: Algorithms, Resources and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "23--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimo Poesio. 2016. Linguistic and cognitive evi- dence about anaphora. In Massimo Poesio, Roland Stuckardt, and Yannick Versley, editors, Anaphora Resolution: Algorithms, Resources and Applica- tions, pages 23-54. Springer-Verlag, Berlin Heidel- berg.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Unrestricted coreference: Identifying entities and events in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [ |
| "S" |
| ], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "MacBride", |
| "suffix": "" |
| }, |
| { |
| "first": "Linnea", |
| "middle": [], |
| "last": "Micciulla", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "International Conference on Semantic Computing (ICSC 2007)", |
| "volume": "", |
| "issue": "", |
| "pages": "446--453", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ICSC.2007.93" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer S. Pradhan, Lance Ramshaw, Ralph Weischedel, Jessica MacBride, and Linnea Micciulla. 2007. Unrestricted coreference: Identifying entities and events in OntoNotes. In International Conference on Semantic Computing (ICSC 2007), pages 446-453.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Discourse Processing", |
| "authors": [ |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Stede", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manfred Stede. 2012. Discourse Processing. Morgan and Claypool Publishers, Toronto.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "SCARE: a situated corpus with annotated referring expressions", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Stoia", |
| "suffix": "" |
| }, |
| { |
| "first": "Darla", |
| "middle": [ |
| "Magdalene" |
| ], |
| "last": "Shockley", |
| "suffix": "" |
| }, |
| { |
| "first": "Donna", |
| "middle": [ |
| "K" |
| ], |
| "last": "Byron", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Fosler-Lussier", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Stoia, Darla Magdalene Shockley, Donna K. Byron, and Eric Fosler-Lussier. 2008. SCARE: a situated corpus with annotated referring expres- sions. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Lan- guage Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts", |
| "authors": [ |
| { |
| "first": "Ece", |
| "middle": [], |
| "last": "Takmaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Mario", |
| "middle": [], |
| "last": "Giulianelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandro", |
| "middle": [], |
| "last": "Pezzelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Arabella", |
| "middle": [], |
| "last": "Sinclair", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Fern\u00e1ndez", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "4350--4368", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.emnlp-main.353" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ece Takmaz, Mario Giulianelli, Sandro Pezzelle, Ara- bella Sinclair, and Raquel Fern\u00e1ndez. 2020. Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4350-4368, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Vision-and-dialog navigation", |
| "authors": [ |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Thomason", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Murray", |
| "suffix": "" |
| }, |
| { |
| "first": "Maya", |
| "middle": [], |
| "last": "Cakmak", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Conference on Robot Learning (CoRL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019. Vision-and-dialog naviga- tion. In Conference on Robot Learning (CoRL).", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Annotating a broad range of anaphoric phenomena, in multiple genres: the ARRAU corpus", |
| "authors": [ |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonella", |
| "middle": [], |
| "last": "Bristot", |
| "suffix": "" |
| }, |
| { |
| "first": "Federica", |
| "middle": [], |
| "last": "Cavicchio", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesca", |
| "middle": [], |
| "last": "Delogu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kepa", |
| "middle": [], |
| "last": "Rodriguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Natural Language Engineering", |
| "volume": "26", |
| "issue": "1", |
| "pages": "95--128", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olga Uryupina, Ron Artstein, Antonella Bristot, Feder- ica Cavicchio, Francesca Delogu, Kepa Rodriguez, and Massimo Poesio. 2020. Annotating a broad range of anaphoric phenomena, in multiple genres: the ARRAU corpus. Natural Language Engineer- ing, 26(1):95-128.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Annotating a broad range of anaphoric phenomena, in a variety of genres: the ARRAU corpus", |
| "authors": [ |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonella", |
| "middle": [], |
| "last": "Bristot", |
| "suffix": "" |
| }, |
| { |
| "first": "Federica", |
| "middle": [], |
| "last": "Cavicchio", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesca", |
| "middle": [], |
| "last": "Delogu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kepa", |
| "middle": [ |
| "J" |
| ], |
| "last": "Rodriguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Natural Language Engineering", |
| "volume": "26", |
| "issue": "1", |
| "pages": "95--128", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/s1351324919000056" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olga Uryupina, Ron Artstein, Antonella Bristot, Fed- erica Cavicchio, Francesca Delogu, Kepa J. Ro- driguez, and Massimo Poesio. 2019. Annotating a broad range of anaphoric phenomena, in a variety of genres: the ARRAU corpus. Natural Language En- gineering, 26(1):95-128.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "Alice", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "Micah", |
| "middle": [], |
| "last": "Hodosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "67--78", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hock- enmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic in- ference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "What you see is what you get: Visual pronoun coreference resolution in dialogues", |
| "authors": [ |
| { |
| "first": "Xintong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongming", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangqiu", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Yan", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Changshui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "5123--5132", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-1516" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, and Changshui Zhang. 2019. What you see is what you get: Visual pronoun coreference resolu- tion in dialogues. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 5123-5132, Hong Kong, China. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "The GUM corpus: creating multilayer resources in the classroom", |
| "authors": [ |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Zeldes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Language Resources and Evaluation", |
| "volume": "51", |
| "issue": "", |
| "pages": "581--612", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Zeldes. 2017. The GUM corpus: creating mul- tilayer resources in the classroom. Language Re- sources and Evaluation, 51(3):581-612.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "(a) Perspective of participant 1. (b) Perspective of participant 2. (c) Top-down perspective of the Cups corpus scene with ground truth object IDs.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "Participant 1 cannot see the cups circled in blue, whereas participant 2 cannot see the cups circled in red. Person 3 is visible to both participants as a reference point.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "text": "the red one is slightly close to you P1: is that right? P1: to my left from that red mug there is a yellow mug P2: hm... P2: can't see that and now i'm confused DIALOGUE_STATE: B cannot see O-29. P2: describe the second row away from you like you see it P1: only one thing there, a white funny top P2: aha, so it's closer to you than those i call \"the second row\" P1: behind that, there is a yellow, red, white and blue P1: from my left to right P1: yes, that must be it! P1: so what do you see in the \"second row\" from my perspective? P2: i see a red, then space, then white and blue (same as katie's\") P2: no yellow P2: is it on the edge of the", |
| "type_str": "figure" |
| } |
| } |
| } |
| } |