{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:13:50.903912Z"
},
"title": "What Did This Castle Look Like Before? Exploring Referential Relations in Naturally Occurring Multimodal Texts",
"authors": [
{
"first": "Ronja",
"middle": [],
"last": "Utescher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Friedrich-Schiller-University Jena/ Bielefeld University",
"location": {}
},
"email": "r.utescher@uni-bielefeld.de"
},
{
"first": "Sina",
"middle": [],
"last": "Zarrie\u00df",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bielefeld University",
"location": {}
},
"email": "sina.zarriess@uni-bielefeld.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multi-modal texts are abundant and diverse in structure, yet Vision & Language research of these naturally occurring texts has mostly focused on genres that are comparatively light on text, like tweets. In this paper, we discuss the challenges and potential benefits of a V&L framework that explicitly models referential relations, taking Wikipedia articles about buildings as an example. We briefly survey existing related tasks in V&L and propose multi-modal information extraction as a general direction for future research.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Multi-modal texts are abundant and diverse in structure, yet Vision & Language research of these naturally occurring texts has mostly focused on genres that are comparatively light on text, like tweets. In this paper, we discuss the challenges and potential benefits of a V&L framework that explicitly models referential relations, taking Wikipedia articles about buildings as an example. We briefly survey existing related tasks in V&L and propose multi-modal information extraction as a general direction for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many types of naturally occuring texts are inherently multi-modal: articles, social media posts, recipes, encyclopedias, manuals, advertisement, comics, etc. Research on semiotics has long noted that the relationship between the linguistic and visual elements of such texts is extremely complex (Hardy-Vall\u00e9e, 2016) and varies widely across genres (Delin and Bateman, 2002) . To date, research in Vision & Language, however, has mostly focussed on crowdsourced data that simply aligns relatively short snippets of text to images (e.g. Wu et al. (2017) ), sequences of images (e.g. Yang et al. (2019) ) or video (e.g. Pan et al. (2020) ). Here, the text-image relationship is simplified to a substantial, if not artificial, degree.",
"cite_spans": [
{
"start": 348,
"end": 373,
"text": "(Delin and Bateman, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 535,
"end": 551,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF31"
},
{
"start": 581,
"end": 599,
"text": "Yang et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 617,
"end": 634,
"text": "Pan et al. (2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we take a qualitative look at some examples of real-world multi-modal texts, i.e. Wikipedia articles on entities of the type \"building\". We find that many phenomena occuring jointly in these texts are currenlty tackled as separated tasks in V&L or text processing. We argue that a promising direction for future research in V&L is to aim for a joint framework that combines these different phenomena and levels of analysis. We believe that such a framework would be useful in a range of typical NLP applications (such as information extraction) where, currently, state-of-the-art models usually only process the text of a multimodal document. Arnold and Tilton (2020) discuss the motivation for such projects in the context Digital Humanities.",
"cite_spans": [
{
"start": 658,
"end": 682,
"text": "Arnold and Tilton (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The example documents discussed in this paper differ from typical objects of V&L research in many respects, but most importantly in terms of their (i) structure and (ii) semantics or topic. Thus, our building articles are relatively long (i.e. much longer than image paragraphs in Krause et al. (2017) ), contain multiple images and text segments that do not directly relate to any of the images. Concerning their semantics, the documents discuss buildings which constitute a type of named entity. This entity can be depicted visually in very diverse ways (see Section 2) and that can be associated with a rich body of knowledge (e.g. historical events) described in the text. We will show how these two aspects call for a V&L framework that accounts for diverse referential relations whereas in most V&L tasks assume a single, fixed text-image relation.",
"cite_spans": [
{
"start": 281,
"end": 301,
"text": "Krause et al. (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section discusses observations we made by manually exploring a range of building articles from Wikipedia. We leave the empirical consolidation of our findings for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Case Study",
"sec_num": "2"
},
{
"text": "We first look at different structural aspects of referential relations in the Holstentor article, shown in Fig. 1 . The first paragraph contains a general description of the entity, its location, importance etc., and is accompanied by a captioned image on its right side. In parallel to the text, this first image visually introduces the entity of the Holstentor but, other than that, has no further relations to paragraph it is aligned with. This is completely different in the following paragraph which provides a detailed description of the building's appearance and is accompanied by a second image. The second image is visually very similar to the first: it shows the building in its entirety, but from a different perspective. The two opposing perspectives are explicitly referenced and explained in the appearance section, which even uses the same phrasing as the image captions. This paragraph also mentions parts of the gate, such as the conical roofs, which are shown in both images.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 113,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Structure of Referential Relations",
"sec_num": "2.1"
},
{
"text": "The first subsection of the appearance paragraph contains a third image that is spatially aligned with it. This subsection talks about the passageway; the image shows part (of one side) of it. Note that the caption refers to the inscription, which is located in the center of the aforementioned image. This inscription is first mentioned in the paragraph that is aligned with the image. Furthermore, the entire first paragraph passageway lists features of the building that are no longer visible in the contemporary photographs. The second paragraph talks about the other side of the gate, which is not pictured. The most relevant reference to the image of the passageway is contained in the final paragraph which describes its inscription in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of Referential Relations",
"sec_num": "2.1"
},
{
"text": "In sum, this example document shows (among other things) that discourse fragments of very different size (paragraph, sentences, sentence parts, noun phrases) can refer to images as well as their regions, in a way that can be difficult to disentangle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of Referential Relations",
"sec_num": "2.1"
},
{
"text": "While in the previous Section 2.1, we showed different cases for what text fragments can relate to an image, we now discuss examples for how these images relate to the text. In our qualitative case study on buildings, we observed 4 frequent relations -generic (view of the building), related entity, detail/part and event., discussed below. This classification of relation types is not empirically validated and probably far from complete, but we intend to show that there are semantically very distinct types, even in the highly restricted building domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Types of Referential Relations",
"sec_num": "2.2"
},
{
"text": "Generic Generic images show the general appearance of the building and lack a concrete refer-ence to a specific part of the text, as, for instance, the first image in Fig. 1 . These images appear at various points in the document and often contain the year the image was created in their caption. In most cases, they are arranged in chronological fashion, as in Fig. 2a 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 173,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 362,
"end": 369,
"text": "Fig. 2a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Types of Referential Relations",
"sec_num": "2.2"
},
{
"text": "Related Entity Images of related entities show people (or rarely, other buildings) that are relevant to the history of the building under discussion in the article. The entity depicted in the picture is almost always explicitly referenced in the article. For example, Fig. 2b has a named individual that is named in both the caption and article.",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Fig. 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Types of Referential Relations",
"sec_num": "2.2"
},
{
"text": "Part or Detail Images of the parts type show parts or details of the entity in question. This is the most diverse category in terms of the content of the images themselves. What is depicted can range from small details like plaques to major parts of the buildings like a tower, see Fig. 1 . In some cases, these parts are not physically part of the building itself, but instead something that is permanently at the same location (e.g. an organ in a church).",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 288,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Semantic Types of Referential Relations",
"sec_num": "2.2"
},
{
"text": "Event Images in our building domain can also depict an event that takes place at the building in question. In the example in Fig. 2c , the image portrays the event in progress, e.g. a plane flying through the Arc de Triomphe. There is also another, more subtle but also frequent subtype of event-related image which relates to the existence of the architectural object itself and is often linked to its (partial or complete) destruction, construction or its renovation. In these cases, images often show the aftermath of the destruction (as is the case in Fig. 2d ) or the site in the process of renovation/rebuilding. This latter case is particularly challenging in terms of semantic analysis and image-text matching, because it often entails scenarios where the text refers to (parts of) buildings that are no longer present in the image. That means, a model that fully and correctly analyses this scenario would need to make the connection between the text passages talking about the building's destruction and the image of it in a ruined state. Both types of events generally have a clear reference to the article's text, though the length can vary.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Fig. 2c",
"ref_id": null
},
{
"start": 556,
"end": 563,
"text": "Fig. 2d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Types of Referential Relations",
"sec_num": "2.2"
},
{
"text": "Medium In addition to the image content, we observe that the original medium is highly relevant in figuring out its semantic relation. The domain 1 for Fig. 2, see supplementary material of buildings is especially rich in different original media -articles contain digitized images of paintings, sketches, maps, diagrams, post cards or photographs. To a human reader, these are not only understood differently, but themselves contain information.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 186,
"text": "Fig. 2, see supplementary material",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Types of Referential Relations",
"sec_num": "2.2"
},
{
"text": "Inutitively, the Wikipedia articles on buildings that we have discussed in this section constitute a relatively simple type of multi-modal document: each document introduces and discusses a single, depictable, main entity, both in terms of textual and visual elements. Many images have captions that refer to the image's main object. Typically, this object is also referred to explicitly (often with very similar verbiage) in the text, except the tricky case of generic images where it is less clear which specific discourse fragment is referentially related. 2 Moreover, the articles have a clear hierarchical structure and, most of the time, images are positioned next to the paragraph that they are related to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "Formally, however, these examples indicate that there is a lot of structural and semantic complexity found in naturally occuring text-image correspondences. A long range of questions could be asked to capture this correspondence: (i) which text spans refer to an image or image region and which do not refer? (ii) what is the size of the text span that refers to the image (i.e. paragraph, sentence, noun phrase, ...), (iii) which text spans refer to the same image?, (iv) which images refer to the same text span or entity?, (v) how does the text refer to the image?, (vi) how do caption and text relate to each other?, etc. As we will discuss below, most existing V&L task come nowhere near this complexity and, most notably, make a lot of simplications on the text analysis side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "Fixed Text-Image Relation Probably the most widely used image-text data in Vision & Language are so-called image captions that provide a general, neutral description of an image's content, cf. MSCOCO (Lin et al., 2014) or Flickr30k (Plummer et al., 2015) captions . Typical captions consist of a single sentence that directly refers to the image. Other work has looked at more fine-grained referential relations such as referring expressions in the form of noun phrases that identify specific objects in an image (Kazemzadeh et al., 2014) . Work on even more fine-grained resolution that captures object parts is relatively rare, but see (H\u00fcrlimann and Bos, 2016) . A complementary trend is to use texts that are (slightly) longer than image captions, such as image paragraphs that describe the image content in a sequence of sentences (Krause et al., 2017) or dialogues that center on identifying an object in a sequence of turns (de Vries et al., 2017) or an image from a set of images (Das et al., 2017) . All of these datasets are crowdsourced and target a referential task on a specific, fixed level, i.e. image-sentence, object-phrase, object-dialogue, image-dialogue. It is worth noting that, internally, many recent largescale models in V&L process object-phrase relations while encoding image-sentence pairs (Lee et al., 2018; Anderson et al., 2018; Kottur et al., 2018; Tan and Bansal, 2019; Lu et al., 2020) , combining referential relations on two different levels. None of these tasks and models, however, deal with image-text pairs where significant parts of the text have no relation to the visual content, thereby circumventing the need to identify fragments that do indeed stand in a referential relation to a given image.",
"cite_spans": [
{
"start": 200,
"end": 218,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 513,
"end": 538,
"text": "(Kazemzadeh et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 638,
"end": 663,
"text": "(H\u00fcrlimann and Bos, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 836,
"end": 857,
"text": "(Krause et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 931,
"end": 954,
"text": "(de Vries et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 988,
"end": 1006,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 1317,
"end": 1335,
"text": "(Lee et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 1336,
"end": 1358,
"text": "Anderson et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 1359,
"end": 1379,
"text": "Kottur et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 1380,
"end": 1401,
"text": "Tan and Bansal, 2019;",
"ref_id": "BIBREF28"
},
{
"start": 1402,
"end": 1418,
"text": "Lu et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work in Vision & Language",
"sec_num": "3"
},
{
"text": "Diverse Referential Relations There is some initial work on datasets and tasks that capture more varied semantic or discursive relations between image and text: Kruk et al. (2019) tag the image intent in multi-modal Twitter posts, distinguishing between intents like 'provocative', 'expressive' or 'promotive'. Their annotations assign a global label to the image which captures the relation to the text as a whole. This goes beyond literal image descriptions, but still does not capture structurally diverse referential relations. Alikhani et al. (2019) investigate text-image coherence in recipe texts that describe sequences of consecutive actions in a cooking context. Structurally, the recipe's text is already segmented, with an image aligned to each step. Alikhani et al. (2019) distinguish image-text relations with respect to which part of the action is shown and whether all entities affected by an action are visible/mentioned in the text. Both papers work on naturally occuring text, though these are still relatively short (tweets and 1-2 sentences per step respectively). Neither task faces the segmentation problem to a degree that is similar to the complex structure we encountered in multi-modal Wikipedia articles. By contrast, in our building example, the rhetorical purpose and authorial intent of each picture seems to be more or less uniform. That is, images are included to illustrate (as opposed to being provocative or expressive). Likewise, the semiotics of these images are overwhelmingly parallel to the content of the text. Muraoka et al. (2020) work with a more coarsegrained and somewhat simplified version of the problem discussed in this paper. Their task is to correctly predict the physical alignment of images and sections in Wikipedia articles. This approach utilizes the inherent document structure 3 , however our observations (see Section 2) call into question the presupposition that alignment in layout entails alignment in content. 
A similar text-image matching task is discussed in Hessel et al. (2019) , where the authors seek to match the images in a document to the most relevant sentences in it (leaving out the captions). Their model is trained on collections of sentences and images from the same documents; or different documents, for instances of non-relatedness. This information is used at test time to estimate the individual links between the sentences and images of a given document. Hessel et al. (2019) is highly relevant to the concerns discussed in this paper because it has some success in grappling with the comparatively large amounts of text in the Wikipedia article genre.",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "Kruk et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 532,
"end": 554,
"text": "Alikhani et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 763,
"end": 785,
"text": "Alikhani et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 1553,
"end": 1574,
"text": "Muraoka et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 2026,
"end": 2046,
"text": "Hessel et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 2441,
"end": 2461,
"text": "Hessel et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work in Vision & Language",
"sec_num": "3"
},
{
"text": "Many of the phenomena we have discussed in Section 2 have long been researched in NLP models that extract information from text only. Prominently, text-oriented NLP has long been interested in detecting and processing entities, i.e. Named Entity Recognition (NER) is a very well-known NLP task that is useful in a range of applications . The most standard named entity categories are person, location and organisation, however there is a number of NER tools with varying tag sets. One way to model texts of the type illustrated in Fig. 1 would be to move towards multi-modal NER models that identify mentions of entities in a text and link them to corresponding images or image regions, cf. Asgari-Chenaghlu et al. (2020) for a similar proposal. Event detection is another text-based NLP task that has been approached with the use of CNNs (Nguyen and Grishman, 2015) and, more recently, attention mechanisms (Liu et al., 2017b) . Multimodal event detection could be useful to capture referential relations as shown in Fig. 2c . Finally, models that represent or encode relations between entities (Lin et al., 2016; Zhang et al., 2019) in a multi-modal text would be an extremely useful tool in our setting. As a step towards processing comparatively large chunks of text, discourse segmentation (Braud et al. (2017) , Iruskieta et al. (2019) ) splits documents into elementary Discourse Units. Parsing these texts as a discourse is also a topic of ongoing research (Liu et al., 2017a; Li et al., 2016) .",
"cite_spans": [
{
"start": 908,
"end": 927,
"text": "(Liu et al., 2017b)",
"ref_id": "BIBREF21"
},
{
"start": 1096,
"end": 1114,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 1115,
"end": 1134,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 1295,
"end": 1315,
"text": "(Braud et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 1318,
"end": 1341,
"text": "Iruskieta et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 1465,
"end": 1484,
"text": "(Liu et al., 2017a;",
"ref_id": "BIBREF20"
},
{
"start": 1485,
"end": 1501,
"text": "Li et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 531,
"end": 537,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 1018,
"end": 1025,
"text": "Fig. 2c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Towards Multi-Modal Information Extraction",
"sec_num": "4"
},
{
"text": "To the best of our knowledge, research in Vision & Language has hardly been inspired by these classical, entity-centric task in text processing. This general impression is corroborated by the very comprehensive V&L survey of Mogadala et al. (2019) .",
"cite_spans": [
{
"start": 225,
"end": 247,
"text": "Mogadala et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Towards Multi-Modal Information Extraction",
"sec_num": "4"
},
{
"text": "In this paper, we have discussed some complexities of referential relations that arise in natually occuring multi-modal texts. Solving at least some of these requires the use of far more involved text processing techniques than is common for widespread V&L tasks such as image captioning or visual dialogue. Our domain -architectural sites -narrows this potentially sprawling problem somewhat. While every building is an entity unto itself, there are common features that are shared by large subsets. We argue that Wikipedia articles are a valuable source of raw data for multi-modal document analysis, since they constitute a genre of document that is freely available in large quantities and across languages. 4 However, it is questionable whether models that identify the type of image-text relations discussed in this paper can be developed without hand-annotated data. In terms of uitlity and intended audience, it may be worth considering work like Arnold and Tilton (2020) , whose aim is to add robust, searchable annotations to existing collections of historical images. This leads the authors to develop a model that automatically labels images using image segmentation and a pre-defined ontology. We believe that moving towards such more realistic texts in V&L is interesting both from a linguistic, and from an application-oriented perspective, i.e. for multi-modal information extraction. ",
"cite_spans": [
{
"start": 955,
"end": 979,
"text": "Arnold and Tilton (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "They could be seen as being linked to the text as a whole, but this is not particularly informative for information extraction or similar tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "and consequently save on expensive manual annotation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some tasks and datasets may also benefit from existing knowledge bases such as Wikidata(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by a grant from the Federal Ministry of Education and Research (BMBF, grant No. 01UG2120A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "CITE: A corpus of image-text discourse relations",
"authors": [
{
"first": "Malihe",
"middle": [],
"last": "Alikhani",
"suffix": ""
},
{
"first": "Sreyasi",
"middle": [
"Nag"
],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Stone",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "570--575",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Malihe Alikhani, Sreyasi Nag Chowdhury, Gerard de Melo, and Matthew Stone. 2019. CITE: A cor- pus of image-text discourse relations. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 570-575, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "6077--6086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 6077-6086.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching historic photography with structured data using image region segmentation",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Arnold",
"suffix": ""
},
{
"first": "Lauren",
"middle": [],
"last": "Tilton",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st International Workshop on Artificial Intelligence for Historical Image Enrichment and Access",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Arnold and Lauren Tilton. 2020. Enriching his- toric photography with structured data using image region segmentation. In Proceedings of the 1st Inter- national Workshop on Artificial Intelligence for His- torical Image Enrichment and Access, pages 1-10, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Balafar, and Cina Motamed. 2020. A multimodal deep learning approach for named entity recognition from social media",
"authors": [
{
"first": "Meysam",
"middle": [],
"last": "Asgari-Chenaghlu",
"suffix": ""
},
{
"first": "M",
"middle": [
"Reza"
],
"last": "Feizi-Derakhshi",
"suffix": ""
},
{
"first": "Leili",
"middle": [],
"last": "Farzinvash",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Balafar",
"suffix": ""
},
{
"first": "Cina",
"middle": [],
"last": "Motamed",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meysam Asgari-Chenaghlu, M. Reza Feizi-Derakhshi, Leili Farzinvash, M. A. Balafar, and Cina Motamed. 2020. A multimodal deep learning approach for named entity recognition from social media.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cross-lingual and cross-domain discourse segmentation of entire documents",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "237--243",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2037"
]
},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Oph\u00e9lie Lacroix, and Anders S\u00f8gaard. 2017. Cross-lingual and cross-domain discourse segmentation of entire documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 237-243, Vancouver, Canada. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning cooperative visual dialog agents with deep reinforcement learning",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"M",
"F"
],
"last": "Moura",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2951--2960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Satwik Kottur, Jos\u00e9 MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learn- ing. In Proceedings of the IEEE international con- ference on computer vision, pages 2951-2960.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Describing and critiquing multimodal documents",
"authors": [
{
"first": "Judy",
"middle": [],
"last": "Delin",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bateman",
"suffix": ""
}
],
"year": 2002,
"venue": "Document Design",
"volume": "3",
"issue": "",
"pages": "140--155",
"other_ids": {
"DOI": [
"10.1075/dd.3.2.05del"
]
},
"num": null,
"urls": [],
"raw_text": "Judy Delin and John Bateman. 2002. Describing and critiquing multimodal documents. Document De- sign, 3:140-155.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Text and image: a critical introduction to the visual/verbal divide by John A. Bateman",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Hardy-Vall\u00e9e",
"suffix": ""
}
],
"year": 2016,
"venue": "Visual Studies",
"volume": "31",
"issue": "",
"pages": "366--368",
"other_ids": {
"DOI": [
"10.1080/1472586X.2016.1260358"
]
},
"num": null,
"urls": [],
"raw_text": "Michel Hardy-Vall\u00e9e. 2016. Text and image: a criti- cal introduction to the visual/verbal divide by john a. bateman. Visual Studies, 31:366-368.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised discovery of multimodal links in multiimage, multi-sentence documents",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Hessel",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2034--2045",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1210"
]
},
"num": null,
"urls": [],
"raw_text": "Jack Hessel, Lillian Lee, and David Mimno. 2019. Un- supervised discovery of multimodal links in multi- image, multi-sentence documents. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2034-2045, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combining lexical and spatial knowledge to predict spatial relations between objects in images",
"authors": [
{
"first": "Manuela",
"middle": [],
"last": "H\u00fcrlimann",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 5th Workshop on Vision and Language",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuela H\u00fcrlimann and Johan Bos. 2016. Combining lexical and spatial knowledge to predict spatial re- lations between objects in images. In Proceedings of the 5th Workshop on Vision and Language, pages 10-18.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multilingual segmentation based on neural networks and pre-trained word embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Bengoetxea",
"suffix": ""
},
{
"first": "Aitziber Atutxa",
"middle": [],
"last": "Salazar",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "Diaz De Ilarraza",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2716"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Iruskieta, Kepa Bengoetxea, Aitziber Atutxa Salazar, and Arantza Diaz de Ilarraza. 2019. Multilingual segmentation based on neural networks and pre-trained word embeddings. In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 125-132, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ReferItGame: Referring to objects in photographs of natural scenes",
"authors": [
{
"first": "Sahar",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Matten",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Berg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "787--798",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1086"
]
},
"num": null,
"urls": [],
"raw_text": "Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787-798, Doha, Qatar. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Visual coreference resolution in visual dialog using neural module networks",
"authors": [
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"M",
"F"
],
"last": "Moura",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "153--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satwik Kottur, Jos\u00e9 MF Moura, Devi Parikh, Dhruv Ba- tra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module net- works. In Proceedings of the European Conference on Computer Vision (ECCV), pages 153-169.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A hierarchical approach for generating descriptive image paragraphs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Ranjay",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "317--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for gener- ating descriptive image paragraphs. In Proceedings of the IEEE conference on computer vision and pat- tern recognition, pages 317-325.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Integrating text and image: Determining multimodal document intent in Instagram posts",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kruk",
"suffix": ""
},
{
"first": "Jonah",
"middle": [],
"last": "Lubin",
"suffix": ""
},
{
"first": "Karan",
"middle": [],
"last": "Sikka",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Ajay",
"middle": [],
"last": "Divakaran",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4622--4632",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1469"
]
},
"num": null,
"urls": [],
"raw_text": "Julia Kruk, Jonah Lubin, Karan Sikka, Xiao Lin, Dan Jurafsky, and Ajay Divakaran. 2019. Integrating text and image: Determining multimodal document intent in Instagram posts. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4622-4632, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stacked cross attention for image-text matching",
"authors": [
{
"first": "Kuang-Huei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Houdong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "201--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. In Proceedings of the European Conference on Computer Vision (ECCV), pages 201-216.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A survey on deep learning for named entity recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "",
"issue": "",
"pages": "1--1",
"other_ids": {
"DOI": [
"10.1109/TKDE.2020.2981314"
]
},
"num": null,
"urls": [],
"raw_text": "J. Li, A. Sun, J. Han, and C. Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering, pages 1-1.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Discourse parsing with attention-based hierarchical neural networks",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tianshi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "362--371",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural net- works. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 362-371, Austin, Texas. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "European conference on computer vision",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {
"DOI": [
"10.1007/978-3-319-10602-1_48"
]
},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European confer- ence on computer vision, pages 740-755. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2124--2133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 2124-2133.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning character-level compositionality with visual features",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chieh",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2059--2068",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1188"
]
},
"num": null,
"urls": [],
"raw_text": "Frederick Liu, Han Lu, Chieh Lo, and Graham Neubig. 2017a. Learning character-level compositionality with visual features. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2059- 2068, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Exploiting argument information to improve event detection via supervised attention mechanisms",
"authors": [
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1789--1798",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1164"
]
},
"num": null,
"urls": [],
"raw_text": "Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017b. Exploiting argument information to im- prove event detection via supervised attention mech- anisms. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789-1798, Van- couver, Canada. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "12-in-1: Multi-task vision and language representation learning",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "10437--10446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition, pages 10437- 10446.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Trends in integration of vision and language research: A survey of tasks, datasets, and methods",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Mogadala",
"suffix": ""
},
{
"first": "Marimuthu",
"middle": [],
"last": "Kalimuthu",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.09358"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Mogadala, Marimuthu Kalimuthu, and Dietrich Klakow. 2019. Trends in integration of vision and language research: A survey of tasks, datasets, and methods. arXiv preprint arXiv:1907.09358.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Image position prediction in multimodal documents",
"authors": [
{
"first": "Masayasu",
"middle": [],
"last": "Muraoka",
"suffix": ""
},
{
"first": "Ryosuke",
"middle": [],
"last": "Kohita",
"suffix": ""
},
{
"first": "Etsuko",
"middle": [],
"last": "Ishii",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4265--4274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masayasu Muraoka, Ryosuke Kohita, and Etsuko Ishii. 2020. Image position prediction in multimodal doc- uments. In Proceedings of the 12th Language Re- sources and Evaluation Conference, pages 4265- 4274, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Event detection and domain adaptation with convolutional neural networks",
"authors": [
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "365--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365-371.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Auto-captions on gif: A largescale video-sentence dataset for vision-language pretraining",
"authors": [
{
"first": "Yingwei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yehao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianjie",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.02375"
]
},
"num": null,
"urls": [],
"raw_text": "Yingwei Pan, Yehao Li, Jianjie Luo, Jun Xu, Ting Yao, and Tao Mei. 2020. Auto-captions on gif: A large- scale video-sentence dataset for vision-language pre- training. arXiv preprint arXiv:2007.02375.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models",
"authors": [
{
"first": "Bryan",
"middle": [
"A"
],
"last": "Plummer",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"M"
],
"last": "Cervantes",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"C"
],
"last": "Caicedo",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Lazebnik",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2641--2649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image- to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Lxmert: Learning cross-modality encoder representations from transformers",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.07490"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from trans- formers. arXiv preprint arXiv:1908.07490.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Wikidata: a free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Communications of the ACM",
"volume": "57",
"issue": "10",
"pages": "78--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Commu- nications of the ACM, 57(10):78-85.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Guesswhat?! visual object discovery through multi-modal dialogue",
"authors": [
{
"first": "Harm",
"middle": [],
"last": "de Vries",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Strub",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunhua",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Dick",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "van den Hengel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "163",
"issue": "",
"pages": "21--40",
"other_ids": {
"DOI": [
"10.1016/j.cviu.2017.05.001"
]
},
"num": null,
"urls": [],
"raw_text": "Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, An- thony Dick, and Anton van den Hengel. 2017. Vi- sual question answering: A survey of methods and datasets. Computer Vision and Image Understand- ing, 163:21-40. Language in Vision.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Knowledgeable storyteller: A commonsense-driven generative model for visual storytelling",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)",
"volume": "",
"issue": "",
"pages": "5356--5362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yang, Fuli Luo, Peng Chen, Lei Li, Zhiyi Yin, Xiaodong He, and Xu Sun. 2019. Knowledge- able storyteller: A commonsense-driven generative model for visual storytelling. In Proceedings of the Twenty-Eighth International Joint Conference on Ar- tificial Intelligence (IJCAI-19), pages 5356-5362.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Ernie: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.07129"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: En- hanced language representation with informative en- tities. arXiv preprint arXiv:1905.07129.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "First few paragraphs of the Holstentor article. Colour highlighting added for illustrating (approximate) text-image correspondences. Best viewed in colour."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "(a) Generic: Two chronologically arranged generic images in the Schwerin Castle article. (b) Related Entity: Portrait of an aristocrat in the article of Edinburgh Castle. (c) Event: Flight through the Arc de Triomphe. (d) Event: Destruction of the Zwinger, Dresden. Figure 2: Examples of Semantic Referential Relations"
}
}
}
}