{
"paper_id": "Q14-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:10.399433Z"
},
"title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "pyoung2@illinois.edu"
},
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "aylai2@illinois.edu"
},
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "mhodosh2@illinois.edu"
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "juliahmr@illinois.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.",
"pdf_parse": {
"paper_id": "Q14-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ability to draw inferences from text is a prerequisite for language understanding. These inferences are what makes it possible for even brief descriptions of everyday scenes to evoke rich mental images. For example, we would expect an image of people shopping in a supermarket to depict aisles of produce or other goods, and we would expect most of these people to be customers who are either standing or walking around. But such inferences require a great deal of commonsense world knowledge. Standard distributional approaches to lexical similarity (Section 2.1) are very effective at identifying which words are related to the same topic, and can provide useful features for systems that perform semantic inferences (Mirkin et al., 2009) , but are not suited to capture precise entailments between complex expressions. In this paper, we propose a novel approach for the automatic acquisition of denotational similarities between descriptions of everyday situations (Section 2). We define the (visual) denotation of a linguistic expression as the set of images it describes. We create a corpus of images of everyday activities (each paired with multiple captions; Section 3) to construct a large scale visual denotation graph which associates image descriptions with their denotations (Section 4). The algorithm that constructs the denotation graph uses purely syntactic and lexical rules to produce simpler captions (which have a larger denotation). But since each image is originally associated with several captions, the graph can also capture similarities between syntactically and lexically unrelated descriptions. 
We apply these similarities to two different tasks (Sections 6 and 7): an approximate entailment recognition task for our domain, where the goal is to decide whether the hypothesis (a brief image caption) refers to the same image as the premises (four longer captions), and the recently introduced Semantic Textual Similarity task (Agirre et al., 2012) , which can be viewed as a graded (rather than binary) version of paraphrase detection. Both tasks require semantic inference, and our results indicate that denotational similarities are at least as effective as standard approaches to similarity. Our code and data set, as well as the denotation graph itself and the lexical similarities we define over it, are available for research purposes at http://nlp.cs.illinois.edu/Denotation.html.",
"cite_spans": [
{
"start": 723,
"end": 744,
"text": "(Mirkin et al., 2009)",
"ref_id": "BIBREF25"
},
{
"start": 999,
"end": 1007,
"text": "(visual)",
"ref_id": null
},
{
"start": 1957,
"end": 1978,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The distributional hypothesis posits that linguistic expressions that appear in similar contexts have a Gray haired man in black suit and yellow tie working in a financial environment. A graying man in a suit is perplexed at a business meeting. A businessman in a yellow tie gives a frustrated look. A man in a yellow tie is rubbing the back of his neck. A man with a yellow tie looks concerned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Similarities",
"sec_num": "2.1"
},
{
"text": "A butcher cutting an animal to sell. A green-shirted man with a butcher's apron uses a knife to carve out the hanging carcass of a cow. A man at work, butchering a cow. A man in a green t-shirt and long tan apron hacks apart the carcass of a cow while another man hoses away the blood. Two men work in a butcher shop; one cuts the meat from a butchered cow, while the other hoses the floor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Similarities",
"sec_num": "2.1"
},
{
"text": "Figure 1: Two images from our data set and their five captions similar meaning (Harris, 1954) . This has led to the definition of vector-based distributional similarities, which represent each word w as a vector w derived from counts of w's co-occurrence with other words. These vectors can be used directly to compute the lexical similarities of words, either via the cosine of the angle between them, or via other, more complex metrics (Lin, 1998) . More recently, asymmetric similarities have been proposed as more suitable for semantic inference tasks such as entailment (Weeds and Weir, 2003; Szpektor and Dagan, 2008; Clarke, 2009; Kotlerman et al., 2010) . Distributional word vectors can also be used to define the compositional similarity of longer strings (Mitchell and Lapata, 2010) . To compute the similarity of two strings, the lexical vectors of the words in each string are first combined into a single vector (e.g. by element-wise addition or multiplication), and then an appropriate vector similarity (e.g. cosine) is applied to the resulting pair of vectors.",
"cite_spans": [
{
"start": 79,
"end": 93,
"text": "(Harris, 1954)",
"ref_id": "BIBREF14"
},
{
"start": 438,
"end": 449,
"text": "(Lin, 1998)",
"ref_id": "BIBREF22"
},
{
"start": 575,
"end": 597,
"text": "(Weeds and Weir, 2003;",
"ref_id": "BIBREF35"
},
{
"start": 598,
"end": 623,
"text": "Szpektor and Dagan, 2008;",
"ref_id": "BIBREF33"
},
{
"start": 624,
"end": 637,
"text": "Clarke, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 638,
"end": 661,
"text": "Kotlerman et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 766,
"end": 793,
"text": "(Mitchell and Lapata, 2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Similarities",
"sec_num": "2.1"
},
{
"text": "Our approach is inspired by truth-conditional semantic theories in which the denotation of a declarative sentence is assumed to be the set of all situations or possible worlds in which the sentence is true (Montague, 1974; Dowty et al., 1981; Barwise and Perry, 1980) . Restricting our attention to visually descriptive sentences, i.e. non-negative, episodic (Carlson, 2005) sentences that can be used to describe an image (Figure 1 ), we propose to instantiate the abstract notions of possible worlds or situations with concrete sets of images. The interpretation function \u2022 maps sentences to their visual denotations s , which is the set of images i \u2208 U s \u2286 U in a 'universe' of images U that s describes:",
"cite_spans": [
{
"start": 206,
"end": 222,
"text": "(Montague, 1974;",
"ref_id": "BIBREF28"
},
{
"start": 223,
"end": 242,
"text": "Dowty et al., 1981;",
"ref_id": "BIBREF10"
},
{
"start": 243,
"end": 267,
"text": "Barwise and Perry, 1980)",
"ref_id": "BIBREF2"
},
{
"start": 359,
"end": 374,
"text": "(Carlson, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 423,
"end": 432,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visual Denotations",
"sec_num": "2.2"
},
{
"text": "s = {i \u2208 U | s is a truthful description of i} (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Denotations",
"sec_num": "2.2"
},
{
"text": "Similarly, we map nouns and noun phrases to the set of images that depict the objects they describe, and verbs and verb phrases to the set of images that depict the events they describe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Denotations",
"sec_num": "2.2"
},
{
"text": "Denotations induce a partial ordering over descriptions: if s (e.g. \"a poodle runs on the beach\") entails a description s (e.g. \"a dog runs\"), its denotation is a subset of the denotation of s ( s \u2286 s ), and we say that s subsumes the more specific s (s s). In our domain of descriptive sentences, we can obtain more generic descriptions by simple syntactic and lexical operations \u03c9 \u2208 O \u2282 S \u00d7 S that preserve upward entailment, so that if \u03c9(s) = s , s \u2286 s . We consider three types of operations: the removal of optional material (e.g PPs like on the beach), the extraction of simpler constituents (NPs, VPs, or simple Ss), and lexical substitutions of nouns by their hypernyms (poodle \u2192 dog). These operations are akin to the atomic edits of MacCartney and Manning (2008)'s NatLog system, and allow us to construct large subsumption hierarchies over image descriptions, which we call denotation graphs. Given a set of (upward entailment-preserving) operations O \u2282 S \u00d7 S, the denotation graph DG = E, V of a set of images I and a set of strings S represents a subsumption hierarchy in which each node V = s, s corresponds to a string s \u2208 S and its denotation s \u2286 I. Directed edges e = (s, s ) \u2208 E \u2286 V \u00d7 V indicate a subsumption relation s s between a more generic expression s and its child s . An edge from s to s exists if there is an operation \u03c9 \u2208 O that reduces the string s to s (i.e. \u03c9(s ) = s) and its inverse \u03c9 \u22121 expands the string s to s (i.e. \u03c9 \u22121 (s) = s ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotation Graphs",
"sec_num": "2.3"
},
{
"text": "Given a denotation graph over N images, we estimate the denotational probability of an expression s with a denotation of size | s | as P (s) = | s |/N , and the joint probability of two expressions analogously as P (s, s ) = | s \u2229 s |/N . The conditional probability P (s | s ) indicates how likely s is to be true when s holds, and yields a simple directed denotational similarity. The (normalized) pointwise mutual information (PMI) (Church and Hanks, 1990 ) defines a symmetric similarity:",
"cite_spans": [
{
"start": 435,
"end": 458,
"text": "(Church and Hanks, 1990",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Similarities",
"sec_num": "2.4"
},
{
"text": "nPMI (s, s ) = log P (s,s ) P (s)P (s ) \u2212 log(P (s, s ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Similarities",
"sec_num": "2.4"
},
{
"text": "We set P (s|s) = nPMI (s, s) = 1, and, if s or s are not in the denotation graph, nPMI (s, s ) = P (s, s ) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Similarities",
"sec_num": "2.4"
},
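{
"text": "The denotational probabilities above are simple set computations over the graph. As a minimal sketch (not part of the paper; the function names are illustrative), they can be implemented directly over the denotation sets:

```python
import math

def cond_prob(den_s, den_t):
    # Directed similarity P(s | t): how likely s is to hold when t holds,
    # where den_s and den_t are the denotations (sets of image ids) of s and t.
    return len(den_s & den_t) / len(den_t) if den_t else 0.0

def npmi(den_s, den_t, n_images):
    # Normalized PMI: log(P(s,t) / (P(s)P(t))) / (-log P(s,t)),
    # with the conventions from the text for identical and unseen pairs.
    p_s = len(den_s) / n_images
    p_t = len(den_t) / n_images
    p_st = len(den_s & den_t) / n_images
    if p_st == 0.0:
        return 0.0  # missing expressions or disjoint denotations
    if den_s == den_t:
        return 1.0  # nPMI(s, s) = 1 by convention
    return math.log(p_st / (p_s * p_t)) / (-math.log(p_st))
```

Two expressions with identical denotations get nPMI 1, and expressions whose denotations never overlap get 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Similarities",
"sec_num": "2.4"
},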
{
"text": "Our data set ( Figure 1 ) consists of 31,783 photographs of everyday activities, events and scenes (all harvested from Flickr) and 158,915 captions (obtained via crowdsourcing). It contains and extends Hodosh et al. (2013) 's corpus of 8,092 images. We followed Hodosh et al. (2013) 's approach to collect images. We also use their annotation guidelines, and use similar quality controls to correct spelling mistakes, eliminate ungrammatical or non-descriptive sentences. Almost all of the images that we add to those collected by Hodosh et al. (2013) have been made available under a Creative Commons license. Each image is described independently by five annotators who are not familiar with the specific entities and circumstances depicted in them, resulting in captions such as \"Three people setting up a tent\", rather than the kind of captions people provide for their own images (\"Our trip to the Olympic Peninsula\"). Moreover, different annotators use different levels of specificity, from describing the overall situation (performing a musical piece) to specific actions (bowing on a violin). This variety of descriptions associated with the same image is what allows us to induce denotational similari-ties between expressions that are not trivially related by syntactic rewrite rules.",
"cite_spans": [
{
"start": 202,
"end": 222,
"text": "Hodosh et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 262,
"end": 282,
"text": "Hodosh et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 531,
"end": 551,
"text": "Hodosh et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our Data Set",
"sec_num": "3"
},
{
"text": "The construction of the denotation graph consists of the following steps: preprocessing and linguistic analysis of the captions, identification of applicable transformations, and generation of the graph itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Denotation Graph",
"sec_num": "4"
},
{
"text": "Preprocessing and Linguistic Analysis We use the Linux spell checker, the OpenNLP tokenizer, POS tagger and chunker (http://opennlp. apache.org), and the Malt parser (Nivre et al., 2006) to analyze the captions. Since the vocabulary of our corpus differs significantly from the data these tools are trained on, we resort to a number of heuristics to improve the analyses they provide. Since some heuristics require us to identify different entity types, we developed a lexicon of the most common entity types in our domain (people, clothing, bodily appearance (e.g. hair or body parts), containers of liquids, food items and vehicles).",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Denotation Graph",
"sec_num": "4"
},
{
"text": "After spell-checking, we normalize certain words and compounds with several spelling variations, e.g. barbecue (barbeque, BBQ), gray (grey), waterski (water ski), brown-haired (brown haired), and tokenize the captions using the OpenNLP tokenizer. The OpenNLP POS tagger makes a number of systematic errors on our corpus (e.g. mistagging main verbs as nouns). Since these errors are highly systematic, we are able to correct them automatically by applying deterministic rules (e.g. climbs is never a noun in our corpus, stand is a noun if it is preceded by vegetable but a verb when preceded by a noun that refers to people). These fixes apply to 27,784 (17% of the 158,915 image captions). Next, we use the OpenNLP chunker to create a shallow parse. Fixing its (systematic) errors affects 28,587 captions. We then analyze the structure of each NP chunk to identify heads, determiners and prenominal modifiers. The head may include more than a single token if WordNet (or our hypernym lexicon, described below) contains a corresponding entry (e.g. little girl). Determiners include phrases such as a couple or a few. Although we use the Malt parser (Nivre et al., 2006) to identify subjectverb-object dependencies, we have found it more accurate to develop deterministic heuristics and lexi-cal rules to identify the boundaries of complex (e.g. conjoined) NPs, allowing us to treat \"a man with red shoes and a white hat\" as an NP followed by a single PP, but \"a man with red shoes and a white-haired woman\" as two NPs, and to transform e.g. \"standing by a man and a woman\" into \"standing\" and not \"standing and a woman\" when dropping the PP.",
"cite_spans": [
{
"start": 1148,
"end": 1168,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing the Denotation Graph",
"sec_num": "4"
},
{
"text": "We use our corpus and Word-Net to construct a hypernym lexicon that allows us to replace head nouns with more generic terms. We only consider hypernyms that occur themselves with sufficient frequency in the original captions (replacing \"adult\" with \"person\", but not with \"organism\"). Since the language in our corpus is very concrete, each noun tends to have a single sense, allowing us to always replace it with the same hypernyms. 1 But since WordNet provides us with multiple senses for most nouns, we first have to identify which sense is used in our corpus. To do this, we use the heuristic cross-caption coreference algorithm of Hodosh et al. (2010) to identify coreferent NP chunks among the original five captions of each image. 2 For each ambiguous head noun, we consider every non-singleton coreference chains it appears in, and reduce its synsets to those that stand in a hypernym-hyponym relation with at least one other head noun in the chain. Finally, we apply a greedy majority voting algorithm to iteratively narrow down each term's senses to a single synset that is compatible with the largest number of coreference chains it occurs in.",
"cite_spans": [
{
"start": 636,
"end": 656,
"text": "Hodosh et al. (2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Lexicon",
"sec_num": null
},
{
"text": "Caption Normalization In order to increase the recall of the denotations we capture, we drop all punctuation marks, and lemmatize nouns, verbs, and adjectives that end in \"-ed\" or \"-ing\" before gener- 1 Descriptions of people that refer to both age and gender (e.g. \"man\") can have multiple distinct hypernyms (\"adult\"/'\"male\"). Because our annotators never describe young children or babies as \"persons\", we only allow terms that are likely to describe adults or teenagers (including occupations) to be replaced by the term \"person\". This means that the term \"girl\" has two senses: a female child (the default) or a younger woman. We distinguish the two senses in a preprocessing step: if the other captions of the same image do not mention children, but refer to teenaged or adult women, we assign girl the woman-sense. Some nouns that end in -er (e.g. \"diner\", \"pitcher\" also violate our monosemy assumption.",
"cite_spans": [
{
"start": 201,
"end": 202,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Lexicon",
"sec_num": null
},
{
"text": "2 Coreference resolution has also been used for word sense disambiguation by Preiss (2001) and Hu and Liu (2011) .",
"cite_spans": [
{
"start": 77,
"end": 90,
"text": "Preiss (2001)",
"ref_id": "BIBREF31"
},
{
"start": 95,
"end": 112,
"text": "Hu and Liu (2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Lexicon",
"sec_num": null
},
{
"text": "ating the denotation graph. In order to distinguish between frequently occurring homonyms where the noun is unrelated to the verb, we change all forms of the verb dress to dressed, all forms of the verb stand to standing and all forms of the verb park to parking. Finally, we drop sentence-initial there/here/this is/are (as in there is a dog splashing in the water), and normalize the expressions in X and dressed (up) in X (where X is an article of clothing or a color) to wear X. We reduce plural determiners to {two, three, some}, and drop singular determiners except for no.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Lexicon",
"sec_num": null
},
{
"text": "The denotation graph contains a directed edge from s to s if there is a rule \u03c9 that reduces s to s, with an inverse \u03c9 \u22121 that expands s to s . Reduction rules can drop optional material, extract simpler constituents, or perform lexical substitutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Drop Pre-Nominal Modifiers: \"red shirt\" \u2192 \"shirt\" In an NP of the form \"X Y Z\", where X and Y both modify the head Z, we only allow X and Y to be dropped separately if \"X Z\" and \"Y Z\" both occur elsewhere in the corpus. Since \"white building\" and \"stone building\" occur elsewhere in the corpus, we generate both \"white building\" and \"stone building\" from the NP \"white stone building\". But since \"ice player\" is not used, we replace \"ice hockey player\" only with \"hockey player\" (which does occur) and then \"player\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
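{
"text": "The attestation check described above can be sketched as follows (illustrative code, not from the paper; 'attested' stands in for a lookup over NPs observed in the corpus):

```python
def drop_prenominal_modifiers(np_tokens, attested):
    # np_tokens: an NP as [modifier, ..., head]; a modifier may be dropped
    # only if the reduced NP is attested elsewhere in the corpus
    # (a bare head is always allowed).
    *mods, head = np_tokens
    reductions = []
    for i in range(len(mods)):
        reduced = mods[:i] + mods[i + 1:] + [head]
        if len(reduced) == 1 or ' '.join(reduced) in attested:
            reductions.append(' '.join(reduced))
    return reductions
```

With attested = {'white building', 'stone building'}, the NP ['white', 'stone', 'building'] reduces to both 'stone building' and 'white building', while ['ice', 'hockey', 'player'] with attested = {'hockey player'} reduces only to 'hockey player', mirroring the examples above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},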
{
"text": "Drop Other Modifiers \"run quickly\" \u2192 \"run\" We drop ADVP chunks and adverbs in VP chunks. We also allow a prepositional phrase (a preposition followed by a possibly conjoined NP chunk) to be dropped if the preposition is locational (\"in\", \"on\", \"above\", etc.), directional (\"towards\", \"through\", \"across\", etc.), or instrumental (\"by\", \"for\", \"with\"). Similarly, we also allow the dropping of all \"wear NP\" constructions. Since the distinction between particles and prepositions is often difficult, we also use a predefined list of phrasal verbs that commonly occur in our corpus to identify constructions such as \"climb up a mountain\", which is transformed into \"climb a mountain\" or \"walk down a street\", which is transformed into \"walk\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Replace Nouns by Hypernyms: \"red shirt\" \u2192 \"red clothing\" We iteratively use our hypernym GENERATEGRAPH():",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Q, Captions, Rules \u2190 \u2205 for all c \u2208 ImageCorpus do Rules(c) \u2190 GenerateRules(sc) pushAll(Q, {c} \u00d7 RootNodes(sc, Rules(c))) while \u00acempty(Q) do (c, s) \u2190 pop(Q) Captions(s) \u2190 Captions(s) \u222a {c} if |Captions(s)| = 2 then for all c \u2208 Captions(s) do pushAll(Q, {c } \u00d7 Children(s, Rules(c ))) else if |Captions(s)| > 2 then pushAll(Q, {c} \u00d7 Children(s, Rules(c)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Figure 2: Generating the graph lexicon to make head nouns more generic. We only allow head nouns to be replaced by their hypernyms if any age based modifiers have already been removed: \"toddler\" can be replaced with \"child\", but not \"older toddler\" with \"older child\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Handle Partitive NPs: cup of tea \u2192 \"cup\", \"tea\" In most partitive NP 1 -of-NP 2 constructions (\"cup of tea\", \"a team of football players\") the corresponding entity can be referred to by both the first or the second NP. Exceptions include the phrase \"body of water\", and expressions such as \"a kind/type/sort of\", which we treat similar to determiners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Handle VP 1 -to-VP 2 Cases Depending on the first verb, we replace VPs of the form X to Y with both X and Y if X is a movement or posture (jump to catch, etc.). Otherwise we distinguish between cases we can only replace with X (wait to jump) and those we can only replace with Y (seem to jump).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "Extract Simpler Constituents Any noun phrase or verb phrase can also be used as a node in the graph and simplified further. We use the Malt dependencies (and the person terms in the entity type lexicon) to identify and extract subject-verb-object chunks which correspond to simpler sentences that we would otherwise not be able to obtain: from \"man laugh(s) while drink(ing)\", we extract \"man laugh\" and \"man drink\", and then further split those into \"man\", \"laugh(s)\", and \"drink\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Templates",
"sec_num": "4.1"
},
{
"text": "The naive approach to graph generation would be to generate all possible strings for each caption. However, this would produce far more strings than can be processed in a reasonable amount of time, and most of these strings would have uninformative denotations, consisting of only a single image. To make graph generation tractable, we use a top-down algorithm which generates the graph from the most generic (root) nodes, and stops at nodes that have a singleton denotation (Figure 2 ). We first identify the set of rules that can apply to each original caption (GenerateRules). These rules are then used to reduce each caption as much as possible. The resulting (maximally generic) strings are added as root nodes to the graph (RootNodes), and added to the queue Q. Q keeps track of all currently possible node expansions. It contains items c, s , which pair the ID of an original caption and its image (c) with a string (s) that corresponds to an existing node in the graph and can be derived from c's caption. When c, s is processed, we check how many captions have generated s so far (Captions(s)). If s has more than a single caption, we use each of the applicable rewrite rules of c's caption to create new strings s that correspond to the children of s in the graph, and push all resulting c, s onto Q. If c is the second caption of s, we also use all of the applicable rewrite rules from the first caption c to create its children. A post-processing step (not shown in Figure 2 ) attaches each original caption to all leaf nodes of the graph to which it can be reduced. Finally, we obtain the denotation of each node s from the set of images whose captions are in Captions(s).",
"cite_spans": [],
"ref_spans": [
{
"start": 475,
"end": 484,
"text": "(Figure 2",
"ref_id": null
},
{
"start": 1478,
"end": 1486,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph Generation",
"sec_num": "4.2"
},
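{
"text": "The top-down generation loop described above can be sketched as follows (a simplified illustration, not the paper's implementation; children_of stands in for the applicable rewrite rules of a caption):

```python
from collections import deque

def generate_graph(root_nodes, children_of):
    # root_nodes: {caption_id: maximally reduced strings of that caption}
    # children_of(c, s): child strings of node s derivable from caption c
    captions = {}   # string -> set of caption ids that generate it
    edges = {}      # string -> set of child strings
    q = deque((c, s) for c, roots in root_nodes.items() for s in roots)
    while q:
        c, s = q.popleft()
        captions.setdefault(s, set()).add(c)
        if len(captions[s]) == 2:
            # s just stopped being a singleton: expand it for both captions
            for c2 in captions[s]:
                for s2 in children_of(c2, s):
                    edges.setdefault(s, set()).add(s2)
                    q.append((c2, s2))
        elif len(captions[s]) > 2:
            for s2 in children_of(c, s):
                edges.setdefault(s, set()).add(s2)
                q.append((c, s2))
    return captions, edges
```

Nodes reached by only one caption are never expanded, which is what keeps generation tractable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Generation",
"sec_num": "4.2"
},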
{
"text": "Size and Coverage On our corpus of 158,439 unique captions and 31,783 images, the denotation graph contains 1,749,097 captions, out of which 230,811 describe more than a single image. Table 1 provides the distribution of the size of denotations. It is perhaps surprising that the 161 captions which describe each over 1,000 images do not just consist of nouns such as person, but also contain simple sentences such as woman standing, adult work, person walk street, or person play instrument. Since the graph is derived from the original captions by very simple syntactic operations, the denotations it captures are most likely incomplete: soccer player contains 251 images, play soccer contains 234 images, and soccer game contains 119 images. We have not yet attempted to identify variants in word order (\"stick tongue out\" vs. \"stick out tongue\") or equivalent choices of preposition (\"look into mirror\" vs. \"look in mirror\"). Despite this brittleness, the current graph already gives us a large number of semantic associations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Denotation Graph",
"sec_num": "5"
},
{
"text": "Size of denotations | s | \u2265 1 | s | \u2265 2 | s | \u2265 5 | s | \u2265 10 | s | \u2265 100 | s | \u2265 1000",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Denotation Graph",
"sec_num": "5"
},
{
"text": "Denotational Similarities The following examples of the similarities found by nPMI and P show that denotational similarities do not simply find topically related events, but instead find events that are related by entailment: If someone is eating lunch, it is likely that they are sitting, and people who sit in a classroom are likely to be listening to somebody. These entailments can be very precise: \"walk up stair\" entails \"ascend\", but not \"descend\"; the reverse is true for \"walk down stair\": Comparing the expressions that are most similar to \"play baseball\" or \"play football\" according to the denotational nPMI and the compositional \u03a3 similarities reveals that the denotational similarity finds a number of actions that are part of the particular sport, while the compositional similarity finds events that are similar to playing baseball (football): A caption never provides a complete description of the depicted scene, but commonsense knowledge often allows us to draw implicit inferences: when somebody mentions a bride, it is quite likely that the picture shows a woman in a wedding dress; a picture of a parent most likely also has a child or baby, etc. In order to compare the utility of denotational and distributional similarities for drawing these inferences, we apply them to an approximate entailment task, which is loosely modeled after the Recognizing Textual Entailment problem (Dagan et al., 2006) , and consists of deciding whether a brief caption h (the hypothesis) can describe the same image as a set of captions P = {p 1 , ..., p N } known to describe the same image (the premises).",
"cite_spans": [
{
"start": 1402,
"end": 1422,
"text": "(Dagan et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Denotation Graph",
"sec_num": "5"
},
{
"text": "P (x|y) x =ascend x =descend y =walk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Denotation Graph",
"sec_num": "5"
},
{
"text": "Data We generate positive and negative items P, h, \u00b1 (Figure 3) as follows: Given an image, any subset of four of its captions form a set of premises. A hypothesis is either a short verb phrase or sentence that corresponds to a node in the denotation graph. By focusing on short hypotheses, we minimize the possibility that they contain extraneous details that cannot be inferred from the premises. Positive examples are generated by choosing a node h as hypothesis and an image i \u2208 h such that exactly one caption of i generates h and the other four captions of i are not descendants of h and hence do not trivially entail h, giving an unfair advantage to denotational approaches. Negative examples are generated by choosing a node h as hypothesis and selecting four of the captions of an image i \u2208 h .",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 63,
"text": "(Figure 3)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Denotation Graph",
"sec_num": "5"
},
{
"text": "A woman with dark hair in bending, open mouthed, towards the back of a dark headed toddler's head. A dark-haired woman has her mouth open and is hugging a little girl while sitting on a red blanket. A grown lady is snuggling on the couch with a young girl and the lady has a frightened look. A mom holding her child on a red sofa while they are both having fun. VP Hypothesis: make face",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Premises:",
"sec_num": null
},
{
"text": "A man editing a black and white photo at a computer with a pencil in his ear. A man in a white shirt is working at a computer. A guy in white t-shirt on a mac computer. A young main is using an Apple computer. S Hypothesis:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Premises:",
"sec_num": null
},
{
"text": "man sit Since our items are created automatically, a positive hypothesis is not necessarily logically entailed by its premises. We have performed a small-scale human evaluation on 300 items (200 positive, 100 negative), each judged independently by the same three judges (inter-annotator agreement: Fleiss-\u03ba = 0.74). Our results indicate that over half (55%) of the positive hypotheses can be inferred from their premises alone without looking at the original image, while almost none of the negative hypotheses (100% for sentences, 96% for verb phrases) can be inferred from their premises. The training items are generated from the captions of 25,000 images, and the test items are generated from a disjoint set of 3,000 images. The VP data set consists of 290,000 training items and 16,000 test items, while the S data set consists of 400,000 training items and 22,000 test items. Half of the items in each set are positive, and the other half are negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Premises:",
"sec_num": null
},
{
"text": "Models All of our models are binary MaxEnt classifiers, trained using MALLET (McCallum, 2002) . We have two baseline models: a plain bag-of-words model (BOW) and a bag-of-words model where we add all hypernyms in our lexicon to the captions before computing their overlap (BOW-H). This is intended to minimize the advantage the denotational features obtain from the hypernym lexicon used to construct the denotation graph. In both cases, a global BOW feature captures the fraction of tokens in the hypothesis that are contained in the premises. Word-specific BOW features capture the product of the frequencies of each word in h and P. All other models extend the BOW-H model.",
"cite_spans": [
{
"start": 77,
"end": 93,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Premises:",
"sec_num": null
},
{
"text": "We compute denotational similarities nPMI and P (Sec-tion 2.4) over the pairs of nodes in a denotation graph that is restricted to the training images. We only consider pairs of nodes n, n if their denotations contain at least 10 images and their intersection contains at least 2 images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Similarity Features",
"sec_num": null
},
{
"text": "To map an item P, h to denotational similarity features, we represent the premises as the set of all nodes P that are ancestors of its captions. A sentential hypothesis is represented as the set of nodes H = {h S , h sbj , h V P , h v , h dobj } that correspond to the sentence (h itself), its subject, its VP and its direct object. A VP hypothesis has only the nodes H = {h V P , hv, h dobj }. In both cases, h dobj may be empty. Both of the denotational similarities nPMI (h, p) and P (h|p) for h \u2208 H, p \u2208 P lead to two constituentspecific features, sum x and max x , (e.g. sum sbj = p sim(h sbj , p), max dobj = max p sim(h dobj , p)) and two global features sum p,h = p,h sim(h, p) and max p,h = max p,h sim(h, p). Each constituent type also has a set of node-specific sum x,s and max x,s features that are on when constituent x in h is equal to the string s and whose value is equal to the constituent-based feature. For P , each constituent (and each constituent-node pair) has an additional feature P (h|P ) = 1 \u2212 n (1 \u2212 P (h|p n )) that estimates the probability that h is generated by some node in the premise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Similarity Features",
"sec_num": null
},
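The feature templates above can be sketched as follows. This is an illustrative sketch, not the paper's code: `sims` and `p_h_given` stand for hypothetical lists of precomputed scores sim(h, p) and P(h|p_n) against each premise node.

```python
# Sketch of the sum/max feature templates and the noisy-or combination
# P(h|P) = 1 - prod_n (1 - P(h|p_n)); inputs are hypothetical score lists.
from math import prod

def sum_max_features(sims):
    """Constituent-specific sum_x and max_x features."""
    return sum(sims), max(sims)

def noisy_or(p_h_given):
    """Probability that h is generated by at least one premise node,
    treating the premise nodes as independent."""
    return 1.0 - prod(1.0 - p for p in p_h_given)
```

The noisy-or form follows directly from independence: h fails to be generated only if every premise node fails to generate it.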
{
"text": "We use two symmetric lexical similarities: standard cosine distance (cos), and Lin (1998)'s similarity (Lin) :",
"cite_spans": [
{
"start": 103,
"end": 108,
"text": "(Lin)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity Features",
"sec_num": null
},
{
"text": "cos(w, w ) = w\u2022w w w Lin(w, w ) = i:w(i)>0\u2227w (i)>0 w(i)+w (i) i w(i)+ i w (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity Features",
"sec_num": null
},
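With w(i) denoting the i-th entry of a word vector, the two symmetric similarities can be sketched as follows (an illustrative sketch over plain lists, not the authors' implementation):

```python
# Sketch of the symmetric lexical similarities over dense, nonnegative
# word vectors (lists of floats).
import math

def cos_sim(w, w2):
    dot = sum(a * b for a, b in zip(w, w2))
    norm = math.sqrt(sum(a * a for a in w)) * math.sqrt(sum(b * b for b in w2))
    return dot / norm if norm else 0.0

def lin_sim(w, w2):
    # Shared weight over dimensions where both vectors are nonzero,
    # normalized by the total weight of both vectors.
    shared = sum(a + b for a, b in zip(w, w2) if a > 0 and b > 0)
    total = sum(w) + sum(w2)
    return shared / total if total else 0.0
```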
{
"text": "We use two directed lexical similarities: Clarke (2009)'s similarity (Clk), and Szpektor and Dagan (2008) 's balanced precision (Bal), which builds on Lin and on Weeds and Weir (2003) 's similarity (W):",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "Szpektor and Dagan (2008)",
"ref_id": "BIBREF33"
},
{
"start": 162,
"end": 183,
"text": "Weeds and Weir (2003)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity Features",
"sec_num": null
},
{
"text": "Clk(w | w ) = i:w(i)>0\u2227w (i)>0 min(w(i), w (i)) i w(i) Bal(w | w ) = W(w | w ) \u00d7 Lin(w, w ) W(w | w ) = i:w(i)>0\u2227w (i)>0 w(i) i w(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity Features",
"sec_num": null
},
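A sketch of the directed similarities, with Bal taken as the geometric mean of the Weeds precision W and Lin (Szpektor and Dagan's balanced-inclusion form). Again an illustrative sketch over plain lists, not the authors' implementation:

```python
# Sketch of the directed lexical similarities; w, w2 are dense nonnegative
# vectors. Bal is the geometric mean of Weeds precision and Lin similarity.
import math

def clk(w, w2):
    """Clarke's similarity: how much of w's weight is covered by w2."""
    covered = sum(min(a, b) for a, b in zip(w, w2) if a > 0 and b > 0)
    return covered / sum(w)

def weeds(w, w2):
    """Weeds & Weir precision: fraction of w's weight on shared dimensions."""
    return sum(a for a, b in zip(w, w2) if a > 0 and b > 0) / sum(w)

def lin_sim(w, w2):
    return sum(a + b for a, b in zip(w, w2) if a > 0 and b > 0) / (sum(w) + sum(w2))

def bal(w, w2):
    return math.sqrt(weeds(w, w2) * lin_sim(w, w2))
```

Unlike cos and Lin, these scores are asymmetric: swapping the arguments generally changes the value, which is what makes them suitable for directional (entailment-like) inference.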
{
"text": "We also use two publicly available resources that provide precomputed similarities, Kotlerman et al. (2010) 's DIRECT noun and verb rules and Chklovski and Pantel (2004) 's VERBOCEAN rules. Both are motivated by the need for numerically quantifiable semantic inferences between predicates. We only use entries that correspond to single tokens (ignoring e.g. phrasal verbs). Each lexical similarity results in the following features: words in the output are represented by a max-sim w feature which captures its maximum similarity with any word in the premises (max-sim w = max w \u2208P sim(w, w )) and by a sum-sim w feature which captures the sum of its similarities to the words in the premises (sum-sim w = w \u2208P sim(w, w )). Global max sim and sum sim features capture the maximal (resp. total) similarity of any word in the hypothesis to the premise.",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "Kotlerman et al. (2010)",
"ref_id": "BIBREF18"
},
{
"start": 142,
"end": 169,
"text": "Chklovski and Pantel (2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity Features",
"sec_num": null
},
{
"text": "We compute distributional and compositional similarities (cos, Lin, Bal, Clk, \u03a3, \u03a0) on our image captions (\"cap\"), the BNC and Gigaword. For each corpus C, we map each word w that appears at least 10 times in C to a vector w C of the nonnegative normalized pointwise mutual information scores (Section 2.4) of w and the 1,000 words (excluding stop words) that occur in the most sentences of C. We generally define P (w) (and P (w, w )) as the fraction of sentences in C in which w (and w ) occur. To allow a direct comparison between distributional and denotational similarities, we first define P (w) (and P (w, w )) over individual captions (\"cap\"), and then, to level the playing field, we redefine P (w) (and P (w, w )) as the fraction of images in whose captions w (and w ) occur (\"img\"), and then we use our lexicon to augment captions with all hypernyms (\"+hyp\"). Finally, we include BNC and Gigaword similarity features (\"all\"). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity Features",
"sec_num": null
},
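The "img"-level vector construction can be sketched as below. This is an illustrative sketch: each image is reduced to the union of its captions' words, P(w) is the fraction of images containing w, and the normalization used here is the common nPMI form PMI(w, c) / -log P(w, c); the paper's exact variant is defined in its Section 2.4.

```python
# Sketch of building normalized-PMI context vectors at the image level.
import math
from collections import Counter

def npmi_vectors(images, context_words):
    """images: list of sets of words (union of one image's captions).
    Returns, for each word, its nonnegative nPMI vector over context_words."""
    n = len(images)
    df = Counter(w for img in images for w in img)  # image frequency of each word
    vectors = {}
    for w in df:
        vec = []
        for c in context_words:
            joint = sum(1 for img in images if w in img and c in img)
            if joint == 0:
                vec.append(0.0)
                continue
            p_joint = joint / n
            pmi = math.log(p_joint / ((df[w] / n) * (df[c] / n)))
            denom = -math.log(p_joint)
            # nPMI is 1 when the pair always co-occurs (denom == 0);
            # negative scores are clipped to 0 (nonnegative vectors).
            vec.append(1.0 if denom == 0 else max(0.0, pmi / denom))
        vectors[w] = vec
    return vectors
```

Switching `images` to individual captions or to hypernym-augmented caption sets yields the "cap" and "+hyp" variants of the same construction.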
{
"text": "We use two standard compositional baselines to combine the word vectors of a sentence into a single vector: addition (s = w 1 + ... + w n , which can be interpreted as a disjunctive operation), and element-wise (Hadamard) multiplication (s = w 1 ... w n , which can be seen as a conjunctive operation). In both cases, we represent the premises (which consist of four captions) as a the sum of each caption's vector p = p 1 + ...p 4 . This gives two compositional similarity features: \u03a3 = cos(p \u03a3 , h \u03a3 ), and \u03a0 = cos(p \u03a0 , h \u03a0 ). Table 2 provides the test accuracy of our models on the VP and S tasks. Adding hypernyms (BOW-H) yields a slight improvement over the basic BOW model. Among the external resources, VERBOCEAN is more beneficial than DIRECT, but neither help as much as in-domain distributional similarities (this may be due to sparsity). Table 2 shows only the simplest (\"Cap\") and the most complex (\"all\") distributional and compositional models, but Table 3 provides accuracies of these models as we go from standard sentencebased co-occurrence counts towards more denotation graph-like co-occurrence counts that are based on all captions describing the same image (\"Img\"), include hypernyms (\"+Hyp\"), and add information from other corpora (\"All\"). The \"+Hyp\" column in Table 3 shows that the denotational metrics clearly outperform any distributional metric when both have access to the same information. Although the distributional models benefit from the BNC and Gigaword-based similarities (\"All\"), their performance is still below that of the denotational models. Among the distributional model, the simple cos performs better than Lin, or the directed Clk and Bal similarities. In all cases, giving models access to different similarity features improves performance. Table 4 shows the results by hypothesis length. 
As the length of h increases, classifiers that use similarities between pairs of words (BOW-H and cos) continue to improve in performance relative to the classifiers that use similarities between phrases and sentences (\u03a3 and nPMI ). Most likely, this is due to the lexical similarities having a larger set of features to work with for longer h. nPMI does especially well on shorter h, likely due to the shorter h having larger denotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 2",
"ref_id": "TABREF6"
},
{
"start": 850,
"end": 857,
"text": "Table 2",
"ref_id": "TABREF6"
},
{
"start": 964,
"end": 971,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 1285,
"end": 1292,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 1789,
"end": 1796,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Compositional Similarity Features",
"sec_num": null
},
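The two compositional baselines and their cosine features can be sketched as follows (illustrative, over plain Python lists of equal-length word vectors):

```python
# Sketch of the compositional baselines: additive (disjunctive) and
# element-wise multiplicative (conjunctive) composition, compared by cosine.
import math

def cos_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))
    return dot / norm if norm else 0.0

def add_compose(word_vecs):
    """Additive composition: s = w_1 + ... + w_n."""
    return [sum(xs) for xs in zip(*word_vecs)]

def mul_compose(word_vecs):
    """Hadamard composition: s = w_1 * ... * w_n, element-wise."""
    out = list(word_vecs[0])
    for w in word_vecs[1:]:
        out = [a * b for a, b in zip(out, w)]
    return out

# The premises (four captions) are represented by summing the caption
# vectors; the Sigma feature is then cos(additive premise vector, additive
# hypothesis vector), and Pi is its multiplicative counterpart.
```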
{
"text": "To assess how the denotational similarities perform on a more established task and domain, we apply them to the 1500 sentence pairs from the MSR Video Description Corpus (Chen and Dolan, 2011) that were annotated for the SemEval 2012 Semantic Textual Similarity (STS) task (Agirre et al., 2012) . The goal of this task is to assign scores between 0 and 5 to a pair of sentences, where 5 indicates equivalence, and 0 unrelatedness. Since this is a symmetric task, we do not consider directed similarities. And because the goal of this experiment is not to achieve the best possible performance on this task, but to compare the effectiveness of denotational and more established similarities, we only compare the impact of denotational similarities with compositional similarities computed on our own corpus. Since the MSR Video corpus associates each video with multiple sentences, it is in principle also amenable to a denotational treatment, but the STS task description explicitly forbids its use.",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Chen and Dolan, 2011)",
"ref_id": "BIBREF5"
},
{
"start": 273,
"end": 294,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Semantic Textual Similarity",
"sec_num": "7"
},
{
"text": "Baseline and Compositional Features Our starting point is B\u00e4r et al. (2013) 's DKPro Similarity, one of the top-performing models from the 2012 STS shared task, which is available and easily modified. It consists of a log-linear regression model trained on multiple text features (word and character n-grams, longest common substring and longest common subsequence, Gabrilovich and Markovitch (2007)'s Explicit Semantic Analysis, and Resnik (1995) 's WordNet-based similarity). We investigate the effects of adding compositional (computed on the vectors obtained from the image-caption training data) and denotational similarity features to this state-of-the-art system.",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "B\u00e4r et al. (2013)",
"ref_id": "BIBREF1"
},
{
"start": 366,
"end": 381,
"text": "Gabrilovich and",
"ref_id": "BIBREF11"
},
{
"start": 382,
"end": 433,
"text": "Markovitch (2007)'s Explicit Semantic Analysis, and",
"ref_id": null
},
{
"start": 434,
"end": 447,
"text": "Resnik (1995)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "7.1"
},
{
"text": "Since the STS task is symmetric, we only consider nPMI similarities. We again represent each sentence s by features based on 5 types of constituents:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Features",
"sec_num": null
},
{
"text": "S = {s S , s sbj , s V P , s v , s dobj }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Features",
"sec_num": null
},
{
"text": "Since sentences might be complex, they might contain multiple constituents of the same type, and we therefore think of each feature as a feature over sets of nodes. For each constituent C we consider two sets of nodes in the denotation graph: C itself (typically leaf nodes), Table 5 : Performance on the STS MSRvid task: DKPro (B\u00e4r et al., 2013) plus compositional (\u03a3, \u03a0) and/or denotational similarities (nPMI ) from our corpus and C anc , their parents and grandparents. For each pair of sentences, C-C similarities compute the similarity of the constituents of the same type, while C-all similarities compute the similarity of a C constituent in one sentence against all constituents in the other sentence. For each pair of constituents we consider three similarity features:",
"cite_spans": [
{
"start": 328,
"end": 346,
"text": "(B\u00e4r et al., 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Denotational Features",
"sec_num": null
},
{
"text": "sim(C 1 , C 2 ), max(sim(C 1 C anc 2 ), sim(C anc 1 , C 2 )), sim(C anc 1 , C anc 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Features",
"sec_num": null
},
{
"text": ". The similarity of two sets of nodes is determined by the maximal similarity of any pair of their elements: sim(C 1 , C 2 ) = max c 1 \u2208C 1 ,c 2 \u2208C 2 nPMI (c 1 , c 2 ). This gives us 15 C-C features and 15 C-all features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Denotational Features",
"sec_num": null
},
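The set-level similarity can be sketched directly from its definition; `npmi` here is a hypothetical lookup into the precomputed denotational similarities:

```python
# Sketch of sim(C1, C2) = max over node pairs of nPMI(c1, c2).
def set_sim(nodes1, nodes2, npmi):
    """Maximal nPMI over all pairs drawn from the two node sets;
    0.0 when either set is empty (e.g. a missing direct object)."""
    return max((npmi(c1, c2) for c1 in nodes1 for c2 in nodes2), default=0.0)
```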
{
"text": "We use the STS 2012 train/test data, normalized in the same way as the image captions for the denotation graph (i.e. we re-tokenize, lemmatize, and remove determiners). Table 5 shows experimental results for four models: DKPro is the off-the-shelf DKProSimilarity model (B\u00e4r et al., 2013) . From our corpus, we either add additive and multiplicative compositional features (\u03a3, \u03a0) from Section 6 (img), the C-C and C-All denotational features based on nPMI , or both compositional and denotational features. Systems are evaluated by the Pearson correlation (r) of their predicted similarity scores to the human-annotated ones. We see that the denotational similarities outperform the compositional similarities, and that including compositional similarity features in addition to denotational similarity features has little effect. For additional comparison, the published numbers for the TakeLab Semantic Text Similarity System (\u0160ari\u0107 et al., 2012), another topperforming model from the 2012 shared task, are r = 0.880 on this dataset.",
"cite_spans": [
{
"start": 270,
"end": 288,
"text": "(B\u00e4r et al., 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.2"
},
{
"text": "Summary of Contributions We have defined novel denotational metrics of linguistic similarity (Section 2), and have shown them to be at least competitive with, if not superior to, distributional similarities for two tasks that require simple semantic inferences (Sections 6, 7), even though our current method of computing them is somewhat brittle (Section 5). We have also introduced two new resources: a large data set of images paired with descriptive captions, and a denotation graph that pairs generalized versions of these captions with their visual denotations, i.e. the sets of images they describe. Both of these resources are freely available (http://nlp.cs.illinois.edu/ Denotation.html) Although the aim of this paper is to show their utility for a purely linguistic task, we believe that they should also be of great interest for people who aim to build systems that automatically associate image with sentences that describe them (Farhadi et al., 2010; Yang et al., 2011; Mitchell et al., 2012; Kuznetsova et al., 2012; Gupta et al., 2012; Hodosh et al., 2013) .",
"cite_spans": [
{
"start": 943,
"end": 965,
"text": "(Farhadi et al., 2010;",
"ref_id": "BIBREF10"
},
{
"start": 966,
"end": 984,
"text": "Yang et al., 2011;",
"ref_id": "BIBREF36"
},
{
"start": 985,
"end": 1007,
"text": "Mitchell et al., 2012;",
"ref_id": "BIBREF27"
},
{
"start": 1008,
"end": 1032,
"text": "Kuznetsova et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 1033,
"end": 1052,
"text": "Gupta et al., 2012;",
"ref_id": "BIBREF13"
},
{
"start": 1053,
"end": 1073,
"text": "Hodosh et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We believe that the work reported in this paper has the potential to open up promising new research directions. There are other data sets that pair images or video with descriptive language, but we have not yet applied our approach to them. Chen and Dolan (2011)'s MSR Video Description Corpus (of which the STS data is a subset) is most similar to ours, but its curated part is significantly smaller. Instead of several independent captions, Grubinger et al. (2006) 's IAPR TC-12 data set contains longer descriptions. Ordonez et al. (2011) harvested 1 million images and their user-generated captions from Flickr to create the SBU Captioned Photo Dataset. These captions tend to be less descriptive of the image. The denotation graph is similar to Berant et al. (2012)'s 'entailment graph', but differs from it in two ways: first, entailment relations in the denotation graph are defined extensionally in terms of the images described by the expressions at each node, and second, nodes in Berant et al.'s entailment graph correspond to generic propositional templates (X treats Y), whereas nodes in our denotation graph correspond to complete propositions (a dog runs).",
"cite_spans": [
{
"start": 443,
"end": 466,
"text": "Grubinger et al. (2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Resources",
"sec_num": null
},
{
"text": "Transactions of the Association for Computational Linguistics, 2 (2014) 67-78. Action Editor: Lillian Lee. Submitted 6/2013; Revised 10/2013; Published 2/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the support of the National Science Foundation under NSF awards 0803603 \"INT2-Medium: Understanding the meaning of images\", 1053856 \"CAREER: Bayesian Models for Lexicalized Grammars\", and 1205627 \"CI-P:Collaborative Research: Visual entailment data set and challenge for the Language and Vision Community\", as well as via an NSF Graduate Research Fellowship to Alice Lai.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2012 task 6: a pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12",
"volume": "1",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: a pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics -Volume 1: Proceedings of the main confer- ence and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Eval- uation, SemEval '12, pages 385-393.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "DKPro Similarity: An Open Source Framework for Text Similarity",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "B\u00e4r",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "121--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel B\u00e4r, Torsten Zesch, and Iryna Gurevych. 2013. DKPro Similarity: An Open Source Framework for Text Similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguis- tics: System Demonstrations, pages 121-126, Sofia, Bulgaria, August.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Situations and attitudes",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Barwise",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Perry",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of Philosophy",
"volume": "78",
"issue": "",
"pages": "668--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon Barwise and John Perry. 1980. Situations and atti- tudes. Journal of Philosophy, 78:668-691.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning entailment relations by global graph structure optimization",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "1",
"pages": "73--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2012. Learning entailment relations by global graph structure optimization. Computational Linguistics, 38(1):73-111.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Encyclopedia of Language and Linguistics, chapter Generics, Habituals and Iteratives",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Carlson",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Carlson, 2005. The Encyclopedia of Language and Linguistics, chapter Generics, Habituals and Iteratives. Elsevier, 2nd edition.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Collecting highly parallel data for paraphrase evaluation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "190--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chen and William Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 190-200, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Verbocean: Mining the web for fine-grained semantic verb relations",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Chklovski and Patrick Pantel. 2004. Verbo- cean: Mining the web for fine-grained semantic verb relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 33-40, Barcelona, Spain, July.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicogra- phy. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Context-theoretic semantics for natural language: an overview",
"authors": [
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Geometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "112--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daoud Clarke. 2009. Context-theoretic semantics for natural language: an overview. In Proceedings of the Workshop on Geometrical Models of Natural Lan- guage Semantics, pages 112-119, Athens, Greece, March.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The PASCAL Recognising Textual Entailment challenge",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning Challenges",
"volume": "3944",
"issue": "",
"pages": "177--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entailment challenge. In Machine Learning Challenges, volume 3944 of Lecture Notes in Computer Science, pages 177-190. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Every picture tells a story: Generating sentences from images",
"authors": [
{
"first": "David",
"middle": [],
"last": "Dowty",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Wall",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Peters ; Reidel",
"suffix": ""
},
{
"first": "Dordrecht",
"middle": [
"Ali"
],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Mohsen",
"middle": [],
"last": "Hejrati",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Amin"
],
"last": "Sadeghi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Cyrus",
"middle": [],
"last": "Rashtchian",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 1981,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV), Part IV",
"volume": "",
"issue": "",
"pages": "15--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Dowty, Robert Wall, and Stanley Peters. 1981. In- troduction to Montague Semantics. Reidel, Dordrecht. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In Proceed- ings of the European Conference on Computer Vision (ECCV), Part IV, pages 15-29, Heraklion, Greece, September.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis",
"authors": [
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI'07",
"volume": "",
"issue": "",
"pages": "1606--1611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Com- puting semantic relatedness using wikipedia-based ex- plicit semantic analysis. In Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI'07, pages 1606-1611.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The IAPR benchmark: A new evaluation resource for visual information systems",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Grubinger",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Clough",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Deselaers",
"suffix": ""
}
],
"year": 2006,
"venue": "OntoImage 2006, Workshop on Language Resources for Content-based Image Retrieval during LREC 2006",
"volume": "",
"issue": "",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Grubinger, Paul Clough, Henning M\u00fcller, and Thomas Deselaers. 2006. The IAPR benchmark: A new evaluation resource for visual information sys- tems. In OntoImage 2006, Workshop on Language Resources for Content-based Image Retrieval during LREC 2006, pages 13-23, Genoa, Italy, May.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Choosing linguistics over vision to describe images",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Yashaswi",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankush Gupta, Yashaswi Verma, and C. Jawahar. 2012. Choosing linguistics over vision to describe images. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, Ontario, Canada, July.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributional structure. Word",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zellig",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10:146-162.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Cross-caption coreference resolution for automatic image understanding",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Cyrus",
"middle": [],
"last": "Rashtchian",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "162--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, Cyrus Rashtchian, and Julia Hockenmaier. 2010. Cross-caption coreference reso- lution for automatic image understanding. In Proceed- ings of the Fourteenth Conference on Computational Natural Language Learning, pages 162-171, Uppsala, Sweden, July.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research (JAIR)",
"volume": "47",
"issue": "",
"pages": "853--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Arti- ficial Intelligence Research (JAIR), 47:853-899.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Incorporating coreference resolution into word sense disambiguation",
"authors": [
{
"first": "Shangfeng",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chengfei",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "6608",
"issue": "",
"pages": "265--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangfeng Hu and Chengfei Liu. 2011. Incorporating coreference resolution into word sense disambigua- tion. In Alexander F. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 6608 of Lecture Notes in Computer Science, pages 265-276. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Directional distributional similarity for lexical inference",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Kotlerman",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Maayan",
"middle": [],
"last": "Zhitomirsky-Geffet",
"suffix": ""
}
],
"year": 2010,
"venue": "Natural Language Engineering",
"volume": "16",
"issue": "4",
"pages": "359--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language En- gineering, 16(4):359-389.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Baby talk: Understanding and generating simple image descriptions",
"authors": [
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Visruth",
"middle": [],
"last": "Premraj",
"suffix": ""
},
{
"first": "Sagnik",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Siming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "1601--1608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2011. Baby talk: Understanding and generat- ing simple image descriptions. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 1601-1608.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Collective generation of natural image descriptions",
"authors": [
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "359--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Polina Kuznetsova, Vicente Ordonez, Alexander Berg, Tamara Berg, and Yejin Choi. 2012. Collective gener- ation of natural image descriptions. In Proceedings of the 50th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 359-368, Jeju Island, Korea, July.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Composing simple image descriptions using web-scale n-grams",
"authors": [
{
"first": "Siming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "220--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siming Li, Girish Kulkarni, Tamara L. Berg, Alexan- der C. Berg, and Yejin Choi. 2011. Composing sim- ple image descriptions using web-scale n-grams. In Proceedings of the Fifteenth Conference on Compu- tational Natural Language Learning (CoNLL), pages 220-228, Portland, OR, USA, June.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Fifteenth International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. An information-theoretic defini- tion of similarity. In Proceedings of the Fifteenth In- ternational Conference on Machine Learning (ICML), pages 296-304, Madison, WI, USA, July.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Modeling semantic containment and exclusion in natural language inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "521--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in nat- ural language inference. In Proceedings of the 22nd International Conference on Computational Linguis- tics (Coling 2008), pages 521-528, Manchester, UK, August.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. Mal- let: A machine learning for language toolkit. http://www.cs.umass.edu/ mccallum/mallet.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluating the inferential utility of lexical-semantic resources",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Shachar Mirkin",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shnarch",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "558--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shachar Mirkin, Ido Dagan, and Eyal Shnarch. 2009. Evaluating the inferential utility of lexical-semantic resources. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 558-566, Athens, Greece, March.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Composition in distributional models of semantics",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "8",
"pages": "1388--1429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Midge: Generating image descriptions from computer vision detections",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Kota",
"middle": [],
"last": "Yamaguchi",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Xufeng",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Alyssa",
"middle": [],
"last": "Mensch",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": "III"
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "747--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret Mitchell, Jesse Dodge, Amit Goyal, Kota Ya- maguchi, Karl Stratos, Xufeng Han, Alyssa Mensch, Alex Berg, Tamara Berg, and Hal Daume III. 2012. Midge: Generating image descriptions from computer vision detections. In Proceedings of the 13th Confer- ence of the European Chapter of the Association for Computational Linguistics (EACL), pages 747-756, Avignon, France, April.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Formal philosophy: papers of Richard Montague",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Montague",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Montague. 1974. Formal philosophy: papers of Richard Montague. Yale University Press, New Haven. Edited by Richmond H. Thomason.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Maltparser: A data-driven parser-generator for dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "2216--2219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Malt- parser: A data-driven parser-generator for dependency parsing. In Proceedings of the International Confer- ence on Language Resources and Evaluation (LREC), pages 2216-2219.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Im2text: Describing images using 1 million captioned photographs",
"authors": [
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems",
"volume": "24",
"issue": "",
"pages": "1143--1151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Advances in Neural Infor- mation Processing Systems 24, pages 1143-1151.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Anaphora resolution with word sense disambiguation",
"authors": [
{
"first": "Judita",
"middle": [],
"last": "Preiss",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems",
"volume": "",
"issue": "",
"pages": "143--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judita Preiss. 2001. Anaphora resolution with word sense disambiguation. In Proceedings of SENSEVAL- 2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 143-146, Toulouse, France, July.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 14th international joint conference on Artificial intelligence",
"volume": "1",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to evalu- ate semantic similarity in a taxonomy. In Proceedings of the 14th international joint conference on Artificial intelligence -Volume 1, IJCAI'95, pages 448-453.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning entailment rules for unary templates",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "849--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor and Ido Dagan. 2008. Learning entailment rules for unary templates. In Proceedings of the 22nd International Conference on Computational Linguis- tics (Coling 2008), pages 849-856, Manchester, UK, August. Coling 2008 Organizing Committee.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Takelab: Systems for measuring semantic text similarity",
"authors": [
{
"first": "Frane",
"middle": [],
"last": "\u0160ari\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
},
{
"first": "Bojana Dalbelo",
"middle": [],
"last": "Ba\u0161i\u0107",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "7--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frane\u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan\u0160najder, and Bojana Dalbelo Ba\u0161i\u0107. 2012. Takelab: Sys- tems for measuring semantic text similarity. In Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation (SemEval 2012), pages 441-448, Montr\u00e9al, Canada, 7-8 June.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A general framework for distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds and David Weir. 2003. A general frame- work for distributional similarity. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 81-88.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Corpus-guided sentence generation of natural images",
"authors": [
{
"first": "Yezhou",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ching",
"middle": [],
"last": "Teo",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": "III"
},
{
"first": "Yiannis",
"middle": [],
"last": "Aloimonos",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "444--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yezhou Yang, Ching Teo, Hal Daume III, and Yiannis Aloimonos. 2011. Corpus-guided sentence genera- tion of natural images. In Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 444-454, Edin- burgh, UK, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Positive examples from the Approximate Entailment tasks.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Distribution of the size of denotations in our graph",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"html": null,
"text": "Test accuracy on Approximate Entailment.",
"type_str": "table",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"html": null,
"text": "All Cap Img +Hyp All cos 67.5 69.3 69.8 71.9 76.1 76.8 77.5 78.9 Lin 62.6 63.4 61.3 70.0 75.4 74.8 75.2 77.8 Bal 62.3 61.9 62.8 69.6 74.7 75.5 75.1 75.3 Clk 62.4 67.3 68.0 69.2 75.4 75.5 76.0 77.5 \u03a0 68.4 70.5 70.5 70.3 75.3 76.6 77.1 77.3 \u03a3 67.8 71.4 71.6 71.4 76.9 78.1 79.1 79.2",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">VP task Cap Img +Hyp \u03a0, \u03a3 69.8 72.7 72.9 72.7 77.0 78.6 79.3 79.6 S task</td></tr><tr><td>nPMI</td><td>74.9</td><td>80.2</td></tr><tr><td>P nPMI , P</td><td>73.8 75.5</td><td>79.5 81.2</td></tr></table>"
},
"TABREF8": {
"num": null,
"html": null,
"text": "Accuracy on hypotheses as various additions are made to the vector corpora. Cap is the image corpus with caption co-occurrence. Img is the image corpus with image co-occurrence. +Hyp augments the image corpus with hypernyms and uses image co-occurrence. All adds the BNC and Gigaword corpora to +Hyp.",
"type_str": "table",
"content": "<table><tr><td>Words in h % of items</td><td>VP task 2 72.8 13.9 13.3 65.3 22.8 11.9 S task 1 3+ 2 3 4+</td></tr><tr><td>BoW-H</td><td>52.0 75.0 80.1 69.1 80.8 84.4</td></tr><tr><td>cos (All)</td><td>68.8 79.4 81.1 75.9 83.9 85.7</td></tr><tr><td>(All) nPMI</td><td>68.1 80.8 79.5 76.5 83.9 85.1 72.0 82.9 82.2 77.3 85.4 86.2</td></tr></table>"
},
"TABREF9": {
"num": null,
"html": null,
"text": "Accuracy on hypotheses of varying length.",
"type_str": "table",
"content": "<table/>"
}
}
}
}