{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:47:38.934165Z"
},
"title": "COCO-EX: A Tool for Linking Concepts from Texts to ConceptNet",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Becker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {
"country": "Germany"
}
},
"email": "mbecker@cl.uni-heidelberg.de"
},
{
"first": "Katharina",
"middle": [],
"last": "Korfhage",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {
"country": "Germany"
}
},
"email": "korfhage@cl.uni-heidelberg.de"
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {
"country": "Germany"
}
},
"email": "frank@cl.uni-heidelberg.de"
}
],
"year": "2021",
"venue": "EACL 2021 System Demonstrations",
"identifiers": {},
"abstract": "In this paper we present COCO-EX, a tool for Extracting Concepts from texts and linking them to the ConceptNet knowledge graph. COCO-EX extracts meaningful concepts from natural language texts and maps them to conjunct concept nodes in ConceptNet, utilizing the maximum of relational information stored in the ConceptNet knowledge graph. COCO-EX takes into account the challenging characteristics of ConceptNet, namely that-unlike conventional knowledge graphs-nodes are represented as non-canonicalized, free-form text. This means that i) concepts are not normalized; ii) they often consist of several different, nested phrase types; and iii) many of them are uninformative, over-specific, or misspelled. A commonly used shortcut to circumvent these problems is to apply string matching. We compare COCO-EX to this method and show that COCO-EX enables the extraction of meaningful, important concepts rather than overspecific or uninformative ones, and allows us to assess more relational information stored in the knowledge graph. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present COCO-EX, a tool for Extracting Concepts from texts and linking them to the ConceptNet knowledge graph. COCO-EX extracts meaningful concepts from natural language texts and maps them to conjunct concept nodes in ConceptNet, utilizing the maximum of relational information stored in the ConceptNet knowledge graph. COCO-EX takes into account the challenging characteristics of ConceptNet, namely that-unlike conventional knowledge graphs-nodes are represented as non-canonicalized, free-form text. This means that i) concepts are not normalized; ii) they often consist of several different, nested phrase types; and iii) many of them are uninformative, over-specific, or misspelled. A commonly used shortcut to circumvent these problems is to apply string matching. We compare COCO-EX to this method and show that COCO-EX enables the extraction of meaningful, important concepts rather than overspecific or uninformative ones, and allows us to assess more relational information stored in the knowledge graph. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "ConceptNet ) is a semantic network which contains general commonsense facts about the world, e.g. Birds can fly or Computers are used for sending e-mails (Liebermann, 2008) . It originates from the crowdsourcing project Open Mind Common Sense (Speer et al., 2008) that acquired commonsense knowledge from contributions over the web. The current version also includes expert-created resources such as Word-Net (Fellbaum, 1998) and JMDict (Breen, 2004) , other crowdsourced resources such as Wiktionary, knowledge obtained through games with a purpose such as Verbosity, and automatically extracted knowledge (cf. Speer et al. (2008) ). Knowledge facts in ConceptNet are represented as triples, e.g. [dog,ISA,domestic animal] . The current version, ConceptNet 5, comprises 37 relations, such as USEDFOR, ISA, PARTOF, or LOCATEDAT.",
"cite_spans": [
{
"start": 154,
"end": 172,
"text": "(Liebermann, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 243,
"end": 263,
"text": "(Speer et al., 2008)",
"ref_id": "BIBREF28"
},
{
"start": 409,
"end": 425,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF7"
},
{
"start": 437,
"end": 450,
"text": "(Breen, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 612,
"end": 631,
"text": "Speer et al. (2008)",
"ref_id": "BIBREF28"
},
{
"start": 698,
"end": 723,
"text": "[dog,ISA,domestic animal]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ConceptNet has been proven a useful resource of background knowledge for various NLP downstream tasks, and is thus widely used, e.g., for reading comprehension (Mihaylov and Frank, 2018) , machine comprehension (Wang et al., 2018; Gonz\u00e1lez et al., 2018) , dialog modelling (Young et al., 2018) , argument classification (Paul et al., 2020) , textual entailment (Weissenborn et al., 2018) , question answering (Ostermann et al., 2018) or for explaining sentiment (Paul and Frank, 2019) .",
"cite_spans": [
{
"start": 160,
"end": 186,
"text": "(Mihaylov and Frank, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 211,
"end": 230,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 231,
"end": 253,
"text": "Gonz\u00e1lez et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 273,
"end": 293,
"text": "(Young et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 320,
"end": 339,
"text": "(Paul et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 361,
"end": 387,
"text": "(Weissenborn et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 409,
"end": 433,
"text": "(Ostermann et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 462,
"end": 484,
"text": "(Paul and Frank, 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As opposed to conventional knowledge bases such as NELL (Carlson et al., 2010) , Freebase (Bollacker et al., 2008) , or YAGO (Nickel et al., 2012) , the nodes in ConceptNet are represented as non-canonicalized, free-form text. This means that (I) concept nodes are not normalized: e.g. bake cake, bake cakes, baking cake, and baking cakes are represented as distinct nodes; likewise bin bag, binbag, bin bags, and bin-bag are separate nodes in ConceptNet. (II) concept nodes often consist of multi-word expressions, which can be very long and complex. Often they consist of several nested phrase types, e.g., buying the ingredients of the recipe, or a friend was celebrating a birthday. (III) Since large parts of ConceptNet have been crowdsourced, it contains noise (e.g., typos), uninformative concepts (e.g., there, it's), or very specific concepts (e.g., the second concept in the triple: [compute,HASPROP,more complex than pencil]).",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "(Carlson et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 90,
"end": 114,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 125,
"end": 146,
"text": "(Nickel et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These specific properties lead to a larger number of nodes and a substantially sparser graph compared to conventional knowledge bases. This in turn is challenging for tasks such as knowledge base completion (cf. Li et al. (2016) ; Saito et al. (2018) ; Bosselut et al. (2019) ; Malaviya et al. (2020) ); the semantic representation of nodes and edges (Speer and Lowry-Duda, 2017) ; or the learning of new relations (dos Santos et al., 2015; Becker et al., 2019; Trisedya et al., 2019) .",
"cite_spans": [
{
"start": 212,
"end": 228,
"text": "Li et al. (2016)",
"ref_id": "BIBREF12"
},
{
"start": 231,
"end": 250,
"text": "Saito et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 253,
"end": 275,
"text": "Bosselut et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 278,
"end": 300,
"text": "Malaviya et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 351,
"end": 379,
"text": "(Speer and Lowry-Duda, 2017)",
"ref_id": "BIBREF29"
},
{
"start": 415,
"end": 440,
"text": "(dos Santos et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 441,
"end": 461,
"text": "Becker et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 462,
"end": 484,
"text": "Trisedya et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Moreover, non-canonicalized nodes become challenging when merging knowledge bases, as in Faralli et al. (2020) , who introduce a graph database merging multiple hypernymy graphs extracted from ConceptNet, DBpedia, WebIsAGraph, WordNet, and Wikipedia. They find that only 25% of the edges connect nodes from ConceptNet to other databases, which can be traced back to the fact that ConceptNet nodes are non-canonicalized, as opposed to common knowledge bases.",
"cite_spans": [
{
"start": 89,
"end": 110,
"text": "Faralli et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, free-form concept nodes become problematic when we aim to project a ConceptNet subgraph from natural language texts by mapping phrases from natural language text to nodes in ConceptNet. In recent approaches, simple string matching has been applied to perform such a mapping (e.g. Lin et al. 2019; Wang et al. (2020) ). Given the non-normalized nature of the concepts in ConceptNet, this can, however, result in an incomplete and noisy mapping: e.g., if the word \"brains\" occurs in a text, it can be mapped to the ConceptNet node brains (which is connected by 131 edges within ConceptNet), but not to brain (which is connected by 1799 edges). Therefore, a lot of relational knowledge stored in ConceptNet gets lost when mapping natural language text to concepts in ConceptNet via string matching. Moreover, since ConceptNet contains many nodes that don't represent meaningful concepts (e.g. yes, there, it's, the), simple string matching can lead to the extraction of concepts that will most likely be useless for downstream applications.",
"cite_spans": [
{
"start": 306,
"end": 324,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by these observations, we built a Concept Extraction Tool for ConceptNet, COCO-EX, which we present in this paper. COCO-EX is a tool written in Python 3.6 that selects meaningful concepts, possibly consisting of multiple tokens from natural language texts; it maps them to a collection of concept nodes in ConceptNet, utilizing the maximum of relational information stored in the knowledge graph. It is thus perfectly suited for identifying and extracting concepts from natural language texts and mapping them to ConceptNet, e.g., to project knowledge subgraphs from texts (Paul and Frank, 2019) , or for detecting and classifying knowledge relations instantiated within texts (Becker et al., 2019) .",
"cite_spans": [
{
"start": 583,
"end": 605,
"text": "(Paul and Frank, 2019)",
"ref_id": "BIBREF22"
},
{
"start": 687,
"end": 708,
"text": "(Becker et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe our Concept Extraction Tool COCO-EX in Section 2. In Section 3 we evaluate the benefits of COCO-EX in a practical application scenario, comparing it to simple string matching, by evaluating the retrieved concepts and their connectivity both automatically and manually. We conclude with a summary of our results in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "COCO-EX is a pipeline implementation comprising several stages as shown in Figure 1 . In",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Step 1, we extract candidate phrases from a given text, which we preprocess in Step 2. In Step 3, we map the preprocessed phrases to ConceptNet concepts, which we preprocess in the same manner: We first create a dictionary based on ConceptNet, where we gather all concepts that are conceptually related (that is, referring to a similar or the same entity or event), but represented as distinct nodes. In this dictionary we then look up the preprocessed candidate phrases and get all ConceptNet nodes which contain them. In order to avoid obtaining conceptually unrelated nodes, in Step 4 we establish a method that allows us to filter out nodes that are not similar enough to the candidate phrase using similarity metrics and vector space representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Step 1: Extracting Candidate Phrase Types. We start by extracting candidate phrases from a given text using the Stanford Constituency parser (Mi and Huang, 2015) . We extract noun phrases, verb phrases and adjective phrases. 2 We find that some verb phrases are very long and specific and therefore unlikely to find exact matches in ConceptNet (e.g., \"be sorted into different wheelie bins\"). Yet, ConceptNet concepts often consist of general verb-object phrases, such as walk the dog; cook dinner; bake a cake. To accommodate this, we create, for every verbal phrase we extract from the text, additional versions (i.e., chunks) that exclude subordinated prepositional phrases and/or noun phrases (e.g., for \"be sorted into different wheelie bins\" we additionally extract \"be sorted into\" and \"be sorted\"). Addressing the fact that nodes in ConceptNet are of different lengths and often consist of several nested phrases, we keep all the original complex verbal phrases; the reduced chunks; and the split-off nested, subordinated phrases, which we again split into chunks (here: \"different wheelie bins\", \"wheelie bins\", and \"bins\").",
"cite_spans": [
{
"start": 141,
"end": 161,
"text": "(Mi and Huang, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 225,
"end": 226,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Step 2: Preprocessing Candidate Phrase Types and ConceptNet Nodes. Next, we preprocess the candidate phrases we extracted from the text to prepare the mapping in Step 3. We apply spaCy (Honnibal and Montani, 2017) to lemmatize the candidate phrases extracted from the texts, and remove articles, pronouns, adverbs, conjunctions, interjections and punctuation. We apply the very same process in Step 3 to the nodes in ConceptNet, which are not normalized, in order to build a dictionary from ConceptNet.",
"cite_spans": [
{
"start": 185,
"end": 213,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Step 3: Matching Candidate Phrase Types to a Dictionary Based on ConceptNet. We then map the preprocessed phrases to the preprocessed ConceptNet concepts as follows: We create a dictionary based on ConceptNet where we collect all concepts that are conceptually related -in the sense that they involve at least one common content word -but are represented as distinct nodes in ConceptNet. I.e., we aim to subsume, e.g., dog, dogs, nice dog, and my neighbour's dog under one entry in the dictionary (cf. Figure 2) . In our dictionary, keys are lemmatized words contained in concept node phrases (e.g. dog for the concept my dog), and the corresponding value assigned to a key is a list of all ConceptNet nodes that contain this lemma (e.g. dog, dogs, my dog, my neighbor's dog), as determined by the lemmatization of the nodes (see Step 2 for the applied process). Therefore, in our dictionary all ConceptNet nodes that contain the same lemma, the lemma of the key, are clustered together in one entry. Note that we lemmatize the ConceptNet nodes only for the purpose of mapping and clustering, while they remain unchanged (in their original form and inflection) as values in the dictionary. I.e., we compare a key (lemma) to the lemmatized version of the concepts, and include all nodes, or concept phrases in their original, inflected form, that contain this lemma.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 511,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "An example of how we create an entry in the dictionary is given in Figure 2 and Figure 3 : for the key dog , all conceptually related nodes are retrieved from ConceptNet ( Figure 2 ) by matching the (lemmatized) key and the lemmatized Concept-Net concepts (Figure 3 , left side). All the retrieved ConceptNet nodes that contain the key lemma in their lemmatized form are stored as the key's values (middle of Figure 3) . In case the lemmatized candidate phrase from the text contains further lemmas, we apply the same procedure for each of these, and construct additional entries, if they have not yet been created and stored.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 80,
"end": 88,
"text": "Figure 3",
"ref_id": null
},
{
"start": 172,
"end": 180,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 256,
"end": 265,
"text": "(Figure 3",
"ref_id": null
},
{
"start": 409,
"end": 418,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Using this dictionary we are now able to assess the maximum of relational information stored in the ConceptNet knowledge graph for a given candidate phrase from a text, since it allows us to jointly look up the in- and outgoing edges of all values (nodes) assigned to the same key, e.g., [dogs,ISA,domestic animal]; [dog,HASPROPERTY,nice]; ...) ( Figure 3 , right-hand side). In case a candidate phrase contains multiple lemmas, we collect the union of ConceptNet nodes defined for the respective lemmas (keys) as their values, and apply a filtering step, which we describe below, to select the concept nodes that best correspond to the complex phrase.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 356,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Specifically, when looking up extracted candidate phrases that contain a single lemma (e.g. dog ), we consider the complete list of nodes stored in the dictionary for that lemma (key) -that is, all concepts containing (inflected versions of) dog , including also multiword phrases which are linked with other keys. When looking up extracted candidate phrases that contain more than one lemma (e.g. \"walk the dog\"), we obtain sets of ConceptNet nodes (values) that are defined for each (non-stopword) lemma (key) -here: dog and walk -and retrieve all ConceptNet nodes from their respective list of values. From these sets, instead of building their union, we construct their intersection, which yields the set of phrases from all keys' values that contain the maximum of lemmas contained in the candidate phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "For our example \"walk the dog\", we would obtain the two lemmas walk and dog , together with their values:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "walk \u2192 walk, walks, walking, walking home, walking a dog, long walk, walk the dog, ... ; and dog \u2192 dog, dogs, nice dog, my neighbor's dog, walking a dog, walk the dog, ...;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "and extract walking a dog and walk the dog that are contained as values in both keys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "During the mapping process that collects values (ConceptNet concepts) for the lemmatized keys of candidate phrases, we are also resolving ambiguities. E.g., the forms fly or flies can be either a noun or a verb. We resolve this ambiguity by comparing the POS tags obtained during preprocessing the extracted candidate phrases to the POS tags that are associated with concepts in ConceptNet. 3 Specifically, we retrieve POS information for the extracted candidate phrases by applying the POS tagger implemented in spaCy (Honnibal and Montani, 2017) on the sentence level, while for ConceptNet nodes we assess the POS labels available as metadata. In case we find several concepts with the same surface form but different POS tags in ConceptNet (e.g. fly/noun and fly/verb), we use the POS annotations from the extracted candidate phrases and from ConceptNet tags to restrict the mapping to matching POS, hence we do not include any concepts with conflicting POS information in the list of values for the phrase's keys.",
"cite_spans": [
{
"start": 519,
"end": 547,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Figure 3 : Example of the ConceptNet Dictionary entry for dog . Left: lemmatized ConceptNet nodes (grey) that contain dog (underlined); middle: CN dictionary entry (containing the original CN nodes); right: relational knowledge (in- and outgoing edges for each value (CN node) assigned to the key) which can be retrieved from ConceptNet based on the dictionary entry.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "To summarize, the dictionary we obtain from Step 3 allows us to look up concepts for any preprocessed candidate phrases, and obtain from it all ConceptNet nodes which contain them or inflected versions of them. In case of multiple lemmas contained in a candidate phrase, we retrieve all nodes that contain all lemmas included in the given phrase, by computing an intersection over the values associated with all keys (lemmas) evoked by the phrase. 4 Since we lemmatize both the ConceptNet nodes and the extracted candidate phrases as described above, we maximize the number of matches, and hence, the associated ConceptNet relation tuples, while selecting maximally specific nodes. At the same time, since we construct chunked phrases from the extracted concepts, we also allow for more constrained matches (limited, e.g., to a single lemma) with equally constrained ConceptNet concepts, preventing over-specific phrases and an ensuing loss of recall. Finally, we apply POS filtering, and hence avoid the retrieval of ConceptNet concepts that do not match the POS category of the concepts mentioned in the candidate phrase, relying on the sentential context of the phrase candidate for disambiguation.",
"cite_spans": [
{
"start": 448,
"end": 449,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Step 4: Constraining the Mapping to ConceptNet Concepts. While in Step 3 we constrain the selected concept nodes by intersection in case the phrase candidate contains multiple lemmata, we still obtain many ConceptNet nodes when mapping short phrases containing a single content word to ConceptNet, since we retrieve all nodes that include the lemma of the candidate phrase. In practice, this yields a huge set of concepts that contain not only this lemma, but many other content words not present in the candidate phrase -possibly conceptually unrelated nodes that we want to omit. For example, if the candidate phrase is \"dog\", we map it to the ConceptNet nodes dog and dogs, but also to nodes that are not strictly conceptually related, such as feeding my dogs, dogs are my favourite animals, it's raining cats and dogs, etc. We therefore establish a method that allows us to filter out nodes that are not similar enough to the candidate phrase, and hence are assumed to be conceptually unrelated, which we describe in the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "We filter the nodes (values) for each lemma (key) by calculating the similarity between the ConceptNet concepts and the extracted candidate phrase. We calculate similarity in terms of length (by token or char length) and in terms of semantic similarity (using word embeddings and similarity metrics). We experimented with different similarity metrics: we tried Dice Coefficient (S\u00f8rensen, 1948) , Jaccard Coefficient (Jaccard, 1902) , Minimum Edit Distance, Word Mover's Distance (Kusner et al., 2015) , and Cosine Distance, with different similarity thresholds. For the metrics that require word representations in vector space (Word Mover's Distance and Cosine Distance), we tried different embeddings (Numberbatch , Word2Vec trained on GoogleNews (Mikolov et al., 2013) , and GloVe (Pennington et al., 2014) ), where we compute representations for multiword terms by averaging their embeddings. We also consider differences in phrase lengths: here we compare the length of the ConceptNet nodes' concept phrases to the length of the candidate phrase -by number of tokens and of characters. E.g. when comparing the candidate phrase \"my dog\" to the nodes (a) dogs and (b) many dogs, we obtain for (a) a difference in the number of tokens by 1 and of characters by 1, and for (b) in the number of tokens by 0 and of characters by 3.",
"cite_spans": [
{
"start": 378,
"end": 394,
"text": "(S\u00f8rensen, 1948)",
"ref_id": "BIBREF30"
},
{
"start": 417,
"end": 432,
"text": "(Jaccard, 1902)",
"ref_id": "BIBREF10"
},
{
"start": 480,
"end": 501,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 750,
"end": 772,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 785,
"end": 810,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "We evaluated the output of several configurations manually in terms of how well the filtered nodes fit the extracted candidate phrase, and found the following configurations to yield the highest coverage and lowest noise: we allow for a maximum token length difference of 1 and/or a maximum character difference of 10, and a minimum Dice coefficient of 0.85. The other configurations are implemented as well (as command line parameters), so users can experiment with different settings easily. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COCO-EX: Extracting Concepts from Text and Mapping them to ConceptNet",
"sec_num": "2"
},
{
"text": "Recent approaches that map natural language text to nodes in ConceptNet apply simple string matching. Wang et al. (2020) for example use ConceptNet in order to retrieve multi-hop knowledge paths as background information for improving the task of question answering. They map concepts that appear in questions and answers from the two benchmark datasets, CommonsenseQA (Talmor et al., 2019) and OpenBookQA , to ConceptNet using plain string matching. Irrespective of the question answering task, we want to evaluate the two methods of linking concepts from texts to ConceptNet (plain string matching vs. COCO-EX) by comparing the number of concepts that could be retrieved from ConceptNet by both methods, respectively; and by evaluating the quality of the retrieved concepts, with regard to their coverage and informativity, as well as the amount of utilized relational knowledge from the ConceptNet knowledge graph.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF34"
},
{
"start": 369,
"end": 390,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "3"
},
{
"text": "We reimplement the string matching method and make it comparable to COCO-EX by retrieving all noun phrases, verb phrases and adjective phrases and their nested phrases (as we do for COCO-EX). Additionally, as in COCO-EX, we filter these phrases by removing articles, pronouns, adverbs, conjunctions, interjections and punctuation, and keep the original phrases and the chunked versions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "3"
},
{
"text": "The counts of concepts retrieved by simple string matching vs. using COCO-EX are displayed in Table 1 . We find that for the CommonsenseQA dataset, more concepts are linked to ConceptNet from the questions when using string matching, while with COCO-EX we can link more concepts from the answers (Table 1) . For OpenBookQA, the number of extracted concepts for the questions is similar for both methods, while again we can link more concepts from the answers with COCO-EX.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 296,
"end": 305,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Applications",
"sec_num": "3"
},
{
"text": "For evaluating concept quality, we set up a small annotation experiment where we provided our annotators with 50 questions randomly sampled from CommonsenseQA and OpenBookQA. For each question, our annotators evaluated whether all meaningful concepts were extracted (coverage, in a binary setting (yes/no)); and if/how many informative (and thus, wanted) concepts are among the extracted concepts (which can be interpreted as reverse precision). 5 For each dataset, two annotators with linguistic background performed annotations. We measure annotator agreement in terms of Cohen's Kappa and achieve an agreement of 78%. Remaining conflicts were resolved by an expert annotator (one of the authors). We evaluate automatically the number of concepts that could be accessed in ConceptNet, in terms of the number of in- and outgoing edges connecting the node(s) which have been annotated as informative (wanted), identified by simple string matching vs. all nodes obtained by COCO-EX through keys and values. The results of our manual evaluation experiment are displayed in Table 2 . We find that the coverage (if all meaningful concepts were extracted, evaluated in a binary setting: yes/no) is higher for CommonsenseQA when using COCO-EX and higher for OpenBookQA when applying string matching.",
"cite_spans": [],
"ref_spans": [
{
"start": 1071,
"end": 1078,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Applications",
"sec_num": "3"
},
{
"text": "Next, we evaluate the informativeness of the extracted concepts. We find that the ratio between informative (wanted) and uninformative concepts (unwanted) is much better when using COCO-EX as opposed to simple string matching on both datasets (cf. Table 2 ). Finally, we also evaluate the amount of relational information stored in the ConceptNet knowledge graph which can be retrieved by looking up in- and outgoing nodes from the nodes rated as informative. 5 Here we find that with COCO-EX, much more relational information of ConceptNet can be accessed, indicating again the superiority of this method compared to simple string matching. 5 Our annotation manual can be found here: https://github.com/Heidelberg-NLP/CoCo-Ex/blob/master/CoCo-Ex_Annotation_Manual.pdf",
"cite_spans": [
{
"start": 460,
"end": 461,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Applications",
"sec_num": "3"
},
{
"text": "In this paper we presented COCO-EX, a tool for Extracting Concepts from texts and linking them to the ConceptNet knowledge graph. As opposed to the common shortcut method of simply matching strings from texts to ConceptNet nodes, COCO-EX extracts meaningful concepts from texts and maps them to collections of concept nodes in ConceptNet, which enables us to assess the maximum of relational information stored in the ConceptNet knowledge graph. COCO-EX takes into account that concepts in ConceptNet are represented as non-canonicalized, free-form text and are often complex, noisy, uninformative, and/or over-specific. We evaluated COCO-EX against the method of simple string matching, which confirmed our hypotheses that (i) COCO-EX improves the precision of the mapping by enabling the extraction of meaningful, important concepts rather than overspecific or uninformative ones, and (ii) COCO-EX allows us to utilize the maximum of relational information stored in the knowledge graph, a step towards overcoming the well-known sparsity issue of commonsense knowledge graphs such as ConceptNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "We provide a demo video (https://www.youtube.com/watch?v=bgqVhE2vR9A&feature=youtu.be) and the code (https://github.com/Heidelberg-NLP/CoCo-Ex) for COCO-EX.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We extract leaves (= tokens) of all subtrees that have one of the following phrase types or POS-tags: 'NP', 'VP', 'ADJP', 'JJ', 'JJR', 'JJS', 'NN', 'NNS', 'NNP', 'NNPS', 'VB', 'VBG', 'VBD', 'VBN', 'VBP', 'VBZ'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We find POS information for a majority of the concepts contained in ConceptNet, as used in specific tuples. Where this information is not given, we do not apply any filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This holds as long as the lemmas identified in the textual phrases can be identified within ConceptNet's concept nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the DFG within the project ExpLAIN as part of the Priority Program RATIO (SPP-1999). We thank our annotators for their contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Assessing the difficulty of classifying ConceptNet relations in a multi-label classification setting",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Staniek",
"suffix": ""
},
{
"first": "Vivi",
"middle": [],
"last": "Nastase",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "RELATIONS -Workshop on meaning relations between phrases and sentences",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {
"DOI": [
"10.18653/v1/W19-0801"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Becker, Michael Staniek, Vivi Nastase, and Anette Frank. 2019. Assessing the difficulty of clas- sifying ConceptNet relations in a multi-label classifi- cation setting. In RELATIONS -Workshop on mean- ing relations between phrases and sentences, pages 1-15, Gothenburg, Sweden. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1376616.1376746"
]
},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Col- laboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the 2008",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "ACM SIGMOD International Conference on Management of Data, SIGMOD '08",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ACM SIGMOD International Conference on Man- agement of Data, SIGMOD '08, pages 1247-1250, New York, NY, USA. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "COMET: Commonsense transformers for automatic knowledge graph construction",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4762--4779",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1470"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for au- tomatic knowledge graph construction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "JMdict: a Japanese-multilingual dictionary",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Breen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Multilingual Linguistic Resources",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jim Breen. 2004. JMdict: a Japanese-multilingual dic- tionary. In Proceedings of the Workshop on Multi- lingual Linguistic Resources, pages 65-72, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Toward an architecture for never-ending language learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "E",
"middle": [
"R"
],
"last": "Hruschka",
"suffix": ""
},
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "1306--1313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E.R. Hr- uschka Jr., and T.M. Mitchell. 2010. Toward an ar- chitecture for never-ending language learning. In Proceedings of the Conference on Artificial Intelli- gence (AAAI), pages 1306-1313. AAAI Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multiple knowledge GraphDB (MKGDB)",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Faralli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "Farid",
"middle": [],
"last": "Yusifli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2325--2331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Faralli, Paola Velardi, and Farid Yusifli. 2020. Multiple knowledge GraphDB (MKGDB). In Pro- ceedings of The 12th Language Resources and Eval- uation Conference, pages 2325-2331, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "ELiRF-UPV at SemEval-2019 task 3: Snapshot ensemble of hierarchical convolutional neural networks for contextual emotion detection",
"authors": [
{
"first": "Jos\u00e9-\u00c1ngel",
"middle": [],
"last": "Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Llu\u00eds-F",
"middle": [],
"last": "Hurtado",
"suffix": ""
},
{
"first": "Ferran",
"middle": [],
"last": "Pla",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "195--199",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2031"
]
},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9-\u00c1ngel Gonz\u00e1lez, Llu\u00eds-F. Hurtado, and Ferran Pla. 2019. ELiRF-UPV at SemEval-2019 task 3: Snap- shot ensemble of hierarchical convolutional neural networks for contextual emotion detection. In Pro- ceedings of the 13th International Workshop on Semantic Evaluation, pages 195-199, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lois de distribution florale dans la zone alpine",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Jaccard",
"suffix": ""
}
],
"year": 1902,
"venue": "Bulletin de la Soci\u00e9t\u00e9 Vaudoise des Sciences Naturelles",
"volume": "38",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Jaccard. 1902. Lois de distribution florale dans la zone alpine, volume 38. Bulletin de la Soci\u00e9t\u00e9 Vaudoise des Sciences Naturelles.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "From Word Embeddings To Document Distances",
"authors": [
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt J. Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From Word Embeddings To Doc- ument Distances. In Proceedings of the 32nd Inter- national Conference on Machine Learning (ICML), pages 957-966.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Commonsense Knowledge Base Completion",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aynaz",
"middle": [],
"last": "Taheri",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1445--1455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense Knowledge Base Completion. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1445-1455, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Usable AI Requires Commonsense Knowledge",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Liebermann",
"suffix": ""
}
],
"year": 2008,
"venue": "Workshop on Usable artificial intelligence, held in conjunction with the Conference on Human Factors in Computing Systems (CHI)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Liebermann. 2008. Usable AI Requires Com- monsense Knowledge. In Workshop on Usable arti- ficial intelligence, held in conjunction with the Con- ference on Human Factors in Computing Systems (CHI), pages 1-5.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "KagNet: Knowledge-aware graph networks for commonsense reasoning",
"authors": [
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Xinyue",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2829--2839",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xi- ang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2829-2839, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Commonsense knowledge base completion with structural and semantic context",
"authors": [
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2925--2933",
"other_ids": {
"DOI": [
"10.1609/aaai.v34i03.5684"
]
},
"num": null,
"urls": [],
"raw_text": "Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Choi Yejin. 2020. Commonsense knowledge base completion with structural and se- mantic context. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, pages 2925-2933.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Shift-reduce constituency parsing with dynamic programming and POS tag lattice",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1030--1035",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Haitao Mi and Liang Huang. 2015. Shift-reduce con- stituency parsing with dynamic programming and POS tag lattice. In Proceedings of the 2015 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1030-1035, Denver, Col- orado. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Can a suit of armor conduct electricity? a new dataset for open book question answering",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2381--2391",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1260"
]
},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "821--832",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov and Anette Frank. 2018. Knowledge- able reader: Enhancing cloze-style reading compre- hension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 821-832, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In International Conference on Learning Representations, pages 1-12.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Factorizing yago: Scalable machine learning for linked data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st International Conference on World Wide Web, WWW '12",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {
"DOI": [
"10.1145/2187836.2187874"
]
},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2012. Factorizing yago: Scalable machine learning for linked data. In Proceedings of the 21st International Conference on World Wide Web, WWW '12, page 271-280, New York, NY, USA. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "SemEval-2018 task 11: Machine comprehension using commonsense knowledge",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Ostermann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "747--757",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1119"
]
},
"num": null,
"urls": [],
"raw_text": "Simon Ostermann, Michael Roth, Ashutosh Modi, Ste- fan Thater, and Manfred Pinkal. 2018. SemEval- 2018 task 11: Machine comprehension using com- monsense knowledge. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 747-757, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs",
"authors": [
{
"first": "Debjit",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "3671--3681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debjit Paul and Anette Frank. 2019. Ranking and Se- lecting Multi-Hop Knowledge Paths to Better Pre- dict Human Needs. In Proceedings of the Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, volume 1, pages 3671-3681, Minneapolis, Minnesota, USA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Argumentative Relation Classification with Background Knowledge",
"authors": [
{
"first": "Debjit",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Kobbe",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 8th International Conference on Computational Models of Argument (COMMA 2020)",
"volume": "",
"issue": "",
"pages": "319--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debjit Paul, Juri Opitz, Maria Becker, Jonathan Kobbe, Graeme Hirst, and Anette Frank. 2020. Argu- mentative Relation Classification with Background Knowledge. In Proceedings of the 8th International Conference on Computational Models of Argument (COMMA 2020), pages 319-330.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Commonsense knowledge base completion and generation",
"authors": [
{
"first": "Itsumi",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Kyosuke",
"middle": [],
"last": "Nishida",
"suffix": ""
},
{
"first": "Hisako",
"middle": [],
"last": "Asano",
"suffix": ""
},
{
"first": "Junji",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "141--150",
"other_ids": {
"DOI": [
"10.18653/v1/K18-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Itsumi Saito, Kyosuke Nishida, Hisako Asano, and Junji Tomita. 2018. Commonsense knowledge base completion and generation. In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 141-150, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Classifying relations by ranking with convolutional neural networks",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "626--634",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1061"
]
},
"num": null,
"urls": [],
"raw_text": "C\u00edcero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 626-634, Beijing, China. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of 31St AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "444--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge. In Proceedings of 31St AAAI Conference on Artificial Intelligence, pages 444- 451.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "AnalogySpace: Reducing the Dimensionality of Common Sense Knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 23rd National Conference on Artificial Intelligence",
"volume": "1",
"issue": "",
"pages": "548--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Catherine Havasi, and Henry Lieberman. 2008. AnalogySpace: Reducing the Dimensional- ity of Common Sense Knowledge. In Proceedings of the 23rd National Conference on Artificial Intelli- gence -Volume 1, pages 548-553. AAAI Press.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Concept-Net at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Lowry-Duda",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "85--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer and Joanna Lowry-Duda. 2017. Concept- Net at SemEval-2017 Task 2: Extending Word Em- beddings with Multilingual Relational Knowledge. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 85- 89, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. Kongelige Danske Videnskabernes Selskab",
"authors": [
{
"first": "Thorvald",
"middle": [],
"last": "S\u00f8rensen",
"suffix": ""
}
],
"year": 1948,
"venue": "",
"volume": "5",
"issue": "",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorvald S\u00f8rensen. 1948. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to anal- yses of the vegetation on Danish commons. Kon- gelige Danske Videnskabernes Selskab. 5 (4): 1-34.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4149--4158",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1421"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural relation extraction for knowledge base enrichment",
"authors": [
{
"first": "Bayu",
"middle": [
"Distiawan"
],
"last": "Trisedya",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "229--240",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1023"
]
},
"num": null,
"urls": [],
"raw_text": "Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. 2019. Neural relation extrac- tion for knowledge base enrichment. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 229-240, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Yuanfudao at SemEval-2018 task 11: Three-way attention and relational knowledge for commonsense machine comprehension",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jingming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "758--762",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1120"
]
},
"num": null,
"urls": [],
"raw_text": "Liang Wang, Meng Sun, Wei Zhao, Kewei Shen, and Jingming Liu. 2018. Yuanfudao at SemEval-2018 task 11: Three-way attention and relational knowl- edge for commonsense machine comprehension. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 758-762, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Connecting the dots: A knowledgeable path generator for commonsense question answering",
"authors": [
{
"first": "Peifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ilievski",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4129--4140",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.369"
]
},
"num": null,
"urls": [],
"raw_text": "Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, and Xiang Ren. 2020. Connecting the dots: A knowledgeable path generator for commonsense question answering. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4129-4140, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Dynamic Integration of Background Knowledge in Neural NLU Systems",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Weissenborn, Tomas Kocisky, and Chris Dyer. 2018. Dynamic Integration of Background Knowl- edge in Neural NLU Systems. In International Con- ference on Learning Representations (ICLR) 2018.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Augmenting End-to-End Dialog Systems with Commonsense Knowledge",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Iti",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Subham",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4970--4977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Aug- menting End-to-End Dialog Systems with Common- sense Knowledge. Proceedings of the AAAI Confer- ence on Artificial Intelligence, pages 4970-4977.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Our pipeline for extracting and mapping phrases from texts to nodes in ConceptNet.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Collecting conceptually related nodes in Con-ceptNet, here: for the phrase \"the dog\".",
"type_str": "figure",
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Number of concepts linked to ConceptNet</td></tr><tr><td>by simple string matching vs. using COCO-EX. Com-</td></tr><tr><td>monsenseQA contains 12,247 questions with 5 answer</td></tr><tr><td>choices each, and OpenBookQA provides 6,000 4-way</td></tr><tr><td>multiple-choice questions.</td></tr></table>",
"text": ""
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Manual evaluation of linked concepts from 25 questions for each dataset. For each question, our annotators evaluated if all meaningful concepts were extracted (Coverage; in a binary evaluation setup yes/no); and how many of the extracted concepts are informative (wanted) (Ratio wanted/wanted+unwanted) . For all informative (wanted) concepts, we then looked up the number of edges connecting these nodes in ConceptNet (in-and outgoing edges)."
}
}
}
}