ACL-OCL / Base_JSON /prefixC /json /cogalex /2020.cogalex-1.10.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:03.520169Z"
},
"title": "Definition Extraction Feature Analysis: From Canonical to Naturally-Occurring Definitions",
"authors": [
{
"first": "Mireia",
"middle": [
"Roig"
],
"last": "Mirapeix",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "United Kingdom"
}
},
"email": "roigmirapeixm@cardiff.ac.uk"
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "United Kingdom"
}
},
"email": "camachocolladosj@cardiff.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Textual definitions constitute a fundamental source of knowledge when seeking the meaning of words, and they are the cornerstone of lexical resources like glossaries, dictionaries, encyclopedias or thesauri. In this paper, we present an in-depth analytical study on the main features relevant to the task of definition extraction. Our main goal is to study whether linguistic structures from canonical definitions (the Aristotelian or genus et differentia model) can be leveraged to retrieve definitions from corpora in different domains of knowledge and textual genres alike. To this end, we develop a simple linear classifier and analyze the contribution of several (sets of) linguistic features. Finally, as a result of our experiments, we also shed light on the particularities of existing benchmarks as well as the most challenging aspects of the task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Textual definitions constitute a fundamental source of knowledge when seeking the meaning of words, and they are the cornerstone of lexical resources like glossaries, dictionaries, encyclopedias or thesauri. In this paper, we present an in-depth analytical study on the main features relevant to the task of definition extraction. Our main goal is to study whether linguistic structures from canonical definitions (the Aristotelian or genus et differentia model) can be leveraged to retrieve definitions from corpora in different domains of knowledge and textual genres alike. To this end, we develop a simple linear classifier and analyze the contribution of several (sets of) linguistic features. Finally, as a result of our experiments, we also shed light on the particularities of existing benchmarks as well as the most challenging aspects of the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Definition Extraction (DE) is the task of extracting textual definitions from naturally occurring texts . The development of models able to identify definitions in freely occurring text has many applications such as the automatic generation of dictionaries, thesauri and glossaries, as well as e-learning materials and lexical taxonomies (Westerhout, 2009; Del Gaudio et al., 2014; Jurgens and Pilehvar, 2015; Espinosa-Anke et al., 2016) . Moreover, definitional knowledge has proven to be a useful signal for improving language models in downstream NLP tasks (Joshi et al., 2020) . The task of DE is currently approached almost unanimously as a supervised classification problem, and the latest methods have demonstrated an outstanding performance, to the point of reducing the error rate to less than 2% in some datasets (Veyseh et al., 2019) . However, the high performance of these models could be mainly due to artifacts in the data, and thus they may not generalize to different domains.",
"cite_spans": [
{
"start": 338,
"end": 356,
"text": "(Westerhout, 2009;",
"ref_id": "BIBREF22"
},
{
"start": 357,
"end": 381,
"text": "Del Gaudio et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 382,
"end": 409,
"text": "Jurgens and Pilehvar, 2015;",
"ref_id": "BIBREF9"
},
{
"start": 410,
"end": 437,
"text": "Espinosa-Anke et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 560,
"end": 580,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 823,
"end": 844,
"text": "(Veyseh et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main aim of this paper is to analyze to what extent it is possible to learn a universal definition extraction system from canonical definitions, and to understand the core differences that currently exist in standard evaluation testbeds. In particular, we propose experiments where we develop a machine learning model able to distinguish definitions with high accuracy in a corpus of canonical definitions, and later evaluate this model on different datasets (pertaining to different domains and genres). We use two evaluation datasets, namely: the Word-Class Lattices (WCL) dataset from , and DEFT, from the SemEval 2020 Task 6 -Subtask 1 (Spala et al., 2019) . The former provides an annotated set of definitions and non-definitions with syntactic patterns similar to those of definition sentences from Wikipedia (what the authors call syntactically plausible false definitions). The latter presents a robust English corpus that explores the less straightforward cases of term-definition structures in free and semistructured text from different domains (i.e., biology, history and government), and which is not limited to well-defined, structured, and narrow conditions.",
"cite_spans": [
{
"start": 644,
"end": 664,
"text": "(Spala et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We include a detailed descriptive analysis of both corpora that identifies similarities and differences between definitions and non-definitions, later used for feature selection and analysis. We come to conclusions regarding the discriminative power of certain linguistic features. Interestingly, these features alone do not have a strong effect on the results, but combining feature sets of a different nature can improve performance, even in target corpora with heterogeneous domains and non-canonical definitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To train and first evaluate the model, we use the annotated WCL dataset. This dataset contains sentences from a sample of the WCL corpus that includes both definitions and non-definitions with syntactic patterns very similar to those found in definitions (e.g. \"Snowcap is unmistakable\"). The syntactic patterns are simple and represent what we could refer to as canonical definitions. We will test the performance of a model trained on this dataset, and evaluate it on the DEFT dataset, which contains a set of definitions and non-definitions from various topics such as biology, history and government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the last years, DE has received notable attention for its applications in Natural Language Processing, Computational Linguistics and Computational Lexicography (Espinosa-Anke and Saggion, 2014), as it has been proven to be applicable to glossary generation (Muresan and Klavans, 2002; Park et al., 2002) , terminological databases (Nakamura and Nagao, 1988) or question answering systems (Saggion and Gaizauskas, 2004; Cui et al., 2005) , among many others.",
"cite_spans": [
{
"start": 262,
"end": 289,
"text": "(Muresan and Klavans, 2002;",
"ref_id": "BIBREF10"
},
{
"start": 290,
"end": 308,
"text": "Park et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 336,
"end": 362,
"text": "(Nakamura and Nagao, 1988)",
"ref_id": "BIBREF11"
},
{
"start": 393,
"end": 423,
"text": "(Saggion and Gaizauskas, 2004;",
"ref_id": "BIBREF16"
},
{
"start": 424,
"end": 441,
"text": "Cui et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Research on DE has seen contributions where the task is typically proposed as a binary classification problem (whether a sentence is a definition or not), although with exceptions (Jin et al., 2013) . DE has also been studied in languages other than English, e.g., Slavic languages (Przepi\u00f3rkowski et al., 2007) , Spanish (Sierra et al., 2008) or Portuguese (Del Gaudio et al., 2014) . Many of these approaches use symbolic methods depending on manually crafted or semi-automatically learned lexico-syntactic patterns (Hovy et al., 2003; Westerhout and Monachesi, 2007 ) such as 'refers to' or 'is a'.",
"cite_spans": [
{
"start": 180,
"end": 198,
"text": "(Jin et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 282,
"end": 311,
"text": "(Przepi\u00f3rkowski et al., 2007)",
"ref_id": "BIBREF15"
},
{
"start": 322,
"end": 343,
"text": "(Sierra et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 358,
"end": 383,
"text": "(Del Gaudio et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 518,
"end": 537,
"text": "(Hovy et al., 2003;",
"ref_id": "BIBREF6"
},
{
"start": 538,
"end": 568,
"text": "Westerhout and Monachesi, 2007",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A notable contribution to DE is the Word Class Lattices model , which explores DE on the WCL dataset, a set of encyclopedic definitions and distractors, and which we use in this paper. In a subsequent contribution, Espinosa-Anke and Saggion (2014) present a supervised approach in which only syntactic features derived from dependency relations are used, and whose results are reported to be higher than the WCL method. For identifying definitions with higher linguistic variability, a weakly supervised approach is presented in Espinosa-Anke et al. (2015). And finally, models based on neural networks have been leveraged for exploiting both long and short-range dependencies, either combining CNNs and LSTMs (Espinosa-Anke and Schockaert, 2018) or BERT (Veyseh et al., 2019) , and which are currently the highest performing models on WCL.",
"cite_spans": [
{
"start": 750,
"end": 776,
"text": "BERT (Veyseh et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we present the datasets utilized for our analysis, namely WCL (Section 3.1) and DEFT (Section 3.2), and provide a descriptive analysis comparing both datasets (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The WCL dataset contains 1,772 definitions and 2,847 non-definitions. Each instance is extracted from Wikipedia, and definitions follow a canonical structure based on the genus et differentia model (i.e., 'X is a Y which Z'). A preliminary (and shallow) analysis that can be performed without any linguistic detail revolves around comparing the length of definitions vs. non-definitions. Specifically, definitions have 27.5 words on average, while non-definitions have an average length of 27.2 words. The median for definitions and non-definitions, respectively, is 25 and 24. Although the difference is quite small, it seems that encyclopedic definitions are in general slightly longer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL dataset",
"sec_num": "3.1"
},
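The length comparison above is straightforward to reproduce. A minimal sketch, assuming the sentences are available as plain lists of strings (the toy data below is illustrative, not from the released corpus):

```python
from statistics import mean, median

def length_stats(sentences):
    """Return (mean, median) of whitespace-token counts per sentence."""
    lengths = [len(s.split()) for s in sentences]
    return mean(lengths), median(lengths)

# Toy stand-ins for the WCL sentences; on the real corpus the paper
# reports means of 27.5 (definitions) vs 27.2 (non-definitions).
definitions = ["A lexicon is the vocabulary of a language ."]
non_definitions = ["The vocabulary grew quickly during the 19th century ."]
def_stats = length_stats(definitions)
nondef_stats = length_stats(non_definitions)
```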
{
"text": "A particular feature of the WCL dataset is that each candidate is composed of a sentence with part-of-speech and phrase chunking annotation. For definitional sentences, an additional set of tags is provided, which identifies core components in definitions such as DEFINIENDUM (term defined), DEFINITOR (definition trigger), DEFINIENS (cluster of words that define the definiendum) and REST (rest of the sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL dataset",
"sec_num": "3.1"
},
{
"text": "Let us now look at the average length of each of these definition components (see Table 1 ). The DEFINIENS is typically the most important part of definition sentences (where the definition actually happens); however, it is also the shortest one, followed by the DEFINIENDUM. Moreover, REST is generally the longest but also the one with the highest variance, which fits in with the fact that it is a non-essential part of the definition that can contain varying amounts of information. These results seem to suggest that the part of the sentence that actually makes it a definition (definiens and definiendum) is, on many occasions, quite short compared to the overall length of the sentence. Table 1 : Summary statistics of the length of definiendum, definiens and rest.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 694,
"end": 701,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "WCL dataset",
"sec_num": "3.1"
},
{
"text": "The original annotation of the WCL dataset also identifies the main verb of the definition, i.e. verbs that are not in the REST part (Table 2(a) lists the frequent ones). As expected, the verb \"to be\" tops the list, with four different conjugations taking up the top 5 verbs. Note that these 5 verbs together appear in 1,670 of the 1,772 definitions in the WCL corpus, which could be a sign that the appearance of one of these verbs is a relevant feature to identify definitions. We can also find the most common hypernyms in Table 2 : 5 most common main verbs and hypernyms in definitions in the WCL dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "WCL dataset",
"sec_num": "3.1"
},
{
"text": "The DEFT dataset (Spala et al., 2019) contains 853 sentences, of which 279 are definitions and 574 are non-definitions. It presents a corpus of natural language term-definition pairs embracing different topics such as biology, history, physics, psychology, economics, sociology and government. Sentences have been classified following a new schema that explores how explicit in-text definitions and glosses work in free and semi-structured text, especially those whose term-definition pairs cross a sentence boundary and those lacking explicit definition phrases. Thus, they identify as definitions sentences where the relation between a term and a definition requires more deduction than finding a definition verb phrase. Their focus is to identify terms and definitions, but not necessarily the verb that may or may not connect the two, which identifies as definitions a broader variety of structures. In this case, the average length of definitions is 27.38 and non-definitions have an average length of 23.84. The median length for definitions and non-definitions is 26 and 22 respectively.",
"cite_spans": [
{
"start": 17,
"end": 37,
"text": "(Spala et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DEFT dataset",
"sec_num": "3.2"
},
{
"text": "In this section we perform a short descriptive analysis comparing the two datasets. Continuing with the instance length analysis, Table 3 shows statistics for both datasets, this time comparing the length of positive (definition) and negative (non-definition) sentences. As can be observed, definitions generally tend to be longer than non-definitions, although the main part of the definition is quite short compared to its overall length. Moreover, while the distribution of definitions/non-definitions is similar, the number of instances is considerably larger in the WCL corpus, which is important to note, as we will use it as our training set in our experiments (cf. Section 4.1).",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Descriptive analysis",
"sec_num": "3.3"
},
{
"text": "Regarding frequency of specific POS tags, in Section 3.1 we have seen how some verbs are extremely abundant in definitions in the WCL corpus. However, these are quite common verbs in general in these datasets, as Figures 1(a) and 1(b) show. Note that, for instance, 'is' is more frequent in definitions in both datasets, with an average frequency greater than 1% in both datasets (1.4% and 1.1% in WCL and DEFT, respectively). However, 'was' is actually the opposite and is more frequent in non-definitions while the others are much less common and do not seem to be as present in both types of sentences. Concerning hypernyms (a.k.a. genus in Aristotelian definitions), although the counts are much lower for hypernyms than for verbs (Table 2) , in Figure 2 we illustrate how the hypernyms that appear at least 5 times in the WCL dataset are usually more common in definitions in both datasets. The presence of such hypernyms is likely to be more related to the topics defined than the structure of the sentence, but having any kind of hypernym is probably a relevant feature of definitions, as canonical or lexicographic definitions have (or should have) at least one.",
"cite_spans": [],
"ref_spans": [
{
"start": 735,
"end": 744,
"text": "(Table 2)",
"ref_id": null
},
{
"start": 750,
"end": 758,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Descriptive analysis",
"sec_num": "3.3"
},
{
"text": "We observed that definitions and non-definitions present different frequencies of POS and chunk patterns. In the WCL dataset it seems that definitions have a higher frequency of noun phrases (denoted as 'NP' or 'NP NN', for instance), while non-definitions have more prepositional phrases ('PP' or 'PP IN'). However, we do not observe these patterns in the DEFT dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Descriptive analysis",
"sec_num": "3.3"
},
{
"text": "Finally, we computed the most common PoS-based pattern structures 1 (occurring at least 5 times) in the main part of definitions from the WCL dataset. We have observed that these structures are much more common in definitions than in non-definitions in both corpora, which seems to indicate that definitions tend to use a particular set of morphosyntactic structures which can be strong indicators of definitional knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Descriptive analysis",
"sec_num": "3.3"
},
{
"text": "In this section we explain our experiments in definition extraction. In particular, we train a supervised model on the WCL corpus of canonical definitions, and test it on the same corpus (via cross-validation) and on the DEFT corpus. With this experiment we aim to understand the features relevant for definition extraction and whether features from canonical definitions can be extrapolated to other domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Section 4.1 describes the experimental settings and Section 4.2 presents the main results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "In the following we explain the experimental setting for our definition extraction experiments. In Section 4.1.1 we explain our supervised definition extraction model and its features inspired by our descriptive analysis. Then, we explain the data preprocessing (Section 4.1.2) and training details (Section 4.1.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "4.1"
},
{
"text": "As our supervised model, we use a Support Vector Machine (SVM), given its efficiency and effectiveness in handling a large set of linguistic features. The model uses an RBF kernel and a combination of different features. The main one is based on n-grams of range 1 to 3 from the tagged sentences, i.e. each token is represented as its chunk tag, PoS tag and word form separated by underscores. The other features are based on the findings from Section 3.3. For each training set, the model computes the 5 most common definition verbs (i.e. in the main part of the definition), the 20 most common hypernyms, the 10 most common compositions of chunk and PoS tags, the 6 most common chunk tags 2 , the 10 most common structures of chunk and PoS tags combined, the 10 most common structures of chunk tags, and the maximum length of definitions. Using this, we obtain the following new features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and features",
"sec_num": "4.1.1"
},
{
"text": "\u2022 VERB: Count of common verbs present in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and features",
"sec_num": "4.1.1"
},
{
"text": "\u2022 HYP: Count of common hypernyms present in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and features",
"sec_num": "4.1.1"
},
{
"text": "\u2022 CT-Ch, CT-Ch&PoS: For each of the 6 most common chunk tags and the 10 most common combinations of chunk and PoS, number of occurrences divided by total number of tags in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and features",
"sec_num": "4.1.1"
},
{
"text": "\u2022 STR-Ch, STR-Ch&PoS: For each of the 10 most common structures (chunk and combination of chunk and PoS respectively), a binary variable indicating if the structure is present in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and features",
"sec_num": "4.1.1"
},
{
"text": "\u2022 LEN: The length of the sentence divided by the maximum length of a definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and features",
"sec_num": "4.1.1"
},
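A minimal sketch of how these additional features could be assembled into a vector. Only VERB, HYP and LEN are shown; the chunk-based features (CT-* and STR-*) follow the same counting and matching pattern. The verb and hypernym lists below are illustrative placeholders, since the paper derives them from the training set:

```python
# Placeholder lists: the paper computes these from the WCL training data
# (5 most common definition verbs, 20 most common hypernyms).
COMMON_VERBS = ["is", "are", "was", "refers", "used"]
COMMON_HYPERNYMS = ["species", "term", "game"]

def extra_features(tokens, max_def_len=80):
    """VERB, HYP and LEN features for one tokenized sentence."""
    verb = sum(t in COMMON_VERBS for t in tokens)      # VERB: common-verb count
    hyp = sum(t in COMMON_HYPERNYMS for t in tokens)   # HYP: common-hypernym count
    length = len(tokens) / max_def_len                 # LEN: normalized length
    return [verb, hyp, length]

features = extra_features("a sonnet is a kind of poem".split())
```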
{
"text": "As each corpus contains different information and has a different structure, their preprocessing is slightly different, although the output has the same format: a matrix from which the features are obtained. As the WCL dataset contains all the definition annotations, we store each part in a separate column, together with the annotated verbs and hypernyms. We later retag the sentences with PoS and chunk tags, using, respectively, the NLTK 3 pos tag function and the RegexpParser with the following grammar, which distinguishes 3 phrase types: verb phrases (containing a verb, sometimes preceded by a modal or 'to', with possible adverbs and another verb after a coordinating conjunction), prepositional phrases (starting with a preposition and followed by determiners, cardinal numbers, nouns or pronouns) and noun phrases (including a noun or pronoun, sometimes preceded by determiners, cardinal numbers or adjectives).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1.2"
},
{
"text": "parser =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1.2"
},
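The grammar string itself did not survive this parse (only the `parser =` fragment remains), but the textual description above allows a hedged reconstruction with NLTK's `RegexpParser`; the exact tag patterns below are assumptions, not the paper's original rules:

```python
import nltk

# Approximation of the three phrase types described in the text.
grammar = r"""
  VP: {<MD|TO>?<VB.*>+<RB.*>*}        # verb phrase: optional modal/'to', verb(s), adverbs
  PP: {<IN><DT|CD|NN.*|PRP.*>+}       # prepositional phrase
  NP: {<DT|CD|JJ.*>*<NN.*|PRP.*>+}    # noun phrase
"""
parser = nltk.RegexpParser(grammar)

# Pre-tagged toy sentence, so no tagger model needs to be downloaded.
tagged = [("A", "DT"), ("dog", "NN"), ("is", "VBZ"), ("a", "DT"), ("mammal", "NN")]
tree = parser.parse(tagged)
```

The rules are applied as successive chunking stages, so the verb-phrase rule fires before the noun-phrase rule, mirroring the order in which the text describes them.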
{
"text": "The output is a matrix where each row corresponds to a sentence and each column has different information such as the sentence (tagged and not), the term being defined, the hypernyms annotated in the sentence, the main verb of the definition, the label and different columns that contain the tags (both PoS tags and chunk or only chunk) for the whole sentence and for the main part of the definition (definiendum and definiens). For non-definitions, some columns such as the verb, hypernym and tags of the main part of the definition contain NaN values, as they only exist for definitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1.2"
},
{
"text": "The preprocessing for the DEFT corpus is simpler: we tag the sentences using the same rules and save the sentences, tags and labels in different columns. Numbers at the beginning of sentences have been removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1.2"
},
{
"text": "As mentioned earlier, the model was trained on the WCL dataset. We used sklearn 4 for training and evaluating the SVM model. For the experiments, the SVM hyperparameters were chosen after testing the following values for both C and gamma: [0.0001, 0.001, 0.1, 1, 5, 10, 50, 100], tuned on a validation set. Finally, the evaluation on the WCL dataset is performed through 10-fold cross-validation, with 10% of the corpus used for validation in each fold. Then, the model is trained on the whole WCL corpus and evaluated on the DEFT corpus. The final hyperparameters of the SVM were C = 5 and gamma = 0.1. In addition to the SVM model, as a baseline we trained a Naive Bayes classifier with the same features, using its standard implementation in sklearn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training procedure",
"sec_num": "4.1.3"
},
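A minimal sklearn sketch of this setup. The synthetic feature matrix stands in for the real WCL features, and grid search with cross-validation replaces the paper's separate validation set for brevity; C=5 and gamma=0.1 are the values the paper reports:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the WCL feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Candidate values reported in the paper for both C and gamma.
values = [0.0001, 0.001, 0.1, 1, 5, 10, 50, 100]
search = GridSearchCV(SVC(kernel="rbf"), {"C": values, "gamma": values}, cv=3)
search.fit(X, y)

# Final model with the selected hyperparameters, evaluated with 10-fold
# cross-validation as in the WCL experiments.
scores = cross_val_score(SVC(kernel="rbf", C=5, gamma=0.1), X, y, cv=10)
```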
{
"text": "The results on the WCL dataset are displayed in Table 4 . As a naive baseline we include the results of a system that would identify all sentences as definitions (referred to as Naive(all defs) in the table).",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "As can be observed, all metrics are above 0.97 and the average metrics are all close to 0.98. This proves the reliability of the SVM model with all our proposed linguistic features, which attains the highest performance of any non-linear model in the task. As a point of comparison, recent works have reported slightly worse results using highly parametrized models such as convolutional and recurrent neural networks (Espinosa-Anke and Schockaert, 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Accuracy Table 4 : Results of the SVM model on the WCL dataset using 10-fold cross validation. Precision, recall and F1 are macro metrics. The last two rows include the average results of the two baselines considered.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fold",
"sec_num": null
},
{
"text": "When testing the model on the DEFT corpus, the results are not close to being as satisfactory as they are in the WCL dataset, as we can see in Table 5 . The model trained on the WCL dataset performs significantly worse than other recent models (Spala et al., 2020) , which could be expected given the different nature of the definitions. In the following section we provide a more extensive analysis that also attempts to explain the performance difference between the two datasets.",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "(Spala et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fold",
"sec_num": null
},
{
"text": "Accuracy Table 5 : DEFT results of the SVM and baselines trained on the WCL corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "5.1 Feature analysis Figure 5 shows the features of the model with highest \u03c7 2 . Some of them are compositions extremely common in definitions such as 'is a', 'is an' or 'refers', but we also find others more topic related such as 'mythology' or 'greek', which would probably be artifacts from the WCL dataset. For a detailed view of each additional feature's significance, we ran the model removing one or more features at a time. Moreover, we also ran the model using the n-gram features only, with different combinations of words and tags. We can find this feature analysis in Table 6 . Although the accuracy in the 10-fold cross-validation setting does not change significantly when removing only one feature, and even improves slightly in the case of hypernyms, the results do change when evaluating on the DEFT corpus. We observe significantly lower accuracy when removing more than one feature at a time (last two rows), decreasing regularly when removing more features and obtaining between 0.93 and 0.94 using only n-gram features, which indicates that these features rely on and interact with each other to improve accuracy. The differences are more significant when evaluating the model on the DEFT corpus, where the accuracy goes from around 0.70 when using all features to 0.55 when removing some of them. This proves that the additional features are relevant to identify definitions and improve the metrics significantly, especially in unseen corpora. In fact, using n-gram features only achieves an F1 of 0.934 in the in-domain WCL corpus, and a significantly lower 0.575 performance on the DEFT corpus. Table 6 : Results of the SVM model (trained on the WCL dataset) using different sets of features. For accuracy, * indicates when the results start to show differences that are statistically significant (p-value< 0.05 according to a t-test) with respect to the model using all features (first row).",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 580,
"end": 587,
"text": "Table 6",
"ref_id": null
},
{
"start": 1619,
"end": 1626,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Furthermore, we can see in Table 7 how the n-gram model is significantly more accurate when using both PoS and chunk tags and words rather than only some of them, which indicates that both words and structure of the sentence determine whether it is a definition or not.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "In Table 8 we can see some examples of predictions from the model that provide a more in-depth view. We observe how the model is successful in correctly predicting sentences with unorthodox structures, such as non-definitions using the verb \"is\", and syntactically complex definitions. Moreover, some of the sentences that were wrongly predicted as definitions could be considered definitions, but they are not defining the target word. The false negatives present complex structures probably unseen by the model. Thus, evidence suggests the model succeeds most of the time at identifying definitions and non-definitions, and has satisfactorily incorporated the distinctive characteristics of each kind of sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.2"
},
{
"text": "As for the DEFT dataset, as expected from the obtained accuracy, the model makes numerous mistakes. Table 7 : Results of the SVM model using different types of n-gram features only.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.2"
},
{
"text": "It has a large number of false negatives (23.9%), making its predictions less reliable in this setting. The model does a good job at detecting true negatives (91.1% of all negative instances), also due to the fact that most sentences are predicted as non-definitions. However, some false negatives do not seem to contain definitional information. Something similar happens with false positives, as some of them would most likely be considered definitions under more flexible criteria. Thus, although the performance of the model on this dataset seems relatively low overall, this is probably because of the different tagging criteria, as many sentences that appeared as incorrectly predicted could be labelled correctly under the annotation criteria used in the WCL dataset. For instance, the sentence \"Elimination blackjack is a tournament format of blackjack.\" could be considered a definition with the criteria used in the DEFT dataset as it presents a direct-defines relation, while \"It carries the correct amino acid to the site of protein synthesis\" would not be considered a definition in the WCL corpus as it is not an actual textual definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.2"
},
{
"text": "Predicted nodef * Predicted def *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.2"
},
{
"text": "His death is deeply mourned by Alleycats fans as seen in the press and media. The term \"carbonate\" is also commonly used to refer to one of these salts or carbonate minerals. Covering the head is respectful in Sikhism and if a man is not wearing a turban, then a rum\u0101l must be worn before entering the Gurdwara.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "Elimination blackjack is a tournament format of blackjack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "The following are links to pictures of Myddfai taken by the club. Balderton Old Boys also are a local football team.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "def",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "The Callitrichinae form one of the four families of New World monkeys now recognised",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "The Aurochs or urus (Bos primigenius) was a very large type of cattle that was prevalent in Europe until its extinction in 1627. In everyday usage, risk is often used synonymously with the probability of a known loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "In the 19th century the term anglicanism was coined to describe the common religious tradition of these churches. Both equivocation and amphiboly are fallacies arising from ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "The term biotic refers to the condition of living organisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "Predicted nodef * Predicted def *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCL nodef",
"sec_num": null
},
{
"text": "Living things are highly organized and structured , following a hierarchy that can be examined on a scale from small to large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEFT nodef",
"sec_num": null
},
{
"text": "Transfer RNA ( tRNA ) is one of the smallest of the four types of RNA , usually 70 -90 nucleotides long. At its most fundamental level , life is made up of matter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEFT nodef",
"sec_num": null
},
{
"text": "A microphyll is small and has a simple vascular system. It consists of a nucleus surrounded by electrons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEFT nodef",
"sec_num": null
},
{
"text": "An individual with dyslexia exhibits an inability to correctly process letters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEFT nodef",
"sec_num": null
},
{
"text": "It carries the correct amino acid to the site of protein synthesis. The atom is the smallest and most fundamental unit of matter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "def",
"sec_num": null
},
{
"text": "The rays themselves are called nuclear radiation. A prokaryote is a simple, mostly single-celled ( unicellular ) organism that lacks a nucleus, or any other membrane-bound organelle. Herbivores eat plant material , and planktivores eat plankton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "def",
"sec_num": null
},
{
"text": "Matter is any substance that occupies space and has mass. Table 8 : Definition (def * ) and non-definition (nodef * ) predictions on both WCL and DEFT ground truth (for def and nodef classes).",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "def",
"sec_num": null
},
{
"text": "In conclusion, extracting definitions from texts is a challenging research task, which is highly dependant on the distribution and scope of the application. Nonetheless, in this paper we have shown that a simple SVM model trained on a dataset with canonical definitions using linguistic features can provide high performance while helping us understand the task better. This model has also been evaluated on a corpus with heterogeneous domains, which also provided us with insights on the qualitative difference among definitions in each setting. Our descriptive analysis discovered interesting differences and similarities between definitions and non-definitions that can be used to differentiate them automatically. The inclusion of linguistic features based on our analysis improved significantly the performance of the model. As future work it would be interesting to extend the analysis to corpora of different characteristics and languages. As an straightforward application, a model with accurate performance across corpora would allow the automatic creation of dictionaries from general or specialized domains, as well as to better understand certain topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "PoS-based patterns are any ordered sequences of tags (PoS or chunk) such as 'NP DT' (noun phrase followed by a determiner).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "After the 6th most common, the appearances are significantly lower and hardly relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nltk.org/ 4 https://scikit-learn.org/stable/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers for their feedback and Emrah Ozturk for his help in the early stages of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generic soft pattern models for definitional question answering",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '05",
"volume": "",
"issue": "",
"pages": "384--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Cui, Min-Yen Kan, and Tat-Seng Chua. 2005. Generic soft pattern models for definitional question answer- ing. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '05, page 384-391, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Coping with highly imbalanced datasets: A case study with definition extraction in a multilingual setting",
"authors": [
{
"first": "Rosa",
"middle": [],
"last": "Del Gaudio",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Batista",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "Branco",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "3",
"pages": "327--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosa Del Gaudio, Gustavo Batista, and Ant\u00f3nio Branco. 2014. Coping with highly imbalanced datasets: A case study with definition extraction in a multilingual setting. Natural Language Engineering, 20(3):327-359.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Applying dependency relations to definition extraction",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Processing and Information Systems, NLDB 2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa-Anke and Horacio Saggion. 2014. Applying dependency relations to definition extraction. In Natu- ral Language Processing and Information Systems, NLDB 2014, pages 63-74. Springer International Publishing Switzerland 2014, Montpellier, France, 06.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Syntactically aware neural architectures for definition extraction",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "378--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa-Anke and Steven Schockaert. 2018. Syntactically aware neural architectures for definition extrac- tion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 378-385.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Weakly supervised definition extraction",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "176--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa-Anke, Horacio Saggion, and Francesco Ronzano. 2015. Weakly supervised definition extraction. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 176- 185, Hissar, Bulgaria, September. INCOMA Ltd. Shoumen, BULGARIA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extasem! extending, taxonomizing and semantifying domain terminologies",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa-Anke, Horacio Saggion, Francesco Ronzano, and Roberto Navigli. 2016. Extasem! extending, taxonomizing and semantifying domain terminologies. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extending metadata definitions by automatically extracting and organizing glossary definitions",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Philpot",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Davis",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Annual National Conference on Digital Government Research, dg.o '03",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Andrew Philpot, Judith Klavans, Ulrich Germann, and Peter T. Davis. 2003. Extending metadata definitions by automatically extracting and organizing glossary definitions. In Proceedings of the 2003 Annual National Conference on Digital Government Research, dg.o '03, page 1. Digital Government Society of North America.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Mining scientific terms and their definitions: A study of the acl anthology",
"authors": [
{
"first": "Yiping",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Jun",
"middle": [
"Ping"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "780--790",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiping Jin, Min-Yen Kan, Jun Ping Ng, and Xiangnan He. 2013. Mining scientific terms and their definitions: A study of the acl anthology. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 780-790.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Contextualized representations using textual encyclopedic knowledge",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.12006"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Kenton Lee, Yi Luan, and Kristina Toutanova. 2020. Contextualized representations using textual encyclopedic knowledge. arXiv preprint arXiv:2004.12006.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Reserating the awesometastic: An automatic extension of the wordnet taxonomy for novel terms",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1459--1465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens and Mohammad Taher Pilehvar. 2015. Reserating the awesometastic: An automatic extension of the wordnet taxonomy for novel terms. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1459-1465.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A method for automatically building and evaluating dictionary resources",
"authors": [
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Klavans",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smaranda Muresan and Judith Klavans. 2002. A method for automatically building and evaluating dictionary resources. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02), Las Palmas, Canary Islands -Spain, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Extraction of semantic information from an ordinary english dictionary and its evaluation",
"authors": [
{
"first": "Jun-Ichi",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12th Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "459--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-ichi Nakamura and Makoto Nagao. 1988. Extraction of semantic information from an ordinary english dictionary and its evaluation. In Proceedings of the 12th Conference on Computational Linguistics -Volume 2, COLING '88, page 459-464, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning word-class lattices for definition and hypernym extraction",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1318--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Paola Velardi. 2010. Learning word-class lattices for definition and hypernym extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1318-1327, Uppsala, Sweden, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An annotated dataset for extracting definitions and hypernyms from the web",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "Juana",
"middle": [
"Maria"
],
"last": "Ruiz-Mart\u00ednez",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Paola Velardi, and Juana Maria Ruiz-Mart\u00ednez. 2010. An annotated dataset for extracting defini- tions and hypernyms from the web. In Proceedings of the Seventh International Conference on Language Re- sources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic glossary extraction: Beyond terminology identification",
"authors": [
{
"first": "Youngja",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Roy",
"middle": [
"J"
],
"last": "Byrd",
"suffix": ""
},
{
"first": "Branimir",
"middle": [
"K"
],
"last": "Boguraev",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Youngja Park, Roy J Byrd, and Branimir K Boguraev. 2002. Automatic glossary extraction: Beyond terminology identification. In Proceedings of the 19th International Conference on Computational Linguistics -Volume 1, COLING '02, page 1-7, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards the automatic extraction of definitions in slavic",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Przepi\u00f3rkowski",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Deg\u00f3rski",
"suffix": ""
},
{
"first": "Miroslav",
"middle": [],
"last": "Spousta",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Simov",
"suffix": ""
},
{
"first": "Petya",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "Lothar",
"middle": [],
"last": "Lemnitzer",
"suffix": ""
},
{
"first": "Vladislav",
"middle": [],
"last": "Kubon",
"suffix": ""
},
{
"first": "Beata",
"middle": [],
"last": "W\u00f3jtowicz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Workshop on Balto-Slavonic Natural Language Processing",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Przepi\u00f3rkowski, \u0141ukasz Deg\u00f3rski, Miroslav Spousta, Kiril Simov, Petya Osenova, Lothar Lemnitzer, Vladislav Kubon, and Beata W\u00f3jtowicz. 2007. Towards the automatic extraction of definitions in slavic. In Proceedings of the Workshop on Balto-Slavonic Natural Language Processing, pages 43-50.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mining on-line sources for definition knowledge",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 17th FLAIRS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacio Saggion and Rob Gaizauskas. 2004. Mining on-line sources for definition knowledge. In In Proceedings of the 17th FLAIRS 2004, Miami Bearch, 01.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Definitional verbal patterns for semantic relation extraction",
"authors": [
{
"first": "Gerardo",
"middle": [],
"last": "Sierra",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Alarc\u00f3n",
"suffix": ""
},
{
"first": "C\u00e9sar",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Carme",
"middle": [],
"last": "Bach",
"suffix": ""
}
],
"year": 2008,
"venue": "Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication",
"volume": "14",
"issue": "1",
"pages": "74--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerardo Sierra, Rodrigo Alarc\u00f3n, C\u00e9sar Aguilar, and Carme Bach. 2008. Definitional verbal patterns for se- mantic relation extraction. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 14(1):74-98.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DEFT: A corpus for definition extraction in free-and semi-structured text",
"authors": [
{
"first": "Sasha",
"middle": [],
"last": "Spala",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Dockhorn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "124--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sasha Spala, Nicholas A. Miller, Yiming Yang, Franck Dernoncourt, and Carl Dockhorn. 2019. DEFT: A corpus for definition extraction in free-and semi-structured text. In Proceedings of the 13th Linguistic Annotation Workshop, pages 124-131, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A joint model for definition extraction with syntactic connection and semantic consistency",
"authors": [
{
"first": "Amir Pouran Ben",
"middle": [],
"last": "Veyseh",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Thien Huu",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.01678"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou, and Thien Huu Nguyen. 2019. A joint model for definition extraction with syntactic connection and semantic consistency. arXiv preprint arXiv:1911.01678.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Extraction of dutch definitory contexts for elearning purpose",
"authors": [
{
"first": "Eline",
"middle": [],
"last": "Westerhout",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Monachesi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 17th Meeting of Computational Linguistics in the Netherlands (CLIN 2007)",
"volume": "",
"issue": "",
"pages": "219--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eline Westerhout and Paola Monachesi. 2007. Extraction of dutch definitory contexts for elearning purpose. In Peter Dirix, Ineke Schuurman, Vincent Vandeghinste, and Frank Van Eynde, editors, Proceedings of the 17th Meeting of Computational Linguistics in the Netherlands (CLIN 2007), pages 219-34. CLIN, Nijmegen, Netherlands.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Definition extraction using linguistic and structural features",
"authors": [
{
"first": "Eline",
"middle": [],
"last": "Westerhout",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 1st Workshop on Definition Extraction",
"volume": "",
"issue": "",
"pages": "61--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eline Westerhout. 2009. Definition extraction using linguistic and structural features. In Proceedings of the 1st Workshop on Definition Extraction, pages 61-67.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "(a) WCL dataset.(b) DEFT dataset.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Frequency of common verbs in definitions and non-definitions.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Average frequency of common hypernyms in definitions and non-definitions. (a) WCL dataset. (b) DEFT dataset.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Presence of chunk and PoS tags in definitions and non definitions.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Presence of structures of chunk tags in definitions and non-definitions.",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Features from the SVM model trained on WCL with highest \u03c7 2 .",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "(b), although their counts are significantly lower, matching the fact that they are related to the term defined.",
"num": null,
"content": "<table><tr><td colspan=\"2\">Verb Counts</td><td colspan=\"2\">Hypernym Counts</td></tr><tr><td>is</td><td>1405</td><td>instrument</td><td>28</td></tr><tr><td>was</td><td>114</td><td>person</td><td>22</td></tr><tr><td>are</td><td>58</td><td>plants</td><td>19</td></tr><tr><td>refers</td><td>58</td><td>device</td><td>14</td></tr><tr><td>were</td><td>35</td><td>mammal</td><td>12</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Number of instances, mean and median length for definitions and non-definitions from both WCL and DEFT datasets.",
"num": null,
"content": "<table/>"
}
}
}
}