{
"paper_id": "S13-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:33.153314Z"
},
"title": "DLS@CU-CORE: A Simple Machine Learning Model of Semantic Textual Similarity",
"authors": [
{
"first": "Md",
"middle": [],
"last": "Arafat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"postCode": "80309",
"settlement": "Boulder",
"region": "CO"
}
},
"email": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"postCode": "80309",
"settlement": "Boulder",
"region": "CO"
}
},
"email": "steven.bethard@colorado.edu"
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"postCode": "80309",
"settlement": "Boulder",
"region": "CO"
}
},
"email": "sumner@colorado.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a system submitted in the Semantic Textual Similarity (STS) task at the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013). Given two short text fragments, the goal of the system is to determine their semantic similarity. Our system makes use of three different measures of text similarity: word n-gram overlap, character n-gram overlap and semantic overlap. Using these measures as features, it trains a support vector regression model on SemEval STS 2012 data. This model is then applied on the STS 2013 data to compute textual similarities. Two different selections of training data result in very different performance levels: while a correlation of 0.4135 with gold standards was observed in the official evaluation (ranked 63 rd among all systems) for one selection, the other resulted in a correlation of 0.5352 (that would rank 21 st).",
"pdf_parse": {
"paper_id": "S13-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a system submitted in the Semantic Textual Similarity (STS) task at the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013). Given two short text fragments, the goal of the system is to determine their semantic similarity. Our system makes use of three different measures of text similarity: word n-gram overlap, character n-gram overlap and semantic overlap. Using these measures as features, it trains a support vector regression model on SemEval STS 2012 data. This model is then applied on the STS 2013 data to compute textual similarities. Two different selections of training data result in very different performance levels: while a correlation of 0.4135 with gold standards was observed in the official evaluation (ranked 63 rd among all systems) for one selection, the other resulted in a correlation of 0.5352 (that would rank 21 st).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatically identifying the semantic similarity between two short text fragments (e.g. sentences) is an important research problem having many important applications in natural language processing, information retrieval, and digital education. Examples include automatic text summarization, question answering, essay grading, among others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, despite having important applications, semantic similarity identification at the level of short text fragments is a relatively recent area of investigation. The problem was formally brought to attention and the first solutions were proposed in 2006 with the works reported in (Mihalcea et al., 2006) and (Li et al., 2006) . Work prior to these focused primarily on large documents (or individual words) (Mihalcea et al., 2006) . But the sentence-level granularity of the problem is characterized by factors like high specificity and low topicality of the expressed information, and potentially small lexical overlap even between very similar texts, asking for an approach different from those that were designed for larger texts.",
"cite_spans": [
{
"start": 285,
"end": 308,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 313,
"end": 330,
"text": "(Li et al., 2006)",
"ref_id": "BIBREF3"
},
{
"start": 412,
"end": 435,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since its inception, the problem has seen a large number of solutions in a relatively small amount of time. The central idea behind most solutions is the identification and alignment of semantically similar or related words across the two sentences, and the aggregation of these similarities to generate an overall similarity score (Mihalcea et al., 2006; Islam and Inkpen, 2008; \u0160ari\u0107 et al., 2012) . The Semantic Textual Similarity task (STS) organized as part of the Semantic Evaluation Exercises (see (Agirre et al., 2012) for a description of STS 2012) provides a common platform for evaluation of such systems via comparison with humanannotated similarity scores over a large dataset.",
"cite_spans": [
{
"start": 332,
"end": 355,
"text": "(Mihalcea et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 356,
"end": 379,
"text": "Islam and Inkpen, 2008;",
"ref_id": "BIBREF2"
},
{
"start": 380,
"end": 399,
"text": "\u0160ari\u0107 et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 505,
"end": 526,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a system which was submitted in STS 2013. Our system is based on very simple measures of lexical and character-level overlap, semantic overlap between the two sentences based on word relatedness measures, and surface features like the sentences' lengths. These measures are used as features for a support vector regression model that we train with annotated data from SemEval STS 2012. Finally, the trained model is applied on the STS 2013 test pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is inspired by the success of similar systems in STS 2012: systems that combine multiple measures of similarity using a machine learning model to generate an overall score (B\u00e4r et al., 2012; \u0160ari\u0107 et al., 2012) . We wanted to investigate how a minimal system of this kind, making use of very few external resources, performs on a large dataset. Our experiments reveal that the performance of such a system depends highly on the training data. While training on one dataset yielded a best correlation (among our three runs, described later in this document) of only 0.4135 with the gold scores, training on another dataset showed a considerably higher correlation of 0.5352.",
"cite_spans": [
{
"start": 185,
"end": 203,
"text": "(B\u00e4r et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 204,
"end": 223,
"text": "\u0160ari\u0107 et al., 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we present a high-level description of our system. More details on extraction of some of the measures of similarity are provided in Section 3. Given two input sentences 1 and 2 , our algorithm can be described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of Text Similarity: System Overview",
"sec_num": "2"
},
{
"text": "1. Compute semantic overlap (8 features): a. Lemmatize 1 and 2 using a memorybased lemmatizer 1 and remove all stop words. b. Compute the degree to which the concepts in 1 are covered by semantically similar concepts in 2 and vice versa (see Section 3 for details). The result of this step is two different 'degree of containment' values ( 1 in 2 and vice versa). c. Compute the minimum, maximum, arithmetic mean and harmonic mean of the two values to use as features in the machine learning model. d. Repeat steps 1a through 1c for a weighted version of semantic overlap where each word in the first sentence is assigned a weight which is proportional to its specificity in a selected corpus (see Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of Text Similarity: System Overview",
"sec_num": "2"
},
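Step 1c above can be sketched as follows; a minimal illustration in which the function and variable names are ours, not the paper's, and the two inputs are the 'degree of containment' values in [0, 1]:

```python
def aggregate_features(c12, c21):
    """Combine the containment of S1 in S2 (c12) and of S2 in S1 (c21)
    into the four features fed to the regression model:
    minimum, maximum, arithmetic mean and harmonic mean."""
    arithmetic = (c12 + c21) / 2.0
    # Harmonic mean; defined as 0 when both inputs are 0.
    harmonic = 2.0 * c12 * c21 / (c12 + c21) if (c12 + c21) > 0 else 0.0
    return [min(c12, c21), max(c12, c21), arithmetic, harmonic]
```

The same four-way aggregation recurs for every pair of directional measures in the system, which is why it is worth isolating as a helper.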
{
"text": "a. Extract -grams (for = 1, 2, 3, 4) of all words in 1 and 2 for four different setups characterized by the four different value combinations of the two following variables: lemmatization (on and off), stop-WordsRemoved (on and off). b. Compute the four measures (min, max, arithmetic and harmonic mean) for each value of n. 3. Compute character -gram overlap (16 features): a. Repeat all steps in 2 above for charactergrams ( = 2, 3, 4, 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compute word -gram overlap (16 features):",
"sec_num": "2."
},
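The n-gram containment computed in steps 2 and 3 can be sketched as follows; a simplified illustration with our own function names, where the same routine serves word n-grams (a list of tokens) and character n-grams (a string):

```python
def ngrams(items, n):
    """All contiguous n-grams of a sequence (words or characters)."""
    return [tuple(items[i:i + n]) for i in range(len(items) - n + 1)]

def ngram_containment(a, b, n):
    """Fraction of the n-grams of `a` that also occur in `b`."""
    grams_a, grams_b = ngrams(a, n), set(ngrams(b, n))
    if not grams_a:
        return 0.0
    return sum(1 for g in grams_a if g in grams_b) / len(grams_a)

s1 = "the cat sat on the mat".split()
s2 = "a cat sat on a mat".split()
overlap = ngram_containment(s1, s2, 2)  # → 0.4 (2 of 5 bigrams of s1 occur in s2)
```

Computing the measure in both directions and aggregating (min, max, arithmetic and harmonic mean) yields the four features per n described above.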
{
"text": "a. Compute the lengths of 1 and 2 ; and the minimum and maximum of the two values. b. Include the ratio of the maximum to the minimum and the difference between the maximum and minimum in the feature set. 5. Train a support vector regression model on the features extracted in steps 1 through 4 above using data from SemEval 2012 STS (see Section 4 for specifics on the dataset). We used the LibSVM implementation of SVR in WEKA. 6. Apply the model on STS 2013 test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compute sentence length features (2 features):",
"sec_num": "4."
},
{
"text": "In this section, we describe the computation of the two sets of semantic overlap measures mentioned in step 1 of the algorithm in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "We compute semantic overlap between two sentences by first computing the semantic relatedness among their constituent words. Automatically computing the semantic relatedness between words is a well-studied problem and many solutions to the problem have been proposed. We compute word relatedness in two forms: semantic relatedness and string similarity. For semantic relatedness, we utilize two web services. The first one concerns a resource named ConceptNet (Liu and Singh, 2004) , which holds a large amount of common sense knowledge concerning relationships between realworld entities. It provides a web service 2 that generates word relatedness scores based on these relationships. We will use the term ( 1 , 2 ) to denote the relatedness of the two words 1 and 2 as generated by ConceptNet.",
"cite_spans": [
{
"start": 460,
"end": 481,
"text": "(Liu and Singh, 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "We also used the web service 3 provided by another resource named Wikipedia Miner (Milne and Witten, 2013) . While ConceptNet successfully captures common sense knowledge about words and concepts, Wikipedia Miner specializes in identifying relationships between scientific concepts powered by Wikipedia's vast repository of scientific information (for example, Einstein and relativity). We will use the term ( 1 , 2 ) to denote the relatedness of the two words 1 and 2 as generated by Wikipedia Miner. Using two systems enabled us to increase the coverage of our word similarity computation algorithm.",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Milne and Witten, 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "Each of these web services return a score in the range [0, 1] where 0 represents no relatedness and 1 represents complete similarity. A manual inspection of both services indicates that in almost all cases where the services' word similarity scores deviate from what would be the human-perceived similarity, they generate lower scores (i.e. lower than the human-perceived score). This is why we take the maximum of the two services' similarity scores for any given word pair as their semantic relatedness:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "( 1 , 2 ) = max { ( 1 , 2 ), ( 1 , 2 )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "We also compute the string similarity between the two words by taking a weighted combination of the normalized lengths of their longest common substring, subsequence and prefix (normalization is done for each of the three by dividing its length with the length of the smaller word). We will refer to the string similarity between words 1 and 2 as ( 1 , 2 ). This idea is taken from (Islam and Inkpen, 2008) ; the rationale is to be able to find the similarity between (1) words that have the same lemma but the lemmatizer failed to lemmatize at least one of the two surface forms successfully, and (2) words at least one of which has been misspelled. We take the maximum of the string similarity and the semantic relatedness between two words as the final measure of their similarity:",
"cite_spans": [
{
"start": 382,
"end": 406,
"text": "(Islam and Inkpen, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "( 1 , 2 ) = max { ( 1 , 2 ), ( 1 , 2 )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "At the sentence level, our first set of semantic overlap measures (step 1b) is an unweighted measure that treats all content words equally. More specifically, after the preprocessing in step 1a of the algorithm, we compute the degree of semantic coverage of concepts expressed by individual content words in 1 by 2 using the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "( 1 , 2 ) = \u2211 [max \u2208 2 { ( , )}] \u2208 1 | 1 |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "where ( , ) is the similarity between the two lemmas and .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
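The unweighted coverage measure just described can be sketched directly; in this illustration a toy exact-match function stands in for the word-similarity function (the names are ours):

```python
def semantic_coverage(s1, s2, sim):
    """Degree to which the content words of s1 are covered by
    semantically similar words in s2: for each word of s1, take its
    best similarity against all words of s2, then average over s1."""
    if not s1 or not s2:
        return 0.0
    return sum(max(sim(w, w2) for w2 in s2) for w in s1) / len(s1)

def exact(a, b):
    """Toy stand-in for the word-similarity function."""
    return 1.0 if a == b else 0.0

score = semantic_coverage(["dog", "bark"], ["dog", "howl"], exact)  # → 0.5
```

Computing the measure in both directions (S_1 against S_2 and vice versa) yields the two 'degree of containment' values of step 1b.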
{
"text": "We also compute a weighted version of semantic coverage (step 1d in the algorithm) by incorporating the specificity of each word (measured by its information content) as shown in the equation below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "( 1 , 2 ) = \u2211 [max \u2208 2 { ( ). ( , )}] \u2208 1 | 1 |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "where ( ) stands for the information content of the word . Less common words (across a selected corpus) have high information content:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "( ) = ln \u2211 ( \u2032 ) \u2032 \u2208 ( )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
{
"text": "where C is the set of all words in the chosen corpus and f(w) is the frequency of the word w in the corpus. We have used the Google Unigram Corpus 4 to assign the required frequencies to these words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Overlap Measures",
"sec_num": "3"
},
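The information content computation (the log of the ratio of total corpus frequency to the word's own frequency) can be sketched as follows; the tiny frequency table is a made-up illustration, not Google Unigram data:

```python
import math
from collections import Counter

def information_content(word, freqs):
    """ic(w) = ln(total corpus frequency / f(w)); rarer words score
    higher. `freqs` maps each corpus word to its frequency; the word
    is assumed to be present in the table."""
    total = sum(freqs.values())
    return math.log(total / freqs[word])

# Toy frequency table for illustration.
freqs = Counter({"the": 90, "cat": 9, "zymurgy": 1})
```

With this table, information_content("zymurgy", freqs) exceeds that of "cat", which in turn exceeds that of "the", matching the intuition that specific words carry more information.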
{
"text": "The STS 2013 test data consists of four datasets: two datasets consisting of gloss pairs (OnWN: 561 pairs and FNWN: 189 pairs), a dataset of machine translation evaluation pairs (SMT: 750 pairs) and a dataset consisting of news headlines (headlines: 750 pairs). For each dataset, the output of a system is evaluated via comparison with human-annotated similarity scores and measured using the Pearson Correlation Coefficient. Then a weighted sum of the correlations for all datasets are taken to be the final score, where each dataset's weight is the proportion of sentence pairs in that dataset. We computed the similarity scores using three different feature sets (for our three runs) for the support vector regression model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
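The overall score computation just described (per-dataset Pearson correlations weighted by each dataset's share of the sentence pairs) can be sketched as follows; the correlations passed in below are placeholders for illustration, not real results:

```python
def weighted_score(correlations, sizes):
    """Overall STS score: each dataset's Pearson correlation weighted
    by its proportion of the total number of sentence pairs."""
    total = sum(sizes)
    return sum(r * n / total for r, n in zip(correlations, sizes))

# Dataset sizes from the text: OnWN 561, FNWN 189, SMT 750, headlines 750.
overall = weighted_score([0.5, 0.4, 0.3, 0.6], [561, 189, 750, 750])
```

Note that the two large datasets (SMT and headlines, 750 pairs each) dominate the final score under this weighting.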
{
"text": "1. All features mentioned in Section 2. This set of features were used in our run 1. 2. All features except word -gram overlap (experiments on STS 2012 test data revealed that using word n-grams actually lowers the performance of our model, hence this decision). These are the features that were used in our run 2. 3. Only character -gram and length features (just to test the performance of the model without any semantic features). Our run 3 was based on these features. We trained the support vector regression model on two different training datasets, both drawn from STS 2012 data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "1. In the first setup, we chose the training datasets from STS 2012 that we considered the most similar to the test dataset. The only exception was the FNWN dataset, for which we selected the all the datasets from 2012 because no single dataset from STS 2012 seemed to have similarity with this dataset. For the OnWN test dataset, we selected the OnWN dataset from STS 2012. For both headlines and SMT, we selected SMTnews and SMTeuroparl from STS 2012. The rationale behind this selection was to train the machine learning model on a distribution similar to the test data. 2. In the second setup, we aggregated all datasets (train and test) from STS 2012 and used this combined dataset to train the three models that were later applied on each STS 2013 test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Here the rationale is to train on as much data as possible. Table 1 shows the results for the first setup. This is the performance of the set of scores which we actually submitted in STS 2013. The first four columns show the correlations of our system with the gold standard for all runs. The rightmost column shows the overall weighted correlations. As we can see, run 1 with all the features demonstrated the best performance among the three runs. There was a considerable drop in performance in run 3 which did not utilize any semantic similarity measure. As evident from the table, evaluation results did not indicate a particularly promising system. Our best system ranked 63 rd among the 90 systems evaluated in STS 2013. We further investigated to find out the reason: is the set of our features insufficient to capture text semantic similarity, or were the training data inappropriate for their corresponding test data? This is why we experimented with the second setup discussed above. Following are the results: As we can see in Table 2 , the correlations for all feature sets improved by more than 10% for each run. In this case, the best system with correlation 0.5352 would rank 21 st among all systems in STS 2013. These results indicate that the primary reason behind the system's previous bad performance (Table 1) was the selection of an inappropriate dataset. Although it was not clear in the beginning which of the two options would be the better, this second experiment reveals that selecting the largest possible dataset to train is the better choice for this dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1039,
"end": 1046,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "In this paper, we have shown how simple measures of text similarity using minimal external resources can be used in a machine learning setup to compute semantic similarity between short text fragments. One important finding is that more training data, even when drawn from annotations on different sources of text and thus potentially having different feature value distributions, improve the accuracy of the model in the task. Possible future expansion includes use of more robust concept alignment strategies using semantic role labeling, inclusion of structural similarities of the sentences (e.g. word order, syntax) in the feature set, incorporating word sense disambiguation and more robust strategies of concept weighting into the process, among others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://www.clips.ua.ac.be/pages/MBSP#lemmatizer 2 http://conceptnet5.media.mit.edu/data/5.1/assoc/c/en/cat? filter=/c/en/dog&limit=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://wikipedia-miner.cms.waikato.ac.nz/services/compare? term1=cat&term2=dog",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://googleresearch.blogspot.com/2006/08/all-our-ngram-are-belong-to-you.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2012 Task 6: a pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics. ACL",
"volume": "",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonza- lez-Agirre. 2012. SemEval-2012 Task 6: a pilot on se- mantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Se- mantics. ACL, Stroudsburg, PA, USA, 385-393.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "UKP: computing semantic textual similarity by combining multiple content similarity measures",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "B\u00e4r",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics. ACL",
"volume": "",
"issue": "",
"pages": "435--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: computing semantic textual simi- larity by combining multiple content similarity measures. In Proceedings of the First Joint Confer- ence on Lexical and Computational Semantics. ACL, Stroudsburg, PA, USA, 435-440.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic text similarity using corpus-based word similarity and string similarity",
"authors": [
{
"first": "Aminul",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Trans. Knowl. Discov. Data",
"volume": "2",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aminul Islam and Diana Inkpen. 2008. Semantic text similarity using corpus-based word similarity and string similarity. ACM Trans. Knowl. Discov. Data 2, 2, Article 10 (July 2008), 25 pages.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sentence similarity based on semantic nets and corpus statistics",
"authors": [
{
"first": "Yuhua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mclean",
"suffix": ""
},
{
"first": "Zuhair",
"middle": [
"A"
],
"last": "Bandar",
"suffix": ""
},
{
"first": "James",
"middle": [
"D"
],
"last": "O'shea",
"suffix": ""
},
{
"first": "Keeley",
"middle": [],
"last": "Crockett",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "18",
"issue": "8",
"pages": "1138--1150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhua Li, David Mclean, Zuhair A. Bandar, James D. O'Shea, and Keeley Crockett. 2006. Sentence similar- ity based on semantic nets and corpus statistics. IEEE Transactions on Knowledge and Data Engineering, vol.18, no.8, 1138-1150.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "ConceptNet -a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT Technology Journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Liu and Push Singh. 2004. ConceptNet -a prac- tical commonsense reasoning tool-kit. BT Technology Journal 22, 4 (October 2004), 211-226.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Corpus-based and knowledge-based measures of text semantic similarity",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st national conference on Artificial intelligence",
"volume": "1",
"issue": "",
"pages": "775--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st national conference on Artificial intelligence -Volume 1 (AAAI'06), Anthony Cohn (Ed.), Vol. 1. AAAI Press 775-780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An open-source toolkit for mining Wikipedia",
"authors": [
{
"first": "David",
"middle": [],
"last": "Milne",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2013,
"venue": "Artif. Intell",
"volume": "194",
"issue": "",
"pages": "222--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Milne and Ian H. Witten. 2013. An open-source toolkit for mining Wikipedia. Artif. Intell. 194 (Janu- ary 2013), 222-239.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "TakeLab: systems for measuring semantic text similarity",
"authors": [
{
"first": "Frane",
"middle": [],
"last": "\u0160ari\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
},
{
"first": "Bojana Dalbelo",
"middle": [],
"last": "Ba\u0161i\u0107",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics. ACL",
"volume": "",
"issue": "",
"pages": "441--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frane \u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan \u0160najder, and Bojana Dalbelo Ba\u0161i\u0107.\u0160ari\u0107. 2012. TakeLab: sys- tems for measuring semantic text similarity. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics. ACL, Stroudsburg, PA, USA, 441-448.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>1</td><td>.4921</td><td>.3769 .4647 .3492 .4135</td></tr><tr><td>2</td><td>.4669</td><td>.4165 .3859 .3411 .4056</td></tr><tr><td>3</td><td>.3867</td><td>.2386 .3726 .3337 .3309</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": ""
},
"TABREF1": {
"content": "<table><tr><td>1</td><td>.6854</td><td>.5981 .4647 .3518 .5339</td></tr><tr><td>2</td><td>.7141</td><td>.5953 .3859 .349 .5352</td></tr><tr><td>3</td><td>.6998</td><td>.4826 .3726 .3365 .4971</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Results for combined training data"
}
}
}
}