{
"paper_id": "S13-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:02.732555Z"
},
"title": "ECNUCS: Measuring Short Text Semantic Equivalence Using Multiple Similarity Measurements",
"authors": [
{
"first": "Tian",
"middle": [
"Tian"
],
"last": "Zhu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": "",
"affiliation": {},
"email": "mlan@cs.ecnu.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports our submissions to the Semantic Textual Similarity (STS) task in * SEM Shared Task 2013. We submitted three Support Vector Regression (SVR) systems in core task, using 6 types of similarity measures, i.e., string similarity, number similarity, knowledge-based similarity, corpus-based similarity, syntactic dependency similarity and machine translation similarity. Our third system with different training data and different feature sets for each test data set performs the best and ranks 35 out of 90 runs. We also submitted two systems in typed task using string based measure and Named Entity based measure. Our best system ranks 5 out of 15 runs.",
"pdf_parse": {
"paper_id": "S13-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports our submissions to the Semantic Textual Similarity (STS) task in * SEM Shared Task 2013. We submitted three Support Vector Regression (SVR) systems in core task, using 6 types of similarity measures, i.e., string similarity, number similarity, knowledge-based similarity, corpus-based similarity, syntactic dependency similarity and machine translation similarity. Our third system with different training data and different feature sets for each test data set performs the best and ranks 35 out of 90 runs. We also submitted two systems in typed task using string based measure and Named Entity based measure. Our best system ranks 5 out of 15 runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of semantic textual similarity (STS) is to measure the degree of semantic equivalence between two sentences, which plays an increasingly important role in natural language processing (NLP) applications. For example, in text categorization (Yang and Wen, 2007) , two documents which are more similar are more likely to be grouped in the same class. In information retrieval (Sahami and Heilman, 2006 ), text similarity improves the effectiveness of a semantic search engine by providing information which holds high similarity with the input query. In machine translation (Kauchak and Barzilay, 2006) , sentence similarity can be applied to automatically evaluate the output translation against the reference translations. In question answering (Mohler and Mihalcea, 2009) , once the question and the candidate answers are treated as two texts, the answer text with higher relevance to the question text has a higher probability of being the right one.",
"cite_spans": [
{
"start": 248,
"end": 268,
"text": "(Yang and Wen, 2007)",
"ref_id": "BIBREF17"
},
{
"start": 382,
"end": 407,
"text": "(Sahami and Heilman, 2006",
"ref_id": "BIBREF13"
},
{
"start": 580,
"end": 608,
"text": "(Kauchak and Barzilay, 2006)",
"ref_id": "BIBREF6"
},
{
"start": 751,
"end": 778,
"text": "(Mohler and Mihalcea, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The STS task in * SEM Shared Task 2013 consists of two subtasks, i.e., core task and typed task, and we participate in both of them. The core task aims to measure the semantic similarity of two sentences, resulting in a similarity score which ranges from 5 (semantic equivalence) to 0 (no relation). The typed task is a pilot task on typed-similarity between semistructured records. The types of similarity to be measured include location, author, people involved, time, events or actions, subject and description as well as the general similarity of two texts (Agirre et al., 2013) .",
"cite_spans": [
{
"start": 561,
"end": 582,
"text": "(Agirre et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we present a Support Vector Regression (SVR) system to measure sentence semantic similarity by integrating multiple measurements, i.e., string similarity, knowledge based similarity, corpus based similarity, number similarity and machine translation metrics. Most of these similarities are borrowed from previous work, e.g., (B\u00e4r et al., 2012) , (\u0160aric et al., 2012) and (de Souza et al., 2012) . We also propose a novel syntactic dependency similarity. Our best system ranks 35 out of 90 runs in core task and ranks 5 out of 15 runs in typed task.",
"cite_spans": [
{
"start": 338,
"end": 356,
"text": "(B\u00e4r et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 359,
"end": 379,
"text": "(\u0160aric et al., 2012)",
"ref_id": null
},
{
"start": 384,
"end": 407,
"text": "(de Souza et al., 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 describes the similarity measurements used in this work in detail. Section 3 presents experiments and the results of two tasks. Conclusions and future work are given in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To compute semantic textual similarity, previous work has adopted multiple semantic similarity measurements. In this work, we adopt 6 types of measures, i.e., string similarity, number similarity, knowledge-based similarity, corpus-based similarity, syntactic dependency similarity and machine translation similarity. Most of them are borrowed from previous work due to their reported superior performance. In addition, we propose two novel syntactic dependency similarity measures. In total we obtain 33 similarity measures. These similarity measures are represented as numerical values and combined using a regression model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Similarity Measurements",
"sec_num": "2"
},
{
"text": "Generally, we perform text preprocessing before computing each text similarity measurement. Firstly, the Stanford parser 1 is used for sentence tokenization and parsing. Specifically, the tokens n't and 'm are replaced with not and am. Secondly, the Stanford POS Tagger 2 is used for POS tagging. Thirdly, the Natural Language Toolkit 3 is used for WordNet-based lemmatization, which lemmatizes a word to its nearest base form that appears in WordNet; for example, was is lemmatized as is, not be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.1"
},
{
"text": "Given two short texts or sentences s 1 and s 2 , we denote their word sets as S 1 and S 2 , and their lengths (i.e., numbers of words) as |S 1 | and |S 2 |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.1"
},
{
"text": "Intuitively, if two sentences share more strings, they are considered to have higher semantic similarity. Therefore, we create 12 string based features in consideration of the common sequences shared by two texts. Longest Common sequence (LCS). The widely used LCS is proposed by (Allison and Dix, 1986) , which finds the maximum length of a common subsequence of two strings; here the subsequence needs to be contiguous. In consideration of the different lengths of the two texts, we compute LCS similarity using Formula (1) as follows:",
"cite_spans": [
{
"start": 279,
"end": 302,
"text": "(Allison and Dix, 1986)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim_LCS = (Length of LCS) / min(|S 1 |, |S 2 |)",
"eq_num": "(1)"
}
],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "In order to eliminate the impact of word form variation, we also compute a Lemma LCS similarity score after the sentences are lemmatized. Word n-grams. Following (Lyon et al., 2001) , we calculate the word n-grams similarity using the Jaccard coefficient as shown in Formula (2), where p is the number of n-grams shared by s 1 and s 2 , and q and r are the numbers of n-grams not shared by s 1 and s 2 , respectively.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Lyon et al., 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Jacc = p / (p + q + r)",
"eq_num": "(2)"
}
],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "Since we focus on short texts, only n = 1, 2, 3, 4 are used in this work. As with LCS, we also compute a Lemma n-grams similarity score. Weighted Word Overlap (WWO). (Šaric et al., 2012) pointed out that when measuring sentence similarity, different words may convey different amounts of content information. Therefore, we assign more importance to words bearing more content information. To measure the importance of each word, we use Formula (3) to calculate the information content of each word w:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ic(w) = ln( ∑_{w′∈C} freq(w′) / freq(w) )",
"eq_num": "(3)"
}
],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "where C is the set of words in the corpus and freq(w) is the frequency of the word w in the corpus. To compute ic(w), we use the Web 1T 5-gram Corpus 4 , which is generated from approximately one trillion word tokens of text from Web pages. Obviously, the WWO score between two sentences is asymmetric. The WWO of s 2 by s 1 is given by Formula (4):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim_wwo(s 1 , s 2 ) = ∑_{w∈S 1 ∩S 2 } ic(w) / ∑_{w′∈S 2 } ic(w′)",
"eq_num": "(4)"
}
],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "Likewise, we can get Sim wwo (s 2 , s 1 ) score. Then the final WWO score is the harmonic mean of Sim wwo (s 1 , s 2 ) and Sim wwo (s 2 , s 1 ). Similarly, we get a Lemma WWO score as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String Similarity",
"sec_num": "2.2"
},
{
"text": "Knowledge based similarity approaches rely on a semantic network of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "In this work all knowledge-based word similarity measures are computed based on WordNet. For word similarity, we employ four WordNet-based similarity metrics: the Path similarity (Banea et al., 2012) ; the WUP similarity (Wu and Palmer, 1994) ; the LCH similarity (Leacock and Chodorow, 1998) ; the Lin similarity (Lin, 1998) . We adopt the NLTK library (Bird, 2006) to compute all these word similarities.",
"cite_spans": [
{
"start": 179,
"end": 199,
"text": "(Banea et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 221,
"end": 242,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF16"
},
{
"start": 264,
"end": 292,
"text": "(Leacock and Chodorow, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 314,
"end": 325,
"text": "(Lin, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 354,
"end": 366,
"text": "(Bird, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "In order to determine the similarity of sentences, we employ two strategies to convert the word similarity into sentence similarity, i.e., (1) the best alignment strategy (align) (Banea et al., 2012) and (2) the aggregation strategy (agg) (Mihalcea et al., 2006) .",
"cite_spans": [
{
"start": 179,
"end": 199,
"text": "(Banea et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 236,
"end": 259,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "The best alignment strategy is computed as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "Sim_align(s 1 , s 2 ) = (ω + ∑_{i=1}^{|φ|} φ_i ) * 2|S 1 ||S 2 | / (|S 1 | + |S 2 |) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "where ω is the number of shared terms between s 1 and s 2 , the list φ contains the similarity scores of the non-shared words of the shorter text, and φ i is the highest similarity score between the ith such word and all words of the longer text. The aggregation strategy is calculated as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim_agg(s 1 , s 2 ) = ∑_{w∈S 1 } (maxSim(w, S 2 ) * ic(w)) / ∑_{w∈S 1 } ic(w)",
"eq_num": "(6)"
}
],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "where maxSim(w, S 2 ) is the highest WordNetbased score between word w and all words of sentence S 2 . To compute ic(w), we use the same corpus as WWO, i.e., the Web 1T 5-gram Corpus. The final score of the aggregation strategy is the mean of Sim agg (s 1 , s 2 ) and Sim agg (s 2 , s 1 ). Finally we get 8 knowledge based features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Based Similarity",
"sec_num": "2.3"
},
{
"text": "Latent Semantic Analysis (LSA) (Landauer et al., 1997) . In LSA, term-context associations are captured by means of a dimensionality reduction operation performing singular value decomposition (SVD) on the term-by-context matrix T , where T is induced from a large corpus. We use the TASA corpus 5 to obtain the matrix and compute the word similarity using cosine similarity of the two vectors of the words. After that we transform word similarity to sentence similarity based on Formula (5). Co-occurrence Retrieval Model (CRM) (Weeds, 2003) . CRM is based on a notion of substitutability. That is, the more appropriate it is to substitute word w 1 in place of word w 2 in a suitable natural language task, the more semantically similar they are. The degree of substitutability of w 2 with w 1 is dependent on the proportion of co-occurrences of w 1 that are also the co-occurrences of w 2 , and the proportion of co-occurrences of w 2 that are also the co-occurrences of w 1 . Following (Weeds, 2003) , the CRM word similarity is computed using Formula 7:",
"cite_spans": [
{
"start": 31,
"end": 54,
"text": "(Landauer et al., 1997)",
"ref_id": "BIBREF7"
},
{
"start": 529,
"end": 542,
"text": "(Weeds, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 989,
"end": 1002,
"text": "(Weeds, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Based Similarity",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim_CRM(w 1 , w 2 ) = 2 * |c(w 1 ) ∩ c(w 2 )| / (|c(w 1 )| + |c(w 2 )|)",
"eq_num": "(7)"
}
],
"section": "Corpus Based Similarity",
"sec_num": "2.4"
},
{
"text": "where c(w) is the set of words that co-occur with w. We use the 5-gram part of the Web 1T 5-gram Corpus to obtain c(w): if two words appear in one 5-gram, each is treated as a co-occurring word of the other. We propose two methods to construct c(w). In the first CRM similarity, we only consider words w with |c(w)| > 200, and take the top 200 co-occurring words ranked by co-occurrence frequency as c(w). To relax this restriction, we also present an extended CRM (denoted ExCRM), which considers all w with |c(w)| > 50, with the maximum size of c(w) still set to 200. Finally, these two CRM word similarity measures are transformed to sentence similarity using Formula (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Based Similarity",
"sec_num": "2.4"
},
{
"text": "As (Šaric et al., 2012) pointed out, dependency relations of sentences often contain semantic information, so in this work we propose two novel syntactic dependency similarity features to capture their possible semantic similarity. Simple Dependency Overlap. First we measure the simple dependency overlap between two sentences based on matching dependency relations. The Stanford Parser provides 53 dependency relations, for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Similarity",
"sec_num": "2.5"
},
{
"text": "nsubj(remain-16, leader-4), dobj(return-10, home-11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Similarity",
"sec_num": "2.5"
},
{
"text": "where nsubj (nominal subject) and dobj (direct object) are two dependency types, remain is the governing lemma and leader is the dependent lemma. Two syntactic dependencies are considered equal when they have the same dependency type, governing lemma, and dependent lemma. Let R 1 and R 2 be the sets of all dependency relations in s 1 and s 2 ; we compute Simple Dependency Overlap using Formula 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Similarity",
"sec_num": "2.5"
},
{
"text": "Sim_SimDep(s 1 , s 2 ) = 2 * |R 1 ∩ R 2 | * |R 1 ||R 2 | / (|R 1 | + |R 2 |) (8) Special Dependency Overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Similarity",
"sec_num": "2.5"
},
{
"text": "Several types of dependency relations are believed to carry the primary content of a sentence, so we extract three roles from these special dependency relations, i.e., predicate, subject and object. For example, from the above dependency relation dobj, we can extract the object of the sentence, i.e., home. For each of these three roles, we compute a similarity score. For example, to calculate Sim predicate , we denote the sets of predicates of the two sentences as S p1 and S p2 , first use LCH to compute word similarity, and then compute sentence similarity using Formula (5). Sim subj and Sim obj are obtained in the same way. In the end we average the similarity scores of the three roles as the final Special Dependency Overlap score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Similarity",
"sec_num": "2.5"
},
{
"text": "Numbers in a sentence occasionally carry similarity information: if two sentences contain different sets of numbers, they may be given a low similarity score even though their sentence structures are quite similar. Here we adopt two features following (Šaric et al., 2012), which are computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number Similarity",
"sec_num": "2.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log(1 + |N 1 | + |N 2 |) (9); 2 * |N 1 ∩ N 2 | / (|N 1 | + |N 2 |)",
"eq_num": "(10)"
}
],
"section": "Number Similarity",
"sec_num": "2.6"
},
{
"text": "where N 1 and N 2 are the sets of all numbers in s 1 and s 2 . We extract the number information from sentences by checking if the POS tag is CD (cardinal number).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number Similarity",
"sec_num": "2.6"
},
{
"text": "Machine translation (MT) evaluation metrics are designed to assess whether the output of an MT system is semantically equivalent to a set of reference translations. The two given sentences can be viewed as the input and the output of an MT system, so MT measures can be used to measure their semantic similarity. We use the following 6 lexical level metrics (de Souza et al., 2012): WER, TER, PER, NIST, ROUGE-L, GTM-1. All these measures are obtained using the Asiya Open Toolkit for Automatic Machine Translation (Meta-)Evaluation 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Similarity",
"sec_num": "2.7"
},
{
"text": "3 Experiment and Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Similarity",
"sec_num": "2.7"
},
{
"text": "We adopt LIBSVM 7 to build a Support Vector Regression (SVR) model. To obtain the optimal SVR parameters C, g, and p, we employ a grid search with 10-fold cross validation on the training data. If the score returned by the regression model is greater than 5 or less than 0, we clip it to 5 or 0, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regression Model",
"sec_num": "3.1"
},
{
"text": "The organizers provided four different test sets to evaluate the performance of the submitted systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Task",
"sec_num": "3.2"
},
{
"text": "We have submitted three systems for core task, i.e., Table 2 shows the best feature sets used for each test data set, where \"+\" means the feature is selected and \"-\" means it is not. We did not use the whole feature set because in our preliminary experiments some features did not perform well on some training data sets, and even reduced the performance of our system. To select features, we trained two SVR models for each feature, one with all features and one with all features except this feature; if the first model outperformed the second, the feature was chosen. Besides, the results on the four test data sets are quite different. Headline always gets the best result on each run and OnWN comes second, while the results on FNWN and SMT are much lower. One reason for the poor performance on FNWN may be the large length difference within its sentence pairs. That is, the sentence from WordNet is short while the sentence from FrameNet is much longer, and some samples even contain more than one sentence (e.g. \"doing as one pleases or chooses\" VS \"there exist a number of different possible events that may happen in the future in most cases, there is an agent involved who has to consider which of the possible events will or should occur a salient entity which is deeply involved in the event may also be mentioned\"). As a result, even though the two sentences are similar in meaning, most of our measures give low scores due to the quite different sentence lengths.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Core Task",
"sec_num": "3.2"
},
{
"text": "In order to understand the contribution of each similarity measurement, we trained 6 SVR regression models, one per measurement type, on the MSRpar data set. Table 4 presents the Pearson's correlation scores of the 6 types of measurements on MSRpar. We can see that the corpus-based measure achieves the best result, followed by the knowledge-based measure and the MT measure. Number similarity performs surprisingly well, which benefits from the property of this data set that MSRpar contains many numbers in its sentences and sentence similarity depends heavily on them. The string similarity is not as good as the knowledge-based, corpus-based and MT similarities because of its inability to capture the semantic characteristics of a sentence. Surprisingly, the syntactic dependency similarity performs the worst. Since we only extract two features based on sentence dependencies, they may not be enough to capture the key semantic similarity information in the sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Core Task",
"sec_num": "3.2"
},
{
"text": "For typed task, we also adopt a SVR model for each type. Since several similarity measures used for core task are not suitable for evaluating the similarity of people involved, time period, location and event or action involved, we add two Named Entity Recognition (NER) based features. Firstly we use Stanford NER 8 to obtain person, location and date information from the whole text with NER tags of \"PERSON\", \"LOCATION\" and \"DATE\". Then for each list of entities, we compute two feature values using the following two formulas, where L1 NER and L2 NER denote the two NER lists and N um(equalpairs) is the number of equal pairs. Here we expand the condition of equivalence: two NERs are considered equal if one is part of the other (e.g. \"John Warson\" VS \"Warson\"). The features and content we used for each similarity are presented in Table 5 . For the three similarities people involved, time period and location, we compute the two NER based features with the corresponding NER type (\"PERSON\", \"DATE\" or \"LOCATION\"). For event or action involved, we add the above 6 NER feature scores as its feature set. The NER based similarity used in description is the same as for event or action involved but is based only on the \"dcDescription\" part of the text. Besides, we add a length feature in description, which is the ratio of the shorter and longer lengths of the two descriptions. We have submitted two runs. Run 1 uses only string based and NER based features. Besides the features used in Run 1, Run 2 adds knowledge based features. Table 6 shows the performance of our two runs as well as the baseline and the best results on STS typed task in * SEM Shared Task 2013. Our Run 1 ranks 5 and Run 2 ranks 7 out of 15 runs. Run 2 performed worse than Run 1 and the possible reason may be that the knowledge based method is not suitable for this kind of data. Furthermore, since we only use NER based features, which involve three entity types, for these similarities, they are not enough to capture the relevant information for the other types.",
"cite_spans": [],
"ref_spans": [
{
"start": 800,
"end": 807,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 1487,
"end": 1494,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Typed Task",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim_NERNum(L1 NER , L2 NER ) = min(|L1 NER |, |L2 NER |) / max(|L1 NER |, |L2 NER |)",
"eq_num": "(11)"
}
],
"section": "Typed Task",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim_NER(L1 NER , L2 NER ) = Num(equalpairs) / (|L1 NER | * |L2 NER |)",
"eq_num": "(12)"
}
],
"section": "Typed Task",
"sec_num": "3.3"
},
{
"text": "In this paper we described our submissions to the Semantic Textual Similarity Task in * SEM Shared Task 2013. For the core task, we collected 6 types of similarity measures, i.e., string similarity, number similarity, knowledge-based similarity, corpus-based similarity, syntactic dependency similarity and machine translation similarity. Our Run 3, with different training data and different feature sets for each test data set, ranks 35 out of 90 runs. For the typed task, we adopted string based, NER based and knowledge based measures, and our best system ranks 5 out of 15 runs. Clearly, these similarity measures are not yet sufficient. For the core task, in future work we will also consider measures that evaluate the differences between sentences. For the typed task, with the help of more advanced IE tools to extract more information regarding the different types, we plan to propose more methods to evaluate the similarity. Table 6 : Final results on STS typed task",
"cite_spans": [],
"ref_spans": [
{
"start": 930,
"end": 937,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://nlp.stanford.edu/software/lex-parser.shtml 2 http://nlp.stanford.edu/software/tagger.shtml 3 http://nltk.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.ldc.upenn.edu/Catalog/docs/LDC2006T13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lsa.colorado.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.lsi.upc.edu/asiya/ 7 http://www.csie.ntu.edu.tw/~cjlin/libsvm/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.stanford.edu/software/CRF-NER.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the organizers and reviewers for this interesting task and their helpful suggestions and comments, which improved the final version of this paper. This research is supported by grants from National Natural Science Foundation of China (No.60903093), Shanghai Pujiang Talent Program (No.09PJ1404500), Doctoral Fund of Ministry of Education of China (No.20090076120029) and Shanghai Knowledge Service Platform Project (No.ZF1213).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "*sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2013,
"venue": "*SEM 2013: The Second Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A bit-string longest-common-subsequence algorithm",
"authors": [
{
"first": "Lloyd",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "Trevor",
"middle": [
"I"
],
"last": "Dix",
"suffix": ""
}
],
"year": 1986,
"venue": "Information Processing Letters",
"volume": "23",
"issue": "5",
"pages": "305--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloyd Allison and Trevor I Dix. 1986. A bit-string longest-common-subsequence algorithm. Information Processing Letters, 23(5):305-310.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unt: A supervised synergistic approach to semantic text similarity",
"authors": [
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Samer",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mohler",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2012,
"venue": "First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "635--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carmen Banea, Samer Hassan, Michael Mohler, and Rada Mihalcea. 2012. Unt: A supervised synergistic approach to semantic text similarity. pages 635-642. First Joint Conference on Lexical and Computational Semantics (*SEM).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ukp: Computing semantic textual similarity by combining multiple content similarity measures",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "B\u00e4r",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2012,
"venue": "First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "435--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. Ukp: Computing semantic textual sim- ilarity by combining multiple content similarity mea- sures. pages 435-440. First Joint Conference on Lex- ical and Computational Semantics (*SEM).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Nltk: the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Interactive presentation sessions",
"volume": "",
"issue": "",
"pages": "69--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird. 2006. Nltk: the natural language toolkit. In Proceedings of the COLING/ACL on Interactive pre- sentation sessions, pages 69-72. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fbk: Machine translation evaluation and word similarity metrics for semantic textual similarity",
"authors": [
{
"first": "Jos\u00e9 Guilherme",
"middle": [
"C"
],
"last": "de Souza",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
}
],
"year": 2012,
"venue": "First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "624--630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Guilherme C de Souza, Matteo Negri, Trento Povo, and Yashar Mehdad. 2012. Fbk: Machine trans- lation evaluation and word similarity metrics for se- mantic textual similarity. pages 624-630. First Joint Conference on Lexical and Computational Semantics (*SEM).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Paraphrasing for automatic evaluation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "455--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kauchak and Regina Barzilay. 2006. Para- phrasing for automatic evaluation. In Proceedings of the main conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics, pages 455- 462. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "How well can passage meaning be derived without using word order? a comparison of latent semantic analysis and humans",
"authors": [
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Darrell",
"middle": [],
"last": "Laham",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Rehder",
"suffix": ""
},
{
"first": "Missy",
"middle": [
"E"
],
"last": "Schreiner",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 19th annual meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "412--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K Landauer, Darrell Laham, Bob Rehder, and Missy E Schreiner. 1997. How well can passage meaning be derived without using word order? a com- parison of latent semantic analysis and humans. In Proceedings of the 19th annual meeting of the Cog- nitive Science Society, pages 412-417.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "49",
"issue": "",
"pages": "265--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Leacock and Martin Chodorow. 1998. Com- bining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database, 49(2):265-283.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 15th international conference on Machine Learning",
"volume": "1",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. An information-theoretic defini- tion of similarity. In Proceedings of the 15th inter- national conference on Machine Learning, volume 1, pages 296-304. San Francisco.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Detecting short passages of similar text in large document collections",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Lyon",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Malcolm",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Dickerson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "118--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Lyon, James Malcolm, and Bob Dickerson. 2001. Detecting short passages of similar text in large document collections. In Proceedings of the 2001 Conference on Empirical Methods in Natural Lan- guage Processing, pages 118-125.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corpus-based and knowledge-based measures of text semantic similarity",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the national conference on artificial intelligence",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the na- tional conference on artificial intelligence, volume 21, page 775. Menlo Park, CA; Cambridge, MA; London;",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text-to-text semantic similarity for automatic short answer grading",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Mohler",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "567--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Mohler and Rada Mihalcea. 2009. Text-to-text semantic similarity for automatic short answer grad- ing. In Proceedings of the 12th Conference of the Eu- ropean Chapter of the Association for Computational Linguistics, pages 567-575. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A webbased kernel function for measuring the similarity of short text snippets",
"authors": [
{
"first": "Mehran",
"middle": [],
"last": "Sahami",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"D"
],
"last": "Heilman",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 15th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "377--386",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehran Sahami and Timothy D Heilman. 2006. A web- based kernel function for measuring the similarity of short text snippets. In Proceedings of the 15th interna- tional conference on World Wide Web, pages 377-386. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Takelab: Systems for measuring semantic text similarity",
"authors": [
{
"first": "Frane",
"middle": [],
"last": "\u0160ari\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
},
{
"first": "Bojana Dalbelo",
"middle": [],
"last": "Ba\u0161ic",
"suffix": ""
}
],
"year": 2012,
"venue": "First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "441--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frane\u0160aric, Goran Glava\u0161, Mladen Karan, Jan\u0160najder, and Bojana Dalbelo Ba\u0161ic. 2012. Takelab: Systems for measuring semantic text similarity. pages 441- 448. First Joint Conference on Lexical and Compu- tational Semantics (*SEM).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Measures and applications of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [
"Elizabeth"
],
"last": "Weeds",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Elizabeth Weeds. 2003. Measures and applications of lexical distributional similarity. Ph.D. thesis, Cite- seer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd an- nual meeting on Association for Computational Lin- guistics, pages 133-138. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Text categorization based on similarity approach",
"authors": [
{
"first": "Cha",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of International Conference on Intelligence Systems and Knowledge Engineering (ISKE)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cha Yang and Jun Wen. 2007. Text categorization based on similarity approach. In Proceedings of Interna- tional Conference on Intelligence Systems and Knowl- edge Engineering (ISKE).",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Test</td><td>Training</td></tr><tr><td>Headline</td><td>MSRpar</td></tr><tr><td>OnWN+FNWN</td><td>MSRpar+OnWN</td></tr><tr><td>SMT</td><td>SMTnews+SMTeuroparl</td></tr></table>",
"text": "lists the performance of these three systems as well as the baseline and the best results on"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>type</td><td>Features</td><td colspan=\"3\">Headline OnWN and FNWN SMT</td></tr><tr><td/><td>LCS</td><td>+</td><td>+</td><td>-</td></tr><tr><td/><td>Lemma LCS</td><td>+</td><td>+</td><td>-</td></tr><tr><td>String</td><td>N-gram</td><td>+</td><td>1+2gram</td><td>1gram</td></tr><tr><td>Based</td><td>Lemma N-gram</td><td>+</td><td>1+2gram</td><td>1gram</td></tr><tr><td/><td>WWO</td><td>+</td><td>+</td><td>+</td></tr><tr><td/><td>Lemma WWO</td><td>+</td><td>+</td><td>+</td></tr><tr><td/><td>Path,WUP,LCH,Lin</td><td>+</td><td>+</td><td>+</td></tr><tr><td>Knowledge</td><td>+aligh</td><td/><td/><td/></tr><tr><td>Based</td><td>Path,WUP,LCH,Lin</td><td>+</td><td>+</td><td>+</td></tr><tr><td/><td>+ic-weighted</td><td/><td/><td/></tr><tr><td>Corpus</td><td>LSA</td><td>+</td><td>+</td><td>+</td></tr><tr><td>Based</td><td>CRM,ExCRM</td><td>+</td><td>+</td><td>+</td></tr><tr><td/><td>Simple Dependency</td><td>+</td><td>+</td><td>+</td></tr><tr><td>Syntactic</td><td>Overlap</td><td/><td/><td/></tr><tr><td colspan=\"2\">Dependency Special Dependency</td><td>+</td><td>-</td><td>+</td></tr><tr><td/><td>Overlap</td><td/><td/><td/></tr><tr><td>Number</td><td>Number</td><td>+</td><td>-</td><td>-</td></tr><tr><td/><td>WER</td><td>-</td><td>+</td><td>+</td></tr><tr><td/><td>TER</td><td>-</td><td>+</td><td>+</td></tr><tr><td/><td>PER</td><td>+</td><td>+</td><td>+</td></tr><tr><td>MT</td><td>NIST</td><td>+</td><td>+</td><td>-</td></tr><tr><td/><td>ROUGE-L</td><td>+</td><td>+</td><td>+</td></tr><tr><td/><td>GTM-1</td><td>+</td><td>+</td><td>+</td></tr></table>",
"text": "Different training data sets used for each test data set"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>System</td><td colspan=\"3\">Mean Headline OnWN FNWN SMT</td></tr><tr><td>Best</td><td>0.6181</td><td>0.7642</td><td>0.7529 0.5818 0.3804</td></tr><tr><td>Baseline</td><td>0.3639</td><td>0.5399</td><td>0.2828 0.2146 0.2861</td></tr><tr><td>Run 1</td><td>0.3533</td><td>0.5656</td><td>0.2083 0.1725 0.2949</td></tr><tr><td>Run 2</td><td>0.4720</td><td>0.7120</td><td>0.5388 0.2013 0.2504</td></tr><tr><td colspan=\"2\">Run 3 (rank 35) 0.4967</td><td>0.6799</td><td>0.5284 0.2203 0.3595</td></tr></table>",
"text": "Best feature combination for each data set"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>For the</td></tr></table>",
"text": "Final results on STS core task STS core task in It shows that our feature selection process for each test data set does help im-prove the performance too. From this table, we find that different features perform different on different kinds of data sets and thus using proper feature subsets for each test data set would make improvement."
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Pearson correlation of features of the six aspects on MSRpar"
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "12) where L N ER is the list of one entity type from the text, and for two lists of NERs L1 N ER and L2 N ER , there are |L1 N ER | * |L2 N ER"
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Feature sets and content used of 8 type similarities of Typed data"
}
}
}
}