| { |
| "paper_id": "S13-1033", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:42:50.362215Z" |
| }, |
| "title": "INAOE_UPV-CORE: Extracting Word Associations from Document Corpora to estimate Semantic Textual Similarity", |
| "authors": [ |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "S\u00e1nchez-Vega", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Manuel", |
| "middle": [], |
| "last": "Montes-Y-G\u00f3mez", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Villase\u00f1or-Pineda", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Rosso", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "prosso@dsic.upv.es" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "This paper presents three methods to evaluate Semantic Textual Similarity (STS). The first two methods do not require labeled training data; instead, they automatically extract semantic knowledge, in the form of word associations, from a given reference corpus. Two kinds of word associations are considered: co-occurrence statistics and the similarity of word contexts. The third method was developed in collaboration with groups from the Universities of Paris 13, Matanzas, and Alicante. It uses several word similarity measures as features in order to construct an accurate prediction model for STS.",
| "pdf_parse": { |
| "paper_id": "S13-1033", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "This paper presents three methods to evaluate Semantic Textual Similarity (STS). The first two methods do not require labeled training data; instead, they automatically extract semantic knowledge, in the form of word associations, from a given reference corpus. Two kinds of word associations are considered: co-occurrence statistics and the similarity of word contexts. The third method was developed in collaboration with groups from the Universities of Paris 13, Matanzas, and Alicante. It uses several word similarity measures as features in order to construct an accurate prediction model for STS.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Even with the current progress of natural language processing, evaluating semantic text similarity is an extremely challenging task. Due to the existence of multiple semantic relations among words, measuring text similarity is a multifactorial and highly complex problem (Turney, 2006).",
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 297, |
| "text": "(Turney, 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Despite the difficulty of this task, it remains one of the most attractive research topics for the NLP community. This is because the evaluation of text similarity is commonly used as an internal module in many different tasks, such as information retrieval, question answering, and document summarization (Resnik, 1999). Moreover, most of these tasks require determining the \"semantic\" similarity of texts that show stylistic differences or use polysemous words (Hliaoutakis et al., 2006).",
| "cite_spans": [ |
| { |
| "start": 312, |
| "end": 326, |
| "text": "(Resnik, 1999)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 468, |
| "end": 494, |
| "text": "(Hliaoutakis et al., 2006)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The most popular approach to evaluating the semantic similarity of words and texts consists of using the semantic knowledge expressed in ontologies (Resnik, 1999); commonly, WordNet is used for this purpose (Fellbaum, 2005). Unfortunately, despite the great effort invested in the creation of WordNet, it is still far from covering all existing words and senses (Curran, 2003). Therefore, semantic similarity methods that use this resource tend to restrict their applicability to a particular domain and a specific language. We recognize the necessity of having and using manually constructed semantic-knowledge sources in order to get precise assessments of the semantic similarity of texts; but, in turn, we also consider that it is possible to obtain good estimations of these similarities using less expensive, and perhaps broader, information sources. In particular, our proposal is to automatically extract the semantic knowledge from large amounts of raw data samples, i.e., document corpora without labels.",
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 160, |
| "text": "(Resnik, 1999)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 207, |
| "end": 223, |
| "text": "(Fellbaum, 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 360, |
| "end": 374, |
| "text": "(Curran, 2003)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper we describe two different strategies to compute the semantic similarity of words from a reference corpus. The first strategy uses word co-occurrence statistics: it determines that two words are associated (in meaning) if they tend to be used together, in the same documents or contexts. The second strategy measures the similarity of words by taking into consideration second-order word co-occurrences: it defines two words as associated if they are used in similar contexts (i.e., if they co-occur with similar words). The following section describes the implementation of these two strategies for our participation in the STS-SEM 2013 task, as well as their combination with the measures designed by the groups from the Universities of Matanzas, Alicante, and Paris 13.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The Semantic Textual Similarity (STS) task consists of estimating the value of the semantic similarity between two texts, denoted t1 and t2 from now on.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Participation in STS-SEM2013", |
| "sec_num": "2" |
| }, |
| { |
"text": "As we mentioned previously, our participation in the STS task of SEM 2013 considered two different approaches that aimed to take advantage of the language knowledge latent in a given reference corpus. By applying simple statistics we obtained a semantic similarity measure between words, and then we used this semantic word similarity (SWS) to get a sentence-level similarity estimation. We explored two alternatives for measuring the semantic similarity of words: the first uses the co-occurrence of words in a limited context,1 and the second compares the contexts of the words, using the vector space model and cosine similarity for this comparison. It is important to point out that, by using the vector space model directly, without any spatial transformation such as those used by other approaches,2 we could get greater control over the selection of the features used for the extraction of knowledge from the corpus. It is also worth mentioning that we applied a stemming procedure to the sentences to be compared, as well as to all documents from the reference corpus. We represented the texts t1 and t2 as bags of tokens, which means that our approaches did not take word order into account.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Participation in STS-SEM2013", |
| "sec_num": "2" |
| }, |
| { |
"text": "In the following we present our baseline method; then we introduce the two proposed methods, as well as a method developed in collaboration with other groups. The idea of this shared method is to enhance the estimation of semantic textual similarity by combining different and diverse strategies for computing word similarities.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Participation in STS-SEM2013", |
| "sec_num": "2" |
| }, |
| { |
"text": "Given texts t1 and t2, their textual similarity is given by:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS-baseline method", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "SIM-baseline(t1, t2) = min(SIM(t1, t2), SIM(t2, t1))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS-baseline method", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS-baseline method", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "SIM(ta, tb) = (1/|ta|) * sum_{w in ta} 1(w in tb)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS-baseline method", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "This measure is based on a direct matching of tokens: it simply counts the proportion of tokens from one text that also occur in the other text. Because STS is a symmetric attribute, unlike Textual Entailment (Agirre et al., 2012), we designed it as a symmetric measure; we assumed that the similarity between both texts is at least equal to their smaller asymmetric similarity.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS-baseline method", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "These methods incorporate semantic knowledge extracted from a reference corpus; they aim to take advantage of the semantic knowledge latent in a large document collection. Because the knowledge extracted from the reference corpus is at the word level, these STS methods use the same basic word-matching strategy for comparing the sentences as the baseline method. Nevertheless, they allow a soft matching between words by incorporating information about their semantic similarity.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The proposed STS methods", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "The following formula shows the proposed modification of the SIM function in order to incorporate information about the semantic word similarity (SWS). This modification allowed us not only to match words with exactly the same stem, but also to link different but semantically related words.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The proposed STS methods", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "SIM(ta, tb) = (1/|ta|) * sum_{w in ta} max_{v in tb} SWS(w, v)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ", =", |
| "eq_num": "( , ) \u2208 \u2208" |
| } |
| ], |
| "section": "The proposed STS methods", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "We propose two different strategies to compute the semantic word similarity (SWS). The following subsections describe these two strategies in detail.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The proposed STS methods", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "This measure uses a reference corpus to get a numerical approximation of the semantic similarity between two terms wa and wb (when these terms do not share the same stem). As shown in the following formula, it takes values between 0 and 1; 0 indicates that no text sample in the corpus contains both terms, whereas 1 indicates that they always occur together.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on word co-occurrence", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "SWS_cooc(wa, wb) = 1 if wa and wb share the same stem; otherwise #(wa, wb) / min(#(wa), #(wb))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on word co-occurrence", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "where #(wa, wb) is the number of times that wa and wb co-occur, and #(wa) and #(wb) are the number of times that the terms wa and wb occur in the reference corpus, respectively.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on word co-occurrence", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "This measure is based on the idea that two terms are semantically closer if they tend to be used in similar contexts. It uses the well-known vector space model and cosine similarity to compare the terms' contexts. In a first step, we created a context vector for each term, which captures all the terms that appear around it in the whole reference corpus. Then, we computed the semantic similarity of two terms by the following formula.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on context similarity", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "SWS_context(wa, wb) = 1 if wa and wb share the same stem; otherwise SIMCOS(va, vb)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on context similarity", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "where the cosine similarity, SIMCOS, is calculated on the vectors va and vb corresponding to the vector space model representations of the terms wa and wb, as indicated in the following equation:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on context similarity", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "SIMCOS(va, vb) = (va . vb) / (|va| * |vb|)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on context similarity", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "It is important to point out that SIMCOS is calculated over a predefined vocabulary of interest; the appropriate selection of this vocabulary helps to get a better representation of the terms and, consequently, a more accurate estimation of their semantic similarities.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on context similarity", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "In addition to our main methods, we also developed a method that combines our SWS measures with measures proposed by two other research groups, namely:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on a combination of measures", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "\u2022 LIPN (Laboratoire d'Informatique de Paris-Nord, Universit\u00e9 Paris 13, France).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STS based on a combination of measures",
"sec_num": "2.3"
},
| { |
"text": "\u2022 UMCC_DLSI (Universidad de Matanzas Camilo Cienfuegos, Cuba, in conjunction with the Departamento de Lenguajes y Sistemas Inform\u00e1ticos, Universidad de Alicante, Spain).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on a combination of measures", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The main motivation for this collaboration was to investigate the relevance of using diverse strategies for computing word similarities and the effectiveness of their combination for estimating the semantic similarity of texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on a combination of measures", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "The proposed method used a set of measures provided by each one of the groups. These measures were employed as features to obtain a prediction model for the STS.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STS based on a combination of measures", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "The extraction of knowledge for the computation of the SWS was performed over the Reuters-21578 collection. This collection was selected because it is a well-known corpus and because it includes documents covering a wide range of topics. Due to time and space restrictions we could not consider all the vocabulary from the reference corpus; the vocabulary selection was conducted by taking the best 20,000 words according to the transition point method (Pinto et al., 2006). This method selects the terms associated with the main topics of the corpus, which presumably carry more information for estimating the semantic similarity of words. We also preserved the vocabulary from the evaluation samples, provided the words also occur in the reference corpus. The size of the vocabulary used in the experiments, together with the sizes of the corpus and test-set vocabularies, is shown in Table 2.",
| "cite_spans": [ |
| { |
| "start": 459, |
| "end": 479, |
| "text": "(Pinto et al., 2006)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 877, |
| "end": 884, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation considerations", |
| "sec_num": "3" |
| }, |
| { |
"text": "The methods proposed by our group do not require training, i.e., they do not require tagged data, only a reference corpus; therefore, it was possible to evaluate them on the whole training set available this year. Table 3 shows their results on this set. Table 3. Correlation values of the proposed methods and our baseline method with human judgments. The results in Table 3 show that the use of the co-occurrence information improves the correlation with human judgments, and that the use of context information further improves the results. One surprising finding was the competitive performance of our baseline method; it is considerably better than the previous year's baseline result (0.31).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 226, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 260, |
| "end": 267, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 359, |
| "end": 366, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation and Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In order to evaluate the method done in collaboration with LIPN and UMCC_DLSI, we carried out several experiments using the features provided by each group independently and in conjunction with the others. The experiments were performed over the whole training set by means of two-fold cross-validation. The individual and global results are shown in Table 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 351, |
| "end": 358, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results in", |
| "sec_num": null |
| }, |
| { |
"text": "As shown in Table 4, the result corresponding to the combination of all features clearly outperformed the results obtained by using each team's features independently. Moreover, the best combination of features, containing selected features from the three teams, obtained a correlation value very close to the result of last year's winner.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results in", |
| "sec_num": null |
| }, |
| { |
"text": "For the official runs (refer to Table 5) we submitted the results corresponding to our two proposed methods. We also submitted a result from the method developed in collaboration with LIPN and UMCC_DLSI. Due to time restrictions we were not able to submit the results from our best configuration; instead, we submitted the results for the linear regression model using all the features (the second-best result in Table 4). Table 5 shows the results on the four evaluation sub-collections: Headlines comes from news headlines; OnWN and FNWN contain pairs of sense definitions from WordNet and other resources; finally, SMT contains translations from automatic machine translation systems and from the reference human translations.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 389, |
| "end": 396, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 400, |
| "end": 407, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
"section": "Official Runs",
| "sec_num": "4.1" |
| }, |
| { |
"text": "As shown in Table 5, the performances of the two methods proposed by our group were very close. We hypothesize that this result could be caused by the use of a larger vocabulary for the computation of the co-occurrence statistics than for the calculation of the context similarities. We had to use a smaller vocabulary for the latter because of its higher computational cost.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
"section": "Official Runs",
| "sec_num": "4.1" |
| }, |
| { |
"text": "Finally, Table 5 also shows that the method developed in collaboration with the other groups obtained our best results, confirming that using more information about the semantic similarity of words improves the estimation of the semantic similarity of texts. The advantage of this approach over the two proposed methods was especially clear on the OnWN and FNWN datasets, which were created from WordNet information. Somehow this result was predictable, since several measures from this shared method use WordNet information to compute the semantic similarity of words. However, this pattern did not hold for the other two (WordNet-unrelated) datasets. On these two collections, the average performance of our two proposed methods, which do not use any expensive, manually constructed resource, improved the results of the shared method by 4%. Table 5. Correlation values from our official runs over the four sub-datasets.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 16, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 859, |
| "end": 866, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
"section": "Official Runs",
| "sec_num": "4.1" |
| }, |
| { |
"text": "The main conclusion of this work is that it is possible to extract useful knowledge from raw corpora for evaluating the semantic similarity of texts. Another important conclusion is that the combination of methods (or semantic word similarity measures) helps improve the accuracy of STS. As future work we plan to carry out a detailed analysis of the measures used, with the aim of determining their complementarity and a better way of combining them. We also plan to evaluate the impact of the size and vocabulary richness of the reference corpus on the accuracy of the proposed STS methods.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
"text": "1 In the experiments we considered a window (context) formed by 15 surrounding words. 2 Such as Latent Semantic Analysis (LSA) (Turney, 2005).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
"text": "This work was done under the partial support of CONACyT project grant 134186 and scholarship 224483. This work is the result of collaboration in the framework of the WIQEI IRSES project (Grant No. 269180) within the FP7 Marie Curie program. The work of the last author was carried out in the framework of the DIANA-APPLICATIONS (Finding Hidden Knowledge in Texts: Applications, TIN2012-38603-C02-01) project, and the VLC/CAMPUS Microcluster on Multimodal Interaction in Intelligent Systems. We also thank the teams from the Universities of Paris 13, Matanzas, and Alicante for their willingness to collaborate with us in this evaluation exercise.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "UNT: A Supervised Synergistic Approach to Semantic Text Similarity", |
| "authors": [ |
| { |
| "first": "Carmen", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "Samer", |
| "middle": [], |
| "last": "Hassan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mohler", |
| "suffix": "" |
| }, |
| { |
"first": "Rada",
"middle": [],
"last": "Mihalcea",
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "The First Joint Conference on Lexical and Computational Semantics, Proceedings of the Sixth International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "635--642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Carmen Banea, Samer Hassan, Michael Mohler and Rada Mihalcea, 2012, UNT: A Supervised Synergistic Approach to Semantic Text Similarity, SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), Montreal, Vol. 2: 635-642.",
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "WordNet and wordnets, Encyclopedia of Language and Linguistics", |
| "authors": [ |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "665--670", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christiane Fellbaum,2005, WordNet and wordnets, Encyclopedia of Language and Linguistics, Second Ed., Oxford, Elsevier: 665-670.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Clustering abstracts of scientific texts using the Transition Point technique", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Pinto", |
| "suffix": "" |
| }, |
| { |
| "first": "Hector", |
| "middle": [], |
| "last": "Jim\u00e9nez", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Rosso", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Proc. 7th Int. Conf. on Comput. Linguistics and Intelligent Text Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "536--546", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "David Pinto, Hector Jim\u00e9nez H. and Paolo Rosso. Clustering abstracts of scientific texts using the Transition Point technique, Proc. 7th Int. Conf. on Comput. Linguistics and Intelligent Text Processing, CICLing-2006, Springer-Verlag, LNCS(3878): 536-546.",
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval2012)", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Enekoagirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Mona", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "Aitor", |
| "middle": [], |
| "last": "Diab", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gonzalez-Agirre", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "386--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab and Aitor Gonzalez-Agirre, SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), Montreal, Vol. 2: 386-393.",
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Doctoral Thesis: From Distributional to Semantic Similarity, Institute for Communicating and Collaborative Systems", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Richard Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Richard Curran, 2003, Doctoral Thesis: From Distributional to Semantic Similarity, Institute for Communicating and Collaborative Systems, School of Informatics, University of Edinburgh.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Measuring semantic similarity by latent relational analysis", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1136--1141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Peter D. Turney, 2005, Measuring semantic similarity by latent relational analysis, IJCAI'05 Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, Scotland: 1136-1141.",
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Similarity of Semantic Relations", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computational Linguistics", |
| "volume": "32", |
| "issue": "3", |
| "pages": "379--416", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Turney, 2006, Similarity of Semantic Relations, Computational Linguistics, Vol. 32, No. 3: 379-416.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "11", |
| "issue": "", |
| "pages": "95--130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Philip Resnik, 1999, Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language, Journal of Artificial Intelligence Research, Vol. 11: 95-130.",
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "num": null, |
| "content": "<table><tr><td>summarizes the</td></tr></table>", |
"text": "General description of the features used by the shared method. The second column indicates the source team for each group of features; the third column indicates the number of features used from each group; the last two columns show the information gain rank of each group of features over the training set.",
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |