{ "paper_id": "S13-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:41:54.087778Z" }, "title": "UNIBA-CORE: Combining Strategies for Semantic Textual Similarity", "authors": [ { "first": "Annalina", "middle": [], "last": "Caputo", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Bari Aldo Moro Via E. Orabona", "location": { "postCode": "4 -70125", "settlement": "Bari", "country": "Italy" } }, "email": "annalina.caputo@uniba.it" }, { "first": "Pierpaolo", "middle": [], "last": "Basile", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Bari Aldo Moro Via E. Orabona", "location": { "postCode": "4 -70125", "settlement": "Bari", "country": "Italy" } }, "email": "pierpaolo.basile@uniba.it" }, { "first": "Giovanni", "middle": [], "last": "Semeraro", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Bari Aldo Moro Via E. Orabona", "location": { "postCode": "4 -70125", "settlement": "Bari", "country": "Italy" } }, "email": "giovanni.semeraro@uniba.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the UNIBA participation in the Semantic Textual Similarity (STS) core task 2013. We exploited three different systems for computing the similarity between two texts. One system is used as a baseline: the best model that emerged from our previous participation in STS 2012. This system is based on a distributional semantic model that also takes into account the syntactic structures that glue words together. In addition, we investigated two different learning strategies exploiting both syntactic and semantic features. The former uses ensemble learning to combine the best machine learning techniques trained on the 2012 training and test sets. The latter tries to overcome the limitation of training on datasets with varying characteristics by selecting only the dataset most suitable for training.", "pdf_parse": { "paper_id": "S13-1024", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the UNIBA participation in the Semantic Textual Similarity (STS) core task 2013. We exploited three different systems for computing the similarity between two texts. One system is used as a baseline: the best model that emerged from our previous participation in STS 2012. This system is based on a distributional semantic model that also takes into account the syntactic structures that glue words together. In addition, we investigated two different learning strategies exploiting both syntactic and semantic features. The former uses ensemble learning to combine the best machine learning techniques trained on the 2012 training and test sets. The latter tries to overcome the limitation of training on datasets with varying characteristics by selecting only the dataset most suitable for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic Textual Similarity is the task of computing the similarity between any two given texts. The task, in its core formulation, aims at capturing the different kinds of similarity that emerge from texts. Machine translation, paraphrasing, synonym substitution, and textual entailment are some of the methods fruitfully exploited for this purpose. 
These techniques, along with other methods for estimating text similarity, were successfully employed via machine learning approaches during the 2012 task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, the STS 2013 core task (Agirre et al., 2013) differs from the 2012 formulation in that it provides a test set which is similar to the training set, but not drawn from the same sources. Hence, in order to generalize the machine learning models trained on a group of datasets, we investigate the use of combination strategies. The objective of combination strategies, known under the name of ensemble learning, is to reduce the variance component of the error in the bias-variance decomposition. Hence, this class of methods should be more robust with respect to previously unseen data. Among the several ensemble learning alternatives, we exploit the stacked generalization (STACKING) algorithm (Wolpert, 1992) . Moreover, we investigate the use of a two-step learning algorithm (2STEPSML), in which the learning algorithm is trained using only the dataset most similar to the instance to be predicted. The first step aims at predicting the dataset most similar to the given pair of texts. Then the second step makes use of the previously trained algorithm to predict the similarity value. The baseline for the evaluation is our best system (DSM PERM) from our participation in the 2012 task. After introducing the general models behind our systems in Section 2, Section 3 describes the evaluation setting of our systems along with the experimental results. 
Finally, some conclusions and remarks close the paper.", "cite_spans": [ { "start": 32, "end": 53, "text": "(Agirre et al., 2013)", "ref_id": "BIBREF0" }, { "start": 707, "end": 722, "text": "(Wolpert, 1992)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Distributional models are effective methods for representing word paradigmatic relations in a simple way through vector spaces (Mitchell and Lapata, 2008) . These spaces are built by taking into account the word context, so that the distance between the resulting vectors reflects the similarity of the words they represent. Although several definitions of context are possible (e.g. a sliding window of text, the word order or syntactic dependencies), in their plain formulation these kinds of models account for just one type of context at a time. To overcome this limitation, we exploit a method that encodes several definitions of context in the same vector by exploiting vector permutations. This technique, which relies on Random Indexing as a means for computing the distributional model, is based on the idea that when the components of a highly sparse vector are shuffled, the resulting vector is nearly orthogonal to the original one. Hence, vector permutation is a way of generating new random vectors in a predetermined manner. Different word contexts can be encoded using different types of permutations. In our distributional model system (DSM PERM), we encode the syntactic dependencies between words rather than the mere co-occurrence information. In this way, word-vector components bear information about both co-occurring and syntactically related words. In this distributional space, a text can be easily represented as the superposition of its words. 
The vector representation of a text is thus given by adding the vector representations of its words, and the similarity between texts comes from the cosine of the angle between their vector representations.", "cite_spans": [ { "start": 127, "end": 154, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency Encoding via Vector Permutations", "sec_num": "2.1" }, { "text": "Stacking algorithms (Wolpert, 1992) are a way of combining different types of learning algorithms in order to reduce the variance of the system. In this model, a meta-learner tries to predict the real value of an instance by combining the outputs of other machine learning methods. Figure 1 shows how the learning process takes place. Level-0 consists of the ensemble of different models trained on the same dataset. The level-0 outputs build up the level-1 dataset: an instance at this level is represented by the numeric values predicted by each level-0 model along with the gold standard value. The objective of the level-1 learning model is then to learn how to combine the level-0 outputs in order to provide the best prediction. ", "cite_spans": [], "ref_spans": [ { "start": 271, "end": 279, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Stacking", "sec_num": "2.2" }, { "text": "Given an ensemble of datasets with different characteristics, this method is based on the idea that when instances come from a specific dataset, the learning algorithm trained on that dataset outperforms the same algorithm trained on the whole ensemble. Hence, the two-step algorithm tries to overcome the problem of dealing with datasets having different characteristics through a classification model. 
In the first step ( Figure 2 ), a different class is assigned to each dataset. The classifier is trained on a set of instances whose classes correspond to the dataset numbers. Then, given a new instance, the output of this step is the dataset to be used for training the learning algorithm in step 2. In the second step, the learning algorithm is trained on the dataset chosen in the first step. The output of this step is the predicted similarity between the two texts. Through these steps, it is possible to select the dataset whose characteristics are most similar to a given instance, and to exploit just this set of data for training the algorithm.", "cite_spans": [], "ref_spans": [ { "start": 435, "end": 443, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Two steps learning algorithm", "sec_num": "2.3" }, { "text": "Both the STACKING and 2STEPSML systems rely on several kinds of features, ranging from lexical to semantic ones. Features are grouped into seven main classes, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "1. Character/string/annotation-based features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "the length of the longest common contiguous substring between the texts; the Jaccard index of both tokens and lemmas; the Levenshtein distance between texts; the normalized number of common 2-grams, 3-grams and 4-grams; the total number of tokens and characters; the difference in tokens and characters between texts; the normalized difference in tokens and characters between texts with respect to the max text length. 
Exploiting other linguistic annotations extracted by Stanford CoreNLP 1 , we compute the Jaccard index between PoS-tags and named entities. Using WordNet, we extract the Jaccard index between the first sense and its super-sense tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "2. Textual Similarity-based features: a set of features based on the textual similarity proposed by Mihalcea (Mihalcea et al., 2006) . Given two texts T 1 and T 2 , the similarity is computed as follows:", "cite_spans": [ { "start": 109, "end": 132, "text": "(Mihalcea et al., 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim(T_1, T_2) = \\frac{1}{2} \\left( \\frac{\\sum_{w \\in T_1} maxSim(w, T_2)}{\\sum_{w \\in T_1} idf(w)} + \\frac{\\sum_{w \\in T_2} maxSim(w, T_1)}{\\sum_{w \\in T_2} idf(w)} \\right)", "eq_num": "(1)" } ], "section": "Features", "sec_num": "2.4" }, { "text": "1 Available at: http://nlp.stanford.edu/software/corenlp.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "We adopt several similarity measures using semantic distributional models (see Section 2.5), Resnik's knowledge-based approach (Resnik, 1995) and the point-wise mutual information suggested by Turney (Turney, 2001 ), computed on the British National Corpus 2 . For all the features, the idf is computed relying on the UKWaC corpus 3 (Baroni et al., 2009) .", "cite_spans": [ { "start": 131, "end": 145, "text": "(Resnik, 1995)", "ref_id": "BIBREF9" }, { "start": 207, "end": 220, "text": "(Turney, 2001", "ref_id": "BIBREF11" }, { "start": 331, "end": 352, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "3. 
Head similarity-based features: this measure takes into account the maximum similarity between the roots of the two texts. The roots are extracted using the dependency parser provided by Stanford CoreNLP. The similarity is computed according to the distributional semantic models proposed in Section 2.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "4. ESA similarity: computes the similarity between texts using the Explicit Semantic Analysis (ESA) approach (Gabrilovich and Markovitch, 2007) . For each text we extract the ESA vector built using the English Wikipedia, and then we compute the similarity as the cosine similarity between the two ESA vectors.", "cite_spans": [ { "start": 109, "end": 143, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "5. Paraphrasing features: this is a very simple measure which counts the number of paraphrases shared by the two texts. Given two texts T 1 and T 2 , for each token in T 1 a list of paraphrases is extracted using a dictionary 4 . If T 2 contains one of the paraphrases in the list, the score is incremented by one. The final score is divided by the number of tokens in T 1 . The same score is computed taking into account T 2 . Finally, the two scores are added and divided by 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "6. Greedy Lemma Aligning Overlap features: this measure computes the similarity between texts using the semantic alignment of lemmas as proposed by \u0160ari\u0107 et al. (2012). In order to compute the similarity between lemmas, we exploit the distributional semantic models described in Section 2.5. 7. Compositional features: we build several similarity features using the distributional semantic models described in Section 2.5 and a compositional operator based on sum. 
This approach is thoroughly explained in Section 2.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2.4" }, { "text": "In several of the features proposed in our approaches, the similarity between words is computed using Distributional Semantic Models. These models represent word meanings through contexts: the different meanings of a word can be accounted for by looking at the different contexts wherein the word occurs. This insight can be elegantly expressed by the geometrical representation of words as vectors in a semantic space. Each term is represented as a vector whose components are the contexts surrounding the term. In this way, the meaning of a term across a corpus is conveyed by the contexts it appears in, where a context may typically be the set of co-occurring words in a document, in a sentence or in a window of surrounding terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional semantic models", "sec_num": "2.5" }, { "text": "In particular, we take into account two main classes of models: Simple Distributional Spaces and Structured Semantic Spaces. The former considers the co-occurring words as context; the latter takes into account both co-occurrence and syntactic dependencies between words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional semantic models", "sec_num": "2.5" }, { "text": "Simple Distributional Spaces rely on Latent Semantic Analysis (LSA) and Random Indexing (RI) in order to reduce the dimension of the co-occurrence matrix. Moreover, we use an approach which applies LSA to the matrix produced by RI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional semantic models", "sec_num": "2.5" }, { "text": "Structured Semantic Spaces are based on two techniques to encode syntactic information into the vector space. 
The first approach uses the vector permutation of random vectors in RI to encode the syntactic role (head or dependent) of a word. The second method is based on Holographic Reduced Representation, in particular the convolution between vectors, to encode syntactic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional semantic models", "sec_num": "2.5" }, { "text": "Adopting distributional semantic models, each word can be represented as a vector in a geometric space. The similarity between two words can then be easily computed as the cosine similarity between their word vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional semantic models", "sec_num": "2.5" }, { "text": "All models are described in .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional semantic models", "sec_num": "2.5" }, { "text": "In Distributional Semantic Models, given the vector representations of two words, it is always possible to compute their similarity as the cosine of the angle between them. However, texts are composed of several terms, so in order to compute the similarity between texts we need a method to compose the words occurring in them. It is possible to combine words through vector addition (+). 
This operator is similar to the superposition defined in connectionist systems (Smolensky, 1990) , and corresponds to the pointwise sum of components:", "cite_spans": [ { "start": 474, "end": 491, "text": "(Smolensky, 1990)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p = u + v", "eq_num": "(2)" } ], "section": "Compositional features", "sec_num": "2.6" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "p i = u i + v i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "Addition is a commutative operator, which means that it does not take into account any order or underlying structure existing between words. In this first study, we do not exploit more complex methods to combine word vectors. We plan to investigate them in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "Given a text p, we denote with p its vector representation, obtained by applying the addition operator (+) to the vector representations of the terms it is composed of. It is then possible to compute the similarity between two texts through the cosine similarity between their vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "Formally, if a = a 1 , a 2 ...a n and b = b 1 , b 2 ...b m are two texts, we build two vectors a and b which represent respectively the two texts in a semantic space. 
Vector representations of the two texts are built by applying the addition operator to the vector representations of the words belonging to them:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a = a 1 + a 2 + . . . + a n b = b 1 + b 2 + . . . + b m", "eq_num": "(3)" } ], "section": "Compositional features", "sec_num": "2.6" }, { "text": "The similarity between a and b is computed as the cosine similarity between them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional features", "sec_num": "2.6" }, { "text": "SemEval-2013 STS is the second attempt to provide a \"unified framework for the evaluation of modular semantic textual similarity and to characterize their impact on NLP applications\". The task consists in computing the similarity between pairs of texts, returning a similarity score. The test set is composed of data coming from the following datasets: news headlines (headlines); mapping of lexical resources from OntoNotes to WordNet (OnWN) and from FrameNet to WordNet (FNWN); and evaluation of machine translation (SMT). The training data for STS-2013 is made up of the training and test data from the previous edition of the STS-2012 task. During the 2012 edition, STS provided participants with three training datasets: MSR-Paraphrase, MSR-Video, SMTeuroparl; and five test datasets: MSR-Paraphrase, MSR-Video, SMTeuroparl, SMTnews and OnWN. It is important to note that part of the 2012 test sets was drawn from the same sources as the training sets. On the other hand, the STS-2013 training and test data are very different, making the prediction task harder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "3" }, { "text": "Humans rated each pair of texts with values from 0 to 5. 
The evaluation is performed by comparing the system scores against the human scores through Pearson's correlation with the gold standard, for each of the four datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental evaluation", "sec_num": "3" }, { "text": "For the evaluation, we built the distributional spaces using the WaCkypedia EN corpus 5 . WaCkypedia EN is based on a 2009 dump of the English Wikipedia (about 800 million tokens) and includes part-of-speech tags, lemmas and a full dependency parse performed by MaltParser (Nivre et al., 2007) . The structured spaces described in Subsections 2.1 and 2.5 are built exploiting the information about term windows and dependency parsing supplied by WaCkypedia. The total number of dependencies amounts to about 200 million.", "cite_spans": [ { "start": 288, "end": 308, "text": "(Nivre et al., 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "The RI system is implemented in Java and relies on some portions of code publicly available in the Semantic Vectors package (Widdows and Ferraro, 2008) , while for LSA we exploited the publicly available C library SVDLIBC 6 .", "cite_spans": [ { "start": 124, "end": 151, "text": "(Widdows and Ferraro, 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "We restricted the vocabulary to the 50,000 most frequent terms, with stop-word removal, forcing the system to include terms which occur in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "Building a semantic space involves some parameters. In particular, each semantic space requires setting the dimension k of the space. All spaces use a dimension of 500 (resulting in a 50,000\u00d7500 matrix). The number of non-zero elements in the random vectors is set to 10. 
When we apply LSA to the output space generated by Random Indexing, we keep all 500 dimensions, since during tuning we observed a drop in performance when a lower dimension was set. The co-occurrence distance w between terms was set to 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "In order to compute the similarity between the vector representations of texts using UNIBA-DSM PERM, we used the cosine similarity, and then multiplied the obtained value by 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "The two supervised methods, UNIBA-2STEPSML and UNIBA-STACKING, are developed in Java using Weka 7 to implement the learning algorithms. For the stacking approach (UNIBA-STACKING), we used the following level-0 models: Gaussian Process with polynomial kernel, Gaussian Process with RBF kernel, Linear Regression, Support Vector regression with polynomial kernel, and decision tree. The level-1 model is a Gaussian Process with RBF kernel. In the first step of UNIBA-2STEPSML we adopt a Support Vector Machine, while in the second one we use Support Vector Machine for regression. In both steps, the RBF kernel is used. Features are normalized by removing non-alphanumeric characters. In all the learning algorithms, we use the default parameters set by Weka. As future work, we plan to perform a tuning step in order to find the best parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "The choice of the learning algorithms for both the UNIBA-STACKING and UNIBA-2STEPSML systems was made after a tuning phase where only the STS-2012 training datasets were exploited. Table 1 reports the values obtained by our three systems on the STS-2012 test sets. 
After this tuning, we selected the learning algorithms to employ in level-0 and level-1 of UNIBA-STACKING and in step-1 and step-2 of UNIBA-2STEPSML. Then, the training of both UNIBA-STACKING and UNIBA-2STEPSML was performed on all the STS-2012 datasets (training and test data). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System setup", "sec_num": "3.1" }, { "text": "Evaluation results on the STS-2013 data are reported in Table 2 . Among the three systems, UNIBA-DSM PERM obtained the best performance both on the individual datasets and on the overall evaluation metric (mean), which computes Pearson's correlation considering all datasets combined into a single one. This system ranked 54th out of 90 submissions, while UNIBA-STACKING and UNIBA-2STEPSML ranked 61st and 71st respectively. These results are at odds with those reported in Table 1 . In the test on the 2012 datasets, UNIBA-STACKING gave the best result, followed by UNIBA-2STEPSML, while UNIBA-DSM PERM gave the worst performance. On those data, the UNIBA-STACKING system corroborated our hypothesis, also giving the best results on the datasets not exploited during its training phase (OnWN, SMTnews). Conversely, UNIBA-2STEPSML showed a different trend, revealing its weakness with respect to high variance in the data and performing worse than UNIBA-DSM PERM on the OnWN and SMTnews datasets.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 480, "end": 487, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation results", "sec_num": "3.2" }, { "text": "On the 2013 data, however, the evaluation results refuted our hypothesis, even for the stacking system. Its independence from a training set makes the UNIBA-DSM PERM system more robust than the other supervised algorithms, even though it is not always able to give the best performance on individual datasets, as highlighted by the results in Table 1. 
7 http://www.cs.waikato.ac.nz/ml/weka/", "cite_spans": [], "ref_spans": [ { "start": 334, "end": 342, "text": "Table 1.", "ref_id": null } ], "eq_spans": [], "section": "Evaluation results", "sec_num": "3.2" }, { "text": "This paper reports on the UNIBA participation in the Semantic Textual Similarity 2013 core task. In this task edition, we exploited both distributional models and machine learning techniques to build three systems. A distributional model, which takes into account the syntactic structure that relates words in a corpus, has been used as the baseline. Moreover, we investigated the use of two machine learning techniques as a means to make our systems more independent of the training data. However, the evaluation results have highlighted the higher robustness of the distributional model with respect to these systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "http://wacky.sslmit.unibo.it/doku.php?id=corpora 6 http://tedlab.mit.edu/ dr/SVDLIBC/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work fulfils the research objectives of the PON 02 00563 3470993 project \"VINCENTE -A Virtual collective INtelligenCe ENvironment to develop sustainable Technology Entrepreneurship ecosystems\" funded by the Italian Ministry of University and Research (MIUR).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "*sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", 
"suffix": "" } ], "year": 2013, "venue": "*SEM 2013: The Second Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The WaCky Wide Web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "", "volume": "43", "issue": "", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky Wide Web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A study on compositional semantics of words in distributional spaces", "authors": [ { "first": "Pierpaolo", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Annalina", "middle": [], "last": "Caputo", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Semeraro", "suffix": "" } ], "year": 2012, "venue": "Sixth IEEE International Conference on Semantic Computing, ICSC 2012", "volume": "", "issue": "", "pages": "154--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2012. 
A study on compositional semantics of words in distributional spaces. In Sixth IEEE International Conference on Semantic Computing, ICSC 2012, Palermo, Italy, September 19-21, 2012, pages 154-161. IEEE Computer Society.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Uniba: Distributional semantics for textual similarity", "authors": [ { "first": "Annalina", "middle": [], "last": "Caputo", "suffix": "" }, { "first": "Pierpaolo", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Semeraro", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)", "volume": "2", "issue": "", "pages": "7--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annalina Caputo, Pierpaolo Basile, and Giovanni Semeraro. 2012. Uniba: Distributional semantics for textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 591-596, Montr\u00e9al, Canada, 7-8", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Computing semantic relatedness using Wikipedia-based explicit semantic analysis", "authors": [ { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th international joint conference on artificial intelligence", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. 
In Proceedings of the 20th international joint conference on artificial intelligence, volume 6, page 12.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Corpus-based and knowledge-based measures of text semantic similarity", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Corley", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the national conference on artificial intelligence", "volume": "21", "issue": "", "pages": "775--780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the national conference on artificial intelligence, volume 21, pages 775-780. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Vector-based models of semantic composition", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "236--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Kathleen McKeown, Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, editors, ACL 2008, Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, June 15-20, 2008, Columbus, Ohio, USA, pages 236-244.
The Association for Computer Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "MaltParser: A language-independent system for data-driven dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Atanas", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "G\u00fclsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "95--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G\u00fclsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using information content to evaluate semantic similarity", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity.
In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 448-453.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems", "authors": [ { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" } ], "year": 1990, "venue": "Artificial Intelligence", "volume": "46", "issue": "1-2", "pages": "159--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Smolensky. 1990. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46(1-2):159-216, November.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Mining the web for synonyms: PMI-IR versus LSA on TOEFL", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Twelfth European Conference on Machine Learning (ECML-2001)", "volume": "", "issue": "", "pages": "491--502", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney. 2001. Mining the web for synonyms: PMI-IR versus LSA on TOEFL.
In Proceedings of the Twelfth European Conference on Machine Learning (ECML-2001), pages 491-502.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "TakeLab: Systems for measuring semantic text similarity", "authors": [ { "first": "Frane", "middle": [], "last": "\u0160ari\u0107", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160najder", "suffix": "" }, { "first": "Bojana Dalbelo", "middle": [], "last": "Ba\u0161i\u0107", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "441--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frane \u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan \u0160najder, and Bojana Dalbelo Ba\u0161i\u0107. 2012. TakeLab: Systems for measuring semantic text similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 441-448, Montr\u00e9al, Canada, 7-8 June. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semantic Vectors: A Scalable Open Source Package and Online Technology Management Application", "authors": [ { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Ferraro", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC2008)", "volume": "", "issue": "", "pages": "1183--1190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominic Widdows and Kathleen Ferraro. 2008. Semantic Vectors: A Scalable Open Source Package and Online Technology Management Application.
In Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, and Daniel Tapias, editors, Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC2008), pages 1183-1190, Marrakech, Morocco. European Language Resources Association (ELRA).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Stacked generalization", "authors": [ { "first": "David", "middle": ["H."], "last": "Wolpert", "suffix": "" } ], "year": 1992, "venue": "Neural Networks", "volume": "5", "issue": "2", "pages": "241--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "David H. Wolpert. 1992. Stacked generalization. Neural Networks, 5(2):241-259, February.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Stacking algorithm", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Two-step machine learning algorithm", "uris": null, "type_str": "figure", "num": null }, "TABREF2": { "content":