{
"paper_id": "S14-2013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:33:24.173633Z"
},
"title": "ASAP: Automatic Semantic Alignment for Phrases",
"authors": [
{
"first": "Ana",
"middle": [
"O"
],
"last": "Alves",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CISUC -University of Coimbra and Polytechnic Institute of Coimbra",
"location": {}
},
"email": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Ferrugento",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CISUC -University of Coimbra",
"location": {}
},
"email": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Louren\u00e7o",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CISUC -University of Coimbra",
"location": {}
},
"email": "mrlouren@student.dei.uc.pt"
},
{
"first": "Filipe",
"middle": [],
"last": "Rodrigues",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CISUC -University of Coimbra",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we describe the ASAP system (Automatic Semantic Alignment for Phrases) 1 , which participated in Task 1 at the SemEval-2014 contest (Marelli et al., 2014a). Our assumption is that STS (Semantic Text Similarity) follows a function that considers lexical, syntactic, semantic and distributional features. We demonstrate the learning process of this function, without any deep preprocessing, achieving an acceptable correlation.",
"pdf_parse": {
"paper_id": "S14-2013",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we describe the ASAP system (Automatic Semantic Alignment for Phrases) 1 , which participated in Task 1 at the SemEval-2014 contest (Marelli et al., 2014a). Our assumption is that STS (Semantic Text Similarity) follows a function that considers lexical, syntactic, semantic and distributional features. We demonstrate the learning process of this function, without any deep preprocessing, achieving an acceptable correlation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Evaluation of compositional semantic models on full sentences through semantic relatedness and textual entailment, the title of this task at SemEval, aims to collect systems and approaches able to predict the difference of meaning between phrases and sentences based on their included words (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Mitchell and Lapata, 2010; Socher et al., 2012) .",
"cite_spans": [
{
"start": 291,
"end": 320,
"text": "(Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 321,
"end": 354,
"text": "Grefenstette and Sadrzadeh, 2011;",
"ref_id": "BIBREF5"
},
{
"start": 355,
"end": 381,
"text": "Mitchell and Lapata, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 382,
"end": 402,
"text": "Socher et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is in the use of complementary features in order to learn the STS function, a part of this challenge. Rather than specifying rules, constraints and lexicons manually, we advocate a system for automatically acquiring linguistic knowledge using machine learning (ML) methods. For this, we apply some preprocessing techniques over the training set in order to extract different types of features. Regarding the semantic aspect, we make use of known semantic relatedness and similarity measures on WordNet, in this case applied to assess the relatedness/similarity between phrases from sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considering the problem of modeling a text corpus to find short descriptions of documents, we aim at efficient processing of large collections while preserving the essential statistical relationships that are useful for, in this case, similarity judgment. Therefore, we also apply topic modeling in order to obtain the topic distribution over each sentence set. These features are then used to feed an ensemble algorithm to learn the STS function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "WordNet (Miller, 1995) is a computational lexicon of English created and maintained at Princeton University. It encodes concepts in terms of sets of synonyms (called synsets). A synset can be seen as a set of word senses all expressing the same meaning. Each word sense uniquely identifies a single synset. For instance, car#n#1 follows the WordNet notation word#p#n, where p denotes the part-of-speech tag and n the word's sense identifier, respectively. In this case, the corresponding synset car#n#1, auto#n#1, automobile#n#1, machine#n#6, motorcar#n#1 is uniquely determined. Words are not always ambiguous: a word w#p is said to be monosemous when it can convey only one meaning. Alternatively, w#p is polysemous if it can convey more meanings, each one represented by a sense number s in w#p#s. For each synset, WordNet provides the following information: a gloss, that is, a textual definition of the synset; and semantic relations, which connect pairs of synsets. In this context we focus our attention on the Hypernym/Hyponym relation, which refers to inheritance between nouns, also known as an is-a, or kind-of, relation, and its inverse. Y is a hypernym of X if every X is a (kind of) Y (motor vehicle#n#1 is a hypernym of car#n#1 and, conversely, car#n#1 is a hyponym of motor vehicle#n#1).",
"cite_spans": [
{
"start": 8,
"end": 22,
"text": "(Miller, 1995)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet",
"sec_num": "2.1"
},
{
"text": "There are mainly two approaches to semantic similarity. The first approach makes use of a large corpus and gathers statistical data from this corpus to estimate a score of semantic similarity. The second approach makes use of the relations and the entries of a thesaurus (Lesk, 1986) , which is generally a hand-crafted lexical database such as WordNet (Banerjee and Pedersen, 2003) . Hybrid approaches combine both methods (Jiang and Conrath, 1997) . Semantic similarity can be seen as a measure different from semantic relatedness, since the former computes the proximity between concepts in a given concept hierarchy (e.g. car#n#1 is similar to motorcycle#n), while the latter measures the common use of both concepts together (e.g. car#n#1 is related to tire#n).",
"cite_spans": [
{
"start": 271,
"end": 283,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF8"
},
{
"start": 353,
"end": 382,
"text": "(Banerjee and Pedersen, 2003)",
"ref_id": "BIBREF1"
},
{
"start": 424,
"end": 449,
"text": "(Jiang and Conrath, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": "2.2"
},
{
"text": "The Lesk algorithm (Lesk, 1986) uses dictionary definitions (glosses) to disambiguate a polysemous word in a sentence context. The main objective of his approach is to count the number of words that are shared between two glosses; however, dictionary glosses are often quite brief and may not include sufficient vocabulary to identify related senses. In this sense, Banerjee and Pedersen (Banerjee and Pedersen, 2003) adapted this algorithm to use WordNet as the dictionary for the word definitions and extended this metric to use the rich network of relationships between concepts present in WordNet.",
"cite_spans": [
{
"start": 19,
"end": 31,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF8"
},
{
"start": 388,
"end": 417,
"text": "(Banerjee and Pedersen, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": "2.2"
},
{
"text": "The Jiang and Conrath similarity measure (Jiang and Conrath, 1997) computes the information shared between two concepts. The shared information is determined by the information content of the most specific subsumer of the two concepts in the hierarchy. Furthermore, this measure combines the distance between this subsuming concept and the other two concepts, counting the edge-based distance from them in the WordNet Hypernym/Hyponym hierarchy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": "2.2"
},
{
"text": "Topic models are based upon the idea that documents are mixtures of topics, where a topic is a probability distribution over words. A topic model is a generative model for documents: it specifies a simple probabilistic procedure by which documents can be generated. To make a new document, one chooses a distribution over topics. Then, for each word in that document, one chooses a topic at random according to this distribution, and draws a word from that topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Modeling",
"sec_num": "2.3"
},
{
"text": "Latent Dirichlet allocation (LDA) is a generative probabilistic topic model of a corpus (Blei et al., 2003) . The basic idea is that documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over words. This process does not make any assumptions about the order of words as they appear in documents. The only information relevant to the model is the number of times words are produced. This is known as the bag-of-words assumption. The main variables of interest in the model are the topic-word distributions \u03a6 and the topic distributions \u03b8 for each document.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Modeling",
"sec_num": "2.3"
},
{
"text": "Our approach to STS is mainly founded on the idea of learning a regression function that computes that similarity using other variables/features as components. Before obtaining those features, sentences are preprocessed through known state-of-the-art Natural Language techniques. The resulting preprocessed sentences are then lexically, syntactically and semantically decomposed in order to obtain different partial similarities. These partial similarities are the features used in the supervised learning. These specific stages in our system are explained in detail in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Before computing partial similarities considering different properties of sentences, we need to apply some known Natural Language techniques. For this purpose, we chose OpenNLP 2 , an open-source tool suite which contains a variety of Java-based NLP components. Our focus here is on three core NLP components: tokenization, POS tagging and chunking. Although OpenNLP also offers a stemmer for English, we adopted another implementation, self-contained in the specific framework used for Topic Modeling (detailed in section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Preprocessing",
"sec_num": "3.1"
},
{
"text": "OpenNLP is a homogeneous package based on a single machine learning approach, maximum entropy (ME) (Berger et al., 1996) . Each OpenNLP tool requires an ME model that contains statistics about the component's default features, combining diverse contextual information. OpenNLP offers the possibility of either creating custom models or using pre-built models created for different languages.",
"cite_spans": [
{
"start": 99,
"end": 120,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Preprocessing",
"sec_num": "3.1"
},
{
"text": "On one side, components can be trained and customizable models built for the language and/or domain under study. On the other, the availability of pre-trained models allows the immediate application of such tools to a new problem. We followed the second approach, since the sentences are common-sense, not about a specific domain, and are in English 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Preprocessing",
"sec_num": "3.1"
},
{
"text": "Features, sometimes called attributes, encode information from raw data that allows machine learning algorithms to estimate an unknown value. We focus on what we call light features, since they are computed in a completely automatic and unsupervised way, not requiring a specifically labeled dataset for this phase. Each feature is computed as a partial similarity metric, which will later feed the posterior regression analysis. This process is fully automated, with all features extracted using a pipeline from OpenNLP and other tools that will be introduced in the specific stage where they are used. For convenience and easier identification in the later machine learning process, we assign each feature an id in the form f #n, n \u2208 {1..65}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "3.2"
},
{
"text": "Some basic similarity metrics are used as features related exclusively to word forms. In this set we include: the number of negative words 4 in each sentence (f 1 and f 2, respectively); the absolute value of the difference of these counts (f 3 = |f 1 \u2212 f 2|); and the absolute value of the difference of overlapping words for each sentence pair (f 4..7) 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "OpenNLP tokenization, POS (Part-of-Speech) tagging 6 and text chunking applied in a pipeline fashion allows the identification of Noun Phrases (NPs), Verbal Phrases (VPs) and Prepositional Phrases (PPs) in sentences. Heuristically, these NPs are 3 OpenNLP offers, for the vast majority of components, at least one pre-trained model for this language. 4 The Snowball stop word list (Porter, 2001 ) was used and those words expressing negation were identified (such as: never, not, neither, no, nobody, aren't, isn't, don't, doesn't, hasn't, hadn't, haven't) 5 Thanks to the SemEval organizers in making available the python script which computes baselines compute overlap baseline.py which was applied using different setting for stop word removal, from 0 to 3. 6 As alternative models are available, the Maxent model with tag dictionary was used on this component. Available at http://opennlp.sourceforge.net/models-1.5/en-pos-maxent.bin further identified as subjects if they are at the beginning of sentences. This kind of shallow parser will be useful to identify the syntactic structure of sentences. Considering only this property, different features were computed as the absolute value of the difference of the number of NPs (f 8), VPs (f 9) and PPs (f 10) for each sentence pair.",
"cite_spans": [
{
"start": 351,
"end": 352,
"text": "4",
"ref_id": null
},
{
"start": 381,
"end": 394,
"text": "(Porter, 2001",
"ref_id": "BIBREF14"
},
{
"start": 557,
"end": 558,
"text": "5",
"ref_id": null
},
{
"start": 761,
"end": 762,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Features",
"sec_num": "3.2.2"
},
{
"text": "WordNet::Similarity (Pedersen et al., 2004 ) is a freely available software package for measuring the semantic similarity or relatedness between a pair of concepts (or word senses). At this stage we have, for each sentence, the subject identified as the first NP of the sentence.",
"cite_spans": [
{
"start": 20,
"end": 42,
"text": "(Pedersen et al., 2004",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Features",
"sec_num": "3.2.3"
},
{
"text": "This NP can be composed of a simple or compound noun, in a root form (lemma) or in an inflected form (plural) (e.g. electrics or economic electric cars). The WordNet::Similarity package also contains a lemmatizer, in the module WordNet::QueryData, which compares an inflected word form and returns all WordNet entries that can be the root form of this word. This search is made in all four morphological categories in WordNet (Adjectives, Adverbs, Nouns and Verbs); when the POS is indicated at the end of the queried word, the lemmatizer only searches that specific category (e.g. flies#n returns flies#n and fly#n, while flies returns more entries: flies#n, fly#n, fly#v). The lemmatizer is therefore successively applied over the Subjects found for each pair of sentences. Compound subjects are reduced from left to right until a head noun is found as a valid WordNet entry (e.g. the subject economic electric cars is reduced until the valid entry electric car, which is present in WordNet).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Features",
"sec_num": "3.2.3"
},
{
"text": "After all the subjects have been found and a valid WordNet entry has been matched, semantic similarity (f 11) (Jiang and Conrath, 1997) and semantic relatedness (f 12) (Lesk, 1986) are computed for each sentence pair. In the case where a word#n pair has multiple senses, the one that maximizes the partial similarity is selected.",
"cite_spans": [
{
"start": 168,
"end": 180,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Features",
"sec_num": "3.2.3"
},
{
"text": "The distribution of topics over documents (in our case, sentences) may contribute to modeling Distributional Semantics in texts since, in the way that the model is defined, there is no notion of mutual exclusivity that restricts words to be part of one topic only. This allows topic models to capture polysemy, where the same word has multiple meanings. In this sense we can see topics as natural word sense contexts, where words appear in different topics with distinct senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Features",
"sec_num": "3.3"
},
{
"text": "Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010 ) is a machine learning framework for Topic Modeling which includes several preprocessing techniques such as stop-word removal and TF-IDF. TF-IDF is a standard statistical method that combines the frequency of a term in a particular document with its inverse document frequency in general use (Salton and Buckley, 1988) . This score is high for rare terms that appear frequently in a document and are therefore more likely to be significant. In a pragmatic view, tf-idf(t, d) assigns to term t a weight in document d that is: highest when t occurs many times within a small number of documents; lower when the term occurs fewer times in a document, or occurs in many documents; lowest when the term occurs in virtually all documents.",
"cite_spans": [
{
"start": 7,
"end": 31,
"text": "(\u0158eh\u016f\u0159ek and Sojka, 2010",
"ref_id": null
},
{
"start": 325,
"end": 351,
"text": "(Salton and Buckley, 1988)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Features",
"sec_num": "3.3"
},
{
"text": "Gensim computes a distribution of 25 topics over sentences, without and with TF-IDF (f 13...37 and f 38...63, respectively). Each feature is the absolute value of the difference for topic",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Features",
"sec_num": "3.3"
},
{
"text": "i (i.e. topic[i] = |topic[i]_s1 \u2212 topic[i]_s2|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Features",
"sec_num": "3.3"
},
{
"text": "The Euclidean distance between the topic distributions of the sentence pairs in each case (without and with TF-IDF) was also considered as a feature (f 64 and f 65).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Features",
"sec_num": "3.3"
},
{
"text": "WEKA (Hall et al., 2009 ) is a large collection of state-of-the-art machine learning algorithms written in Java. WEKA contains tools for classification, regression, classifier ensembles, and others. Considering the developer version 3.7.11 7 we used the following experiment setup considering the 65 features previously computed for both sentence datasets (train and test) (Marelli et al., 2014b) .",
"cite_spans": [
{
"start": 5,
"end": 23,
"text": "(Hall et al., 2009",
"ref_id": "BIBREF6"
},
{
"start": 373,
"end": 396,
"text": "(Marelli et al., 2014b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.4"
},
{
"text": "One of four approaches is commonly adopted for building classifier ensembles, each one focusing on a different level of action. Approach A concerns the different ways of combining the results from the classifiers, but there is no evidence that this strategy is better than using different models (Approach B). At the feature level (Approach C), different feature subsets can be used for the classifiers, whether or not they use the same classification model. Finally, the data sets can be modified so that each classifier in the ensemble is trained on its own data set (Approach D).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.4"
},
{
"text": "Different methods for generating and combining models exist, like Stacking (Seewald, 2002) (Approach B). However, these combined models sometimes share the disadvantage of being difficult to analyse, since they can comprise dozens of individual classifiers. Stacking is used to combine different types of classifiers and it demands the use of another learning algorithm to predict which of the models would be the most reliable for each case. This combination is done using a metalearner, another learning scheme that combines the output of the base learners. The base learners are generally called level-0 models, and the metalearner is a level-1 model. The predictions of the base learners are input to the metalearner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.4"
},
{
"text": "In WEKA, there is a meta classifier called \"Stacking\". We use this stacking ensemble combining two level-0 models: a K-Nearest Neighbour classifier (K = 1) (Aha et al., 1991) ; and a Linear Regression model without any attribute selection method (\u2212S 1) and the ridge parameter at its default (1.0E\u22128). The meta-classifier was M5P, which implements base routines for generating M5 Model trees and rules (Quinlan, 1992; Wang and Witten, 1997) .",
"cite_spans": [
{
"start": 156,
"end": 174,
"text": "(Aha et al., 1991)",
"ref_id": "BIBREF0"
},
{
"start": 402,
"end": 417,
"text": "(Quinlan, 1992;",
"ref_id": "BIBREF15"
},
{
"start": 418,
"end": 440,
"text": "Wang and Witten, 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning",
"sec_num": "3.4"
},
{
"text": "Our contribution is in the use of complementary features in order to learn the STS function, a part of the challenge of building Compositional Distributional Semantic Models. For this, we applied some preprocessing tasks over the sentence set in order to find lexical, syntactic, semantic and distributional features. On the semantic aspect, we made use of known semantic relatedness and similarity measures on WordNet, in this case applied to assess the relatedness/similarity between phrases from sentences. We also applied topic modeling in order to get topic distributions over sets of sentences. These features were then used to feed an ensemble learning algorithm in order to learn the STS function. This was achieved with a Pearson's r of 0.62780. One direction to follow is to find where the ensemble is failing and try to complement the feature set with more semantic features. Indeed, we plan to explore different topic distributions, varying the number of topics in order to maximize the log likelihood. We would also like to select the most relevant features from this set. After this first participation, we are motivated to continue improving the system proposed here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "http://opennlp.sourceforge.net",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cs.waikato.ac.nz/ml/weka/downloading.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Instance-based learning algorithms",
"authors": [
{
"first": "David",
"middle": [
"W"
],
"last": "Aha",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Kibler",
"suffix": ""
},
{
"first": "Marc",
"middle": [
"K"
],
"last": "Albert",
"suffix": ""
}
],
"year": 1991,
"venue": "Mach. Learn",
"volume": "6",
"issue": "1",
"pages": "37--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David W. Aha, Dennis Kibler, and Marc K. Albert. 1991. Instance-based learning algorithms. Mach. Learn., 6(1):37-66.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extended gloss overlaps as a measure of semantic relatedness",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI'03)",
"volume": "",
"issue": "",
"pages": "805--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Proceedings of the 18th International Joint Con- ference on Artificial Intelligence (IJCAI'03), pages 805-810, CA, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP'10)",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Represent- ing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Em- pirical Methods in Natural Language Processing (EMNLP'10), pages 1183-1193, PA, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Comput. Linguist",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. Comput. Linguist., 22(1):39-71.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Ma- chine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11)",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical composi- tional distributional model of meaning. In Proceed- ings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11), pages 1394-1404, PA, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The weka data mining software: An update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explor. Newsl",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. SIGKDD Explor. Newsl., 11(1):10-18.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "Jay",
"middle": [
"J"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "David",
"middle": [
"W"
],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the Int'l. Conf. on Research in Computational Linguistics",
"volume": "",
"issue": "",
"pages": "19--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical tax- onomy. In Proc. of the Int'l. Conf. on Research in Computational Linguistics, pages 19-33.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 5th Annual International Conference on Systems Documentation (SIGDOC '86)",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Sys- tems Documentation (SIGDOC '86), pages 24-26, NY, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zam- parelli. 2014a. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and tex- tual entailment. SemEval-2014.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A sick cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Robertomode",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Robertomode Zamparelli. 2014b. A sick cure for the evaluation of compositional distributional semantic models. In Proceedings of LREC 2014.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "COMMUNICATIONS OF THE ACM",
"volume": "38",
"issue": "",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. COMMUNICATIONS OF THE ACM, 38:39-41.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Composition in distributional models of semantics",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "8",
"pages": "1388--1439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Sci- ence, 34(8):1388-1439.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Wordnet::similarity: Measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Demonstration Papers at HLT-NAACL 2004, HLT-NAACL-Demonstrations '04",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Miche- lizzi. 2004. Wordnet::similarity: Measuring the re- latedness of concepts. In Demonstration Papers at HLT-NAACL 2004, HLT-NAACL-Demonstrations '04, pages 38-41, PA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Snowball: A language for stemming algorithms",
"authors": [
{
"first": "Martin",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin F. Porter. 2001. Snowball: A language for stemming algorithms. Published online.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Ross",
"middle": [
"J"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Workshop on New Challenges for NLP Frameworks (LREC 2010)",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross J. Quinlan. 1992. Learning with continuous classes. In 5th Australian Joint Conference on Ar- tificial Intelligence, pages 343-348, Singapore. Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Proceedings of the Workshop on New Chal- lenges for NLP Frameworks (LREC 2010), pages 45-50, Valletta, Malta.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Termweighting approaches in automatic text retrieval",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1988,
"venue": "Inf. Process. Manage",
"volume": "24",
"issue": "5",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton and Christopher Buckley. 1988. Term- weighting approaches in automatic text retrieval. Inf. Process. Manage., 24(5):513-523.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How to make stacking better and faster while also taking care of an unknown weakness",
"authors": [
{
"first": "Alexander",
"middle": [
"K"
],
"last": "Seewald",
"suffix": ""
}
],
"year": 2002,
"venue": "Nineteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "554--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander K. Seewald. 2002. How to make stacking better and faster while also taking care of an un- known weakness. In C. Sammut and A. Hoffmann, editors, Nineteenth International Conference on Ma- chine Learning, pages 554-561.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic compositionality through recursive matrix-vector spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL '12)",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Semantic com- positionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL '12), pages 1201-1211, PA, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Induction of model trees for predicting continuous classes",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 1997,
"venue": "Poster papers of the 9th European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Wang and Ian H. Witten. 1997. Induction of model trees for predicting continuous classes. In Poster papers of the 9th European Conference on Machine Learning.",
"links": null
}
},
"ref_entries": {}
}
}