{
"paper_id": "S14-2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:16.670579Z"
},
"title": "Blinov: Distributed Representations of Words for Aspect-Based Sentiment Analysis at SemEval 2014",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Blinov",
"suffix": "",
"affiliation": {},
"email": "blinoff.pavel@gmail.com"
},
{
"first": "Eugeny",
"middle": [],
"last": "Kotelnikov",
"suffix": "",
"affiliation": {},
"email": "kotelnikov.ev@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The article describes our system submitted to the SemEval-2014 task on Aspect-Based Sentiment Analysis. The methods based on distributed representations of words for the aspect term extraction and aspect term polarity detection tasks are presented. The methods for the aspect category detection and category polarity detection tasks are presented as well. Well-known skip-gram model for constructing the distributed representations is briefly described. The results of our methods are shown in comparison with the baseline and the best result.",
"pdf_parse": {
"paper_id": "S14-2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The article describes our system submitted to the SemEval-2014 task on Aspect-Based Sentiment Analysis. The methods based on distributed representations of words for the aspect term extraction and aspect term polarity detection tasks are presented. The methods for the aspect category detection and category polarity detection tasks are presented as well. Well-known skip-gram model for constructing the distributed representations is briefly described. The results of our methods are shown in comparison with the baseline and the best result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The sentiment analysis became an important Natural Language Processing (NLP) task in the recent few years. As many NLP tasks it's a challenging one. The sentiment analysis can be very helpful for some practical applications. For example, it allows to study the users' opinions about a product automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many research has been devoted to the general sentiment analysis (Pang et al., 2002) , (Amine et al., 2013) , (Blinov et al., 2013) or analysis of individual sentences (Yu and Hatzivassiloglou, 2003) , (Kim and Hovy, 2004) , (Wiebe and Riloff, 2005) . Soon it became clear that the sentiment analysis on the level of a whole text or even sentences is too coarse. Gen-eral sentiment analysis by its design is not capable to perform the detailed analysis of an expressed opinion. For example, it cannot correctly detect the opinion in the sentence \"Great food but the service was dreadful!\". The sentence carries opposite opinions on two facets of a restaurant. Therefore the more detailed version of the sentiment analysis is needed. Such a version is called the aspect-based sentiment analysis and it works on the level of the significant aspects of the target entity (Liu, 2012) .",
"cite_spans": [
{
"start": 65,
"end": 84,
"text": "(Pang et al., 2002)",
"ref_id": "BIBREF2"
},
{
"start": 87,
"end": 107,
"text": "(Amine et al., 2013)",
"ref_id": null
},
{
"start": 110,
"end": 131,
"text": "(Blinov et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 168,
"end": 199,
"text": "(Yu and Hatzivassiloglou, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 202,
"end": 222,
"text": "(Kim and Hovy, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 225,
"end": 249,
"text": "(Wiebe and Riloff, 2005)",
"ref_id": "BIBREF5"
},
{
"start": 868,
"end": 879,
"text": "(Liu, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aspect-based sentiment analysis includes two main subtasks: the aspect term extraction and its polarity detection (Liu, 2012) . In this article we describe the methods which address both subtasks. The methods are based on the distributed representations of words. Such word representations (or word embeddings) are useful in many NLP task, e.g. (Turian et al., 2009) , (Al-Rfou' et al., 2013) , (Turney, 2013) .",
"cite_spans": [
{
"start": 118,
"end": 129,
"text": "(Liu, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 349,
"end": 370,
"text": "(Turian et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 373,
"end": 396,
"text": "(Al-Rfou' et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 399,
"end": 413,
"text": "(Turney, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the article is as follows: section two gives the overview of the data; the third section shortly describes the distributed representations of words. The methods of the aspect term extraction and polarity detection are presented in the fourth and the fifth sections respectively. The conclusions are given in the sixth section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The organisers provided the train data for restaurant and laptop domains. But as it will be clear further our methods are heavily dependent on unlabelled text data. So we additionally collected the user reviews about restaurants from tripadviser.com and about laptops from amazon.com. General statistics of the data are shown in Table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Data",
"sec_num": "2"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http://creativecommons.org/licenses/by/4.0/ For all the data we performed tokenization, stemming and morphological analysis using the FreeLing library (Padr\u00f3 and Stanilovsky, 2012) .",
"cite_spans": [
{
"start": 319,
"end": 348,
"text": "(Padr\u00f3 and Stanilovsky, 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Data",
"sec_num": "2"
},
{
"text": "In this section we'll try to give the high level idea of the distributed representations of words. The more technical details can be found in (Mikolov et al., 2013) .",
"cite_spans": [
{
"start": 142,
"end": 164,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributed Representations of Words",
"sec_num": "3"
},
{
"text": "It is closely related with a new promising direction in machine learning called the deep learning. The core idea of the unsupervised deep learning algorithms is to find automatically the \"good\" set of features to represent the target object (text, image, audio signal, etc.). The object represented by the vector of real numbers is called the distributed representation (Rumelhart et al., 1986) . We used the skip-gram model (Mikolov et al., 2013) implemented in Gensim toolkit (\u0158eh\u016f\u0159ek and Sojka, 2010) .",
"cite_spans": [
{
"start": 370,
"end": 394,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF14"
},
{
"start": 425,
"end": 447,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 478,
"end": 503,
"text": "(\u0158eh\u016f\u0159ek and Sojka, 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributed Representations of Words",
"sec_num": "3"
},
{
"text": "In general the learning procedure is as follows. All the texts of the corpus are stuck together in a single sequence of sentences. On the basis of the corpus the lexicon is constructed. Next, the dimensionality of the vectors is chosen (we used 300 in our experiments). The greater number of dimensions allows to capture more language regularities but leads to more computational complexity of the learning. Each word from the lexicon is associated with the real numbers vector of the selected dimensionality. Originally all the vectors are randomly initialized. During the learning procedure the algorithm \"slides\" with the fixed size window (it's algorithm parameter that was retained by default -5 words) along the words of the sequence and calculates the probability (1) of context words appearance within the window based on its central word under review (or more precisely, its vector representation) (Mikolov et al., 2013) . The ultimate goal of the described process is to get such \"good\" vectors for each word, which allow to predict its probable context. All such vectors together form the vector space where semantically similar words are grouped.",
"cite_spans": [
{
"start": 907,
"end": 929,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributed Representations of Words",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf0e5 \uf03d \uf0a2 \uf0a2 \uf03d W w w T w w T w I O v v v v w w p I O 1 ) exp( ) exp( ) | ( ,",
"eq_num": "(1)"
}
],
"section": "Distributed Representations of Words",
"sec_num": "3"
},
{
"text": "We apply the same method for the aspect term extraction task (Pontiki et al., 2014) for both domains. The method consists of two steps: the candidate selection and the term extraction.",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Term Extraction Method",
"sec_num": "4"
},
{
"text": "First of all we collect some statistics about the terms in the train collection. We analysed two facets of the aspect terms: the number of words and their morphological structure. The information about the number of words in a term is shown in Table 2 . On the basis of that we've decided to process only single and two-word aspect terms. From the single terms we treat only singular (NN, e.g. staff, rice, texture, processor, ram, insult) and plural nouns (NNS, e.g. perks, bagels, times, dvds, buttons, pictures) as possible candidates, because they largely predominate among the one-word terms. All conjunctions of the form NN_NN (e.g. sea_bass, lotus_leaf, chicken_dish, battery_life, virus_protection, custom-er_disservice) and NN_NNS (e.g. sushi_places, menu_choices, seafood_lovers, usb_devices, re-covery_discs, software_works) were candidates for the two-word terms also because they are most common in two-word aspect terms.",
"cite_spans": [
{
"start": 627,
"end": 728,
"text": "NN_NN (e.g. sea_bass, lotus_leaf, chicken_dish, battery_life, virus_protection, custom-er_disservice)",
"ref_id": null
},
{
"start": 733,
"end": 835,
"text": "NN_NNS (e.g. sushi_places, menu_choices, seafood_lovers, usb_devices, re-covery_discs, software_works)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Candidate Selection",
"sec_num": "4.1"
},
{
"text": "The second step for the aspect term identification is the term extraction. As has already been told the space (see Section 3) specifies the word groups. Therefore the measure of similarity between the words (vectors) can be defined. For NLP tasks it is often the cosine similarity measure. The similarity between two vectors ) ,..., ( is given by (Manning et al., 2008) :",
"cite_spans": [
{
"start": 347,
"end": 369,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf0e5 \uf0e5 \uf0e5 \uf03d \uf03d \uf03d \uf0d7 \uf03d n i i n i i n i i i b a b a 1 2 1 2 1 ) cos(\uf071 ,",
"eq_num": "(2)"
}
],
"section": "Term Extraction",
"sec_num": "4.2"
},
{
"text": "where \uf071the angle between the vectors, nthe dimensionality of the space. In case of the restaurant domain the category and aspect terms are specified. For each category the seed of the aspect terms can be automatically selected: if only one category is assigned for a train sentence then all its terms belong to it. Within each set the average similarity between the terms (the threshold category) can be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "4.2"
},
{
"text": "For the new candidate the average similarities with the category's seeds are calculated. If it is greater than the threshold of any category than the candidate is marked as an aspect term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "4.2"
},
{
"text": "Also we've additionally applied some rules: \uf0b7 Join consecutive terms in a single term. \uf0b7 Join neutral adjective ahead the term (see Section 5.2 for clarification about the neutral adjective). \uf0b7 Join fragments matching the pattern: <an aspect term> of <an aspect term>. In case of the laptop domain there are no specified categories so we treated all terms as the terms belonging to one general category. And the same procedure with candidates was performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "4.2"
},
{
"text": "For the restaurant domain there was also the aspect category detection task (Pontiki et al., 2014) .",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category Detection",
"sec_num": "4.3"
},
{
"text": "Since each word is represented by a vector, each sentence can be cast to a single point as the average of its vectors. Further average point for each category can be found by means of the sentence points. Then for an unseen sentence the average point of its word vectors is calculated. The category is selected by calculating the distances between all category points and a new point and by choosing the minimum distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Detection",
"sec_num": "4.3"
},
{
"text": "The aspect term extraction and the aspect category detection tasks were evaluated with Precision, Recall and F-measure (Pontiki et al., 2014) . The F-measure was a primary metric for these tasks so we present only it.",
"cite_spans": [
{
"start": 119,
"end": 141,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "The result of our method ranked 19 out of 28 submissions (constrained and unconstrained) for the aspect term extraction task for the laptop domain and 17 out of 29 for the restaurant domain. For the category detection task (restaurant domain) the method ranked 9 out of 21. Table 3 shows the results of our method (Bold) for aspect term extraction task in comparison with the baseline (Pontiki et al., 2014) and the best result. Analogically the results for the aspect category detection task are presented in Table 4 . ",
"cite_spans": [
{
"start": 385,
"end": 407,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 510,
"end": 517,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Our polarity detection method also exploits the vector space (from Section 3) because the emotional similarity between words can be traced in it. As with the aspect term extraction method we follow two-stage approach: the candidate selection and the polarity detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarity Detection Method",
"sec_num": "5"
},
{
"text": "All adjectives and verbs are considered as the polarity term candidates. The amplifiers and the negations have an important role in the process of result polarity forming. In our method we took into account only negations because it strongly affects the word polarity. We've joined into one unit all text fragments that match the following pattern: not + <JJ | VB>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection",
"sec_num": "5.1"
},
{
"text": "At first we manually collected the small etalon sets of positive and negative words for each domain. Every set contained 15 words that clearly identify the sentiment. For example, for the positive polarity there were words such as: great, fast, attentive, yummy, etc. and for the negative polarity there were words like: terrible, ugly, not_work, offensive, etc. By measuring the average similarity for a candidate to the positive and the negative seed words we decided whether it is positive (+1) or negative (-1). Also we set up a neutral threshold and a candidate's polarity was treated as neutral (0) if it didn't exceed the threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Polarity Detection",
"sec_num": "5.2"
},
{
"text": "For each term (within the window of 6 words) we were looking for its closest polarity term candidate and sum up their polarities. For the final decision about the term's polarity there were some conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Polarity Detection",
"sec_num": "5.2"
},
{
"text": "\uf0b7 If sum > 0 then positive. \uf0b7 If sum < 0 then negative. \uf0b7 If sum == 0 and all polarity terms are neutral then neutral else conflict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Polarity Detection",
"sec_num": "5.2"
},
{
"text": "By analogy with the category detection method, using the train collection, we calculate the average polarity points for each category, i.e. there were 5\u00d74 such points (5 categories and 4 values of polarity). Then a sentence was cast to a point as the average of all its word-vectors. And closest polarity points for the specified categories defined the polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Polarity Detection",
"sec_num": "5.3"
},
{
"text": "The results of our method (Bold) for the polarity detection tasks are around the baseline results for the Accuracy measure (Tables 5, 6 ). However the test data is skewed to the positive class and for that case the Accuracy is a poor indicator. Because of that we also show macro Fmeasure results for our and baseline methods (Tables 7, 8). From that we can conclude that our method of the polarity detection more delicately deals with the minor represented classes than the baseline method.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 135,
"text": "(Tables 5, 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "In the article we presented the methods for two main subtasks for aspect-based sentiment analysis: the aspect term extraction and the polarity detection. The methods are based on the distributed representation of words and the notion of similarities between the words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For the aspect term extraction and category detection tasks we get satisfied results which are consistent with our cross-validation metrics. Unfortunately for the polarity detection tasks the result of our method by official metrics are low. But we showed that the proposed method is not so bad and is capable to deal with the skewed data better than the baseline method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "We would like to thank the organizers and the reviewers for their efforts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detecting Opinions in Tweets",
"authors": [],
"year": 2013,
"venue": "International Journal of Data Mining and Emerging Technologies",
"volume": "3",
"issue": "1",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdelmalek Amine, Reda Mohamed Hamou and Michel Simonet. 2013. Detecting Opinions in Tweets. International Journal of Data Mining and Emerging Technologies, 3(1):23-32.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning, Prabhakar Raghavan and Hin- rich Sch\u00fctze. 2008. Introduction to Information Re- trieval. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Thumbs up?: sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 79-86.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Research of lexical approach and machine learning methods for sentiment analysis",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Blinov",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Klekovkina",
"suffix": ""
},
{
"first": "Eugeny",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
},
{
"first": "Oleg",
"middle": [],
"last": "Pestov",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics and Intellectual Technologies",
"volume": "2",
"issue": "",
"pages": "48--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Blinov, Maria Klekovkina, Eugeny Kotelnikov and Oleg Pestov. 2013. Research of lexical ap- proach and machine learning methods for senti- ment analysis. Computational Linguistics and In- tellectual Technologies, 2(12):48-58.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Creating subjective and objective sentence classifiers from unannotated texts",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 6th International Conference on Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "486--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe and Ellen Riloff. 2005. Creating sub- jective and objective sentence classifiers from un- annotated texts. In Proceedings of the 6th Interna- tional Conference on Computational Linguistics and Intelligent Text Processing, pages 486-497.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A preliminary evaluation of word representations for named-entity recognition",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NIPS Workshop on Grammar Induction, Representation of Language and Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, Yoshua Bengio and Dan Roth. 2009. A preliminary evaluation of word rep- resentations for named-entity recognition. In Pro- ceedings of NIPS Workshop on Grammar Induc- tion, Representation of Language and Language Learning.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado and Jeffrey Dean. 2013. Distributed Represen- tations of Words and Phrases and their Composi- tionality. In Proceedings of NIPS, pages 3111- 3119.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "FreeLing 3.0: Towards Wider Multilinguality",
"authors": [
{
"first": "Llu\u00eds",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Stanilovsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Language Resources and Evaluation Conference, LREC 2012",
"volume": "",
"issue": "",
"pages": "2473--2479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Llu\u00eds Padr\u00f3 and Evgeny Stanilovsky. 2012. FreeLing 3.0: Towards Wider Multilinguality. In Proceed- ings of the Language Resources and Evaluation Conference, LREC 2012, pages 2473-2479.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Determining the sentiment of opinions",
"authors": [
{
"first": "Soo-Min",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. In Proceedings of the 20th International Conference on Computational Linguistics, COLING-2004.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2014 Task 4: Aspect Based Sentiment Analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitrios",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Harris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos and Suresh Manandhar. 2014. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval 2014, Dublin, Ireland.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributional semantics beyond words: Supervised learning of analogy and paraphrase",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "353--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney. 2013. Distributional semantics beyond words: Supervised learning of analogy and para- phrase. Transactions of the Association for Compu- tational Linguistics, 1:353-366.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "46--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 46-50.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Polyglot: Distributed Word Representations for Multilingual NLP",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou'",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou', Bryan Perozzi, Steven Skiena. 2013. Polyglot: Distributed Word Representations for Multilingual NLP. In Proceedings of Conference on Computational Natural Language Learning, CoNLL'2013.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning representations by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Rumelhart, Geoffrey Hintont, Ronald Wil- liams. 1986. Learning representations by back- propagating errors. Nature.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Yu and Vasileios Hatzivassiloglou. 2003. To- wards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Lan- guage Processing, pages 129-136.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "input and output vector representations of w; I w and O w are the current and predicted words, Wthe number of words in vocabulary.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "The amount of reviews.",
"type_str": "table",
"content": "<table><tr><td>Domain</td><td>The amount of reviews</td></tr><tr><td>Restaurants</td><td>652 055</td></tr><tr><td>Laptops</td><td>109 550</td></tr></table>"
},
"TABREF1": {
"html": null,
"num": null,
"text": "The statistics for the number of words in a term.",
"type_str": "table",
"content": "<table><tr><td/><td>Domain</td><td/></tr><tr><td>Aspect term</td><td colspan=\"2\">Restaurant, % Laptop, %</td></tr><tr><td>One-word</td><td>72.13</td><td>55.66</td></tr><tr><td>Two-word</td><td>19.05</td><td>32.87</td></tr><tr><td>Greater</td><td>8.82</td><td>11.47</td></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Aspect term extraction results</td></tr><tr><td>(F-measure).</td><td/><td/></tr><tr><td/><td>Laptop</td><td>Restaurant</td></tr><tr><td>Best</td><td>0.7455</td><td>0.8401</td></tr><tr><td>Blinov</td><td>0.5207</td><td>0.7121</td></tr><tr><td>Baseline</td><td>0.3564</td><td>0.4715</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">: Aspect category detection results</td></tr><tr><td>(F-measure).</td><td/></tr><tr><td/><td>Restaurant</td></tr><tr><td>Best</td><td>0.8858</td></tr><tr><td>Blinov</td><td>0.7527</td></tr><tr><td>Baseline</td><td>0.6389</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"text": "Aspect term polarity detection results (Accuracy).",
"type_str": "table",
"content": "<table><tr><td/><td>Laptop</td><td>Restaurant</td></tr><tr><td>Best</td><td>0.7049</td><td>0.8095</td></tr><tr><td>Blinov</td><td>0.5229</td><td>0.6358</td></tr><tr><td>Baseline</td><td>0.5107</td><td>0.6428</td></tr></table>"
},
"TABREF5": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">: Category polarity detection results</td></tr><tr><td>(Accuracy).</td><td/></tr><tr><td/><td>Restaurant</td></tr><tr><td>Best</td><td>0.8293</td></tr><tr><td>Blinov</td><td>0.6566</td></tr><tr><td>Baseline</td><td>0.6566</td></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Aspect term polarity detection results (F-measure).",
"type_str": "table",
"content": "<table><tr><td/><td>Laptop</td><td>Restaurant</td></tr><tr><td>Blinov</td><td>0.3738</td><td>0.4334</td></tr><tr><td>Baseline</td><td>0.2567</td><td>0.2989</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"text": "Category polarity detection results (F-measure).",
"type_str": "table",
"content": "<table><tr><td/><td>Restaurant</td></tr><tr><td>Blinov</td><td>0.5051</td></tr><tr><td>Baseline</td><td>0.3597</td></tr></table>"
}
}
}
}