| { |
| "paper_id": "P10-1042", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:20:21.842726Z" |
| }, |
| "title": "Sentiment Learning on Product Reviews via Sentiment Ontology Tree", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Norwegian University of Science", |
| "location": {} |
| }, |
| "email": "wwei@idi.ntnu.no" |
| }, |
| { |
| "first": "Jon", |
| "middle": [ |
| "Atle" |
| ], |
| "last": "Gulla", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Norwegian University of Science", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a humanlabeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products.", |
| "pdf_parse": { |
| "paper_id": "P10-1042", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a humanlabeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "As the internet reaches almost every corner of this world, more and more people write reviews and share opinions on the World Wide Web. The usergenerated opinion-rich reviews will not only help other users make better judgements but they are also useful resources for manufacturers of products to keep track and manage customer opinions. However, as the number of product reviews grows, it becomes difficult for a user to manually learn the panorama of an interesting topic from existing online information. Faced with this problem, research works, e.g., (Hu and Liu, 2004; Liu et al., 2005; Lu et al., 2009) , of sentiment analysis on product reviews were proposed and have become a popular research topic at the crossroads of information retrieval and computational linguistics.", |
| "cite_spans": [ |
| { |
| "start": 555, |
| "end": 573, |
| "text": "(Hu and Liu, 2004;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 574, |
| "end": 591, |
| "text": "Liu et al., 2005;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 592, |
| "end": 608, |
| "text": "Lu et al., 2009)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Carrying out sentiment analysis on product reviews is not a trivial task. Although there have already been a lot of publications investigating on similar issues, among which the representatives are (Turney, 2002; Dave et al., 2003; Hu and Liu, 2004; Liu et al., 2005; Popescu and Etzioni, 2005; Zhuang et al., 2006; Lu and Zhai, 2008; Titov and McDonald, 2008; Zhou and Chaovalit, 2008; Lu et al., 2009) , there is still room for improvement on tackling this problem. When we look into the details of each example of product reviews, we find that there are some intrinsic properties that existing previous works have not addressed in much detail.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 212, |
| "text": "(Turney, 2002;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 213, |
| "end": 231, |
| "text": "Dave et al., 2003;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 232, |
| "end": 249, |
| "text": "Hu and Liu, 2004;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 250, |
| "end": 267, |
| "text": "Liu et al., 2005;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 268, |
| "end": 294, |
| "text": "Popescu and Etzioni, 2005;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 295, |
| "end": 315, |
| "text": "Zhuang et al., 2006;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 316, |
| "end": 334, |
| "text": "Lu and Zhai, 2008;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 335, |
| "end": 360, |
| "text": "Titov and McDonald, 2008;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 361, |
| "end": 386, |
| "text": "Zhou and Chaovalit, 2008;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 387, |
| "end": 403, |
| "text": "Lu et al., 2009)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "First of all, product reviews constitute domainspecific knowledge. The product's attributes mentioned in reviews might have some relationships between each other. For example, for a digital camera, comments on image quality are usually mentioned. However, a sentence like \"40D handles noise very well up to ISO 800\", also refers to image quality of the camera 40D. Here we say \"noise\" is a sub-attribute factor of \"image quality\". We argue that the hierarchical relationship between a product's attributes can be useful knowledge if it can be formulated and utilized in product reviews analysis. Secondly, Vocabularies used in product reviews tend to be highly overlapping. Especially, for same attribute, usually same words or synonyms are involved to refer to them and to describe sentiment on them. We believe that labeling existing product reviews with attributes and corresponding sentiment forms an effective training resource to perform sentiment analysis. Thirdly, sentiments expressed in a review or even in a sentence might be opposite on different attributes and not every attributes mentioned are with sentiments. For example, it is common to find a fragment of a review as follows: Example 1: \"...I am very impressed with this camera except for its a bit heavy weight especially with Figure 1 : an example of part of a SOT for digital camera extra lenses attached. It has many buttons and two main dials. The first dial is thumb dial, located near shutter button. The second one is the big round dial located at the back of the camera...\" In this example, the first sentence gives positive comment on the camera as well as a complaint on its heavy weight. Even if the words \"lenses\" appears in the review, it is not fair to say the customer expresses any sentiment on lens. The second sentence and the rest introduce the camera's buttons and dials. It's also not feasible to try to get any sentiment from these contents. 
We argue that when performing sentiment analysis on reviews, such as in the Example 1, more attention is needed to distinguish between attributes that are mentioned with and without sentiment.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1297, |
| "end": 1305, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we study the problem of sentiment analysis on product reviews through a novel method, called the HL-SOT approach, namely Hierarchical Learning (HL) with Sentiment Ontology Tree (SOT). By sentiment analysis on product reviews we aim to fulfill two tasks, i.e., labeling a target text 1 with: 1) the product's attributes (attributes identification task), and 2) their corresponding sentiments mentioned therein (sentiment annotation task). The result of this kind of labeling process is quite useful because it makes it possible for a user to search reviews on particular attributes of a product. For example, when considering to buy a digital camera, a prospective user who cares more about image quality probably wants to find comments on the camera's image quality in other users' reviews. SOT is a tree-like ontology structure that formulates the relationships between a product's attributes. For example, Fig. 1 is a SOT for a digital camera 2 . The root node of the SOT is a camera itself. Each of the non-leaf nodes (white nodes) of the SOT represents an attribute of a camera 3 . All leaf nodes (gray nodes) of the SOT represent sentiment (positive/negative) nodes respectively associated with their parent nodes. A formal definition on SOT is presented in Section 3.1. With the proposed concept of SOT, we manage to formulate the two tasks of the sentiment analysis to be a hierarchical classification problem. We further propose a specific hierarchical learning algorithm, called HL-SOT algorithm, which is developed based on generalizing an online-learning algorithm H-RLS (Cesa-Bianchi et al., 2006) . The HL-SOT algorithm has the same property as the H-RLS algorithm that allows multiple-path labeling (input target text can be labeled with nodes belonging to more than one path in the SOT) and partial-path labeling (the input target text can be labeled with nodes belonging to a path that does not end on a leaf). 
This property makes the approach well suited for the situation where complicated sentiments on different attributes are expressed in one target text. Unlike the H-RLS algorithm , the HL-SOT algorithm enables each classifier to separately learn its own specific threshold. The proposed HL-SOT approach is empirically analyzed against a human-labeled data set. The experimental results demonstrate promising and reasonable performance of our approach.", |
| "cite_spans": [ |
| { |
| "start": 1597, |
| "end": 1624, |
| "text": "(Cesa-Bianchi et al., 2006)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 923, |
| "end": 929, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper makes the following contributions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 To the best of our knowledge, with the proposed concept of SOT, the proposed HL-SOT approach is the first work to formulate the tasks of sentiment analysis to be a hierarchical classification problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 A specific hierarchical learning algorithm is further proposed to achieve the tasks of sentiment analysis in one hierarchical classification process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 The proposed HL-SOT approach can be generalized to make it possible to perform sentiment analysis on target texts that are a mix of reviews of different products, whereas existing works mainly focus on analyzing reviews of only one type of product.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The remainder of the paper is organized as follows. In Section 2, we provide an overview of related work on sentiment analysis. Section 3 presents our work on sentiment analysis with HL-SOT approach. The empirical analysis and the results are presented in Section 4, followed by the conclusions, discussions, and future work in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task of sentiment analysis on product reviews was originally performed to extract overall sentiment from the target texts. However, in (Turney, 2002) , as the difficulty shown in the experiments, the whole sentiment of a document is not necessarily the sum of its parts. Then there came up with research works shifting focus from overall document sentiment to sentiment analysis based on product attributes (Hu and Liu, 2004; Popescu and Etzioni, 2005; Ding and Liu, 2007; Liu et al., 2005) . Document overall sentiment analysis is to summarize the overall sentiment in the document. Research works related to document overall sentiment analysis mainly rely on two finer levels sentiment annotation: word-level sentiment annotation and phrase-level sentiment annotation. The wordlevel sentiment annotation is to utilize the polarity annotation of words in each sentence and summarize the overall sentiment of each sentimentbearing word to infer the overall sentiment within the text (Hatzivassiloglou and Wiebe, 2000; Andreevskaia and Bergler, 2006; Esuli and Sebastiani, 2005; Esuli and Sebastiani, 2006; Hatzivassiloglou and McKeown, 1997; Kamps et al., 2004; Devitt and Ahmad, 2007; Yu and Hatzivassiloglou, 2003) . The phrase-level sentiment annotation focuses sentiment annotation on phrases not words with concerning that atomic units of expression is not individual words but rather appraisal groups (Whitelaw et al., 2005) . In (Wilson et al., 2005) , the concepts of prior polarity and contextual polarity were proposed. This paper presented a system that is able to automatically identify the contextual polarity for a large subset of sentiment expressions. In (Turney, 2002) , an unsupervised learning algorithm was proposed to classify reviews as recommended or not recommended by averaging sentiment annotation of phrases in reviews that contain adjectives or adverbs. 
However, the performances of these works are not good enough for sentiment analysis on product reviews, where sentiment on each attribute of a product could be so complicated that it is unable to be expressed by overall document sentiment.", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 153, |
| "text": "(Turney, 2002)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 411, |
| "end": 429, |
| "text": "(Hu and Liu, 2004;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 430, |
| "end": 456, |
| "text": "Popescu and Etzioni, 2005;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 457, |
| "end": 476, |
| "text": "Ding and Liu, 2007;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 477, |
| "end": 494, |
| "text": "Liu et al., 2005)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 987, |
| "end": 1021, |
| "text": "(Hatzivassiloglou and Wiebe, 2000;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1022, |
| "end": 1053, |
| "text": "Andreevskaia and Bergler, 2006;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1054, |
| "end": 1081, |
| "text": "Esuli and Sebastiani, 2005;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1082, |
| "end": 1109, |
| "text": "Esuli and Sebastiani, 2006;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1110, |
| "end": 1145, |
| "text": "Hatzivassiloglou and McKeown, 1997;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1146, |
| "end": 1165, |
| "text": "Kamps et al., 2004;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1166, |
| "end": 1189, |
| "text": "Devitt and Ahmad, 2007;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1190, |
| "end": 1220, |
| "text": "Yu and Hatzivassiloglou, 2003)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1411, |
| "end": 1434, |
| "text": "(Whitelaw et al., 2005)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1440, |
| "end": 1461, |
| "text": "(Wilson et al., 2005)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1675, |
| "end": 1689, |
| "text": "(Turney, 2002)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Attributes-based sentiment analysis is to analyze sentiment based on each attribute of a product. In (Hu and Liu, 2004) , mining product features was proposed together with sentiment polarity annotation for each opinion sentence. In that work, sentiment analysis was performed on product attributes level. In (Liu et al., 2005) , a system with framework for analyzing and comparing consumer opinions of competing products was proposed. The system made users be able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. In (Popescu and Etzioni, 2005 ), Popescu and Etzioni not only analyzed polarity of opinions regarding product features but also ranked opinions based on their strength. In , Liu et al. proposed Sentiment-PLSA that analyzed blog entries and viewed them as a document generated by a number of hidden sentiment factors. These sentiment factors may also be factors based on product attributes. In (Lu and Zhai, 2008) , Lu et al. proposed a semi-supervised topic models to solve the problem of opinion integration based on the topic of a product's attributes. The work in (Titov and McDonald, 2008) presented a multi-grain topic model for extracting the ratable attributes from product reviews. In (Lu et al., 2009) , the problem of rated attributes summary was studied with a goal of generating ratings for major aspects so that a user could gain different perspectives towards a target entity. All these research works concentrated on attribute-based sentiment analysis. However, the main difference with our work is that they did not sufficiently utilize the hierarchical relationships among a product attributes. 
Although a method of ontologysupported polarity mining, which also involved ontology to tackle the sentiment analysis problem, was proposed in (Zhou and Chaovalit, 2008) , that work studied polarity mining by machine learning techniques that still suffered from a problem of ignoring dependencies among attributes within an ontology's hierarchy. In the contrast, our work solves the sentiment analysis problem as a hierarchical classification problem that fully utilizes the hierarchy of the SOT during training and classification process.", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 119, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 309, |
| "end": 327, |
| "text": "(Liu et al., 2005)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 593, |
| "end": 619, |
| "text": "(Popescu and Etzioni, 2005", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 983, |
| "end": 1002, |
| "text": "(Lu and Zhai, 2008)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1157, |
| "end": 1183, |
| "text": "(Titov and McDonald, 2008)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1283, |
| "end": 1300, |
| "text": "(Lu et al., 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1845, |
| "end": 1871, |
| "text": "(Zhou and Chaovalit, 2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section, we first propose a formal definition on SOT. Then we formulate the HL-SOT approach. In this novel approach, tasks of sentiment analysis are to be achieved in a hierarchical classification process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The HL-SOT Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As we discussed in Section 1, the hierarchial relationships among a product's attributes might help improve the performance of attribute-based sentiment analysis. We propose to use a tree-like ontology structure SOT, i.e., Sentiment Ontology Tree, to formulate relationships among a product's attributes. Here,we give a formal definition on what a SOT is. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Ontology Tree", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "T (v, v + , v \u2212 , T ), where v denotes the root node representing an attribute of a product, v + and v \u2212 denote the two sentiment (positive/negative) leaf child nodes of v, and T denotes the set of sub-SOTs, each of which is itself a SOT (v \u2032 , v \u2032+ , v \u2032\u2212 , T \u2032 ) whose root node v \u2032 represents a sub-attribute of its parent attribute node.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1 [SOT] SOT is an abbreviation for Sentiment Ontology Tree that is a tree-like ontology structure", |
| "sec_num": null |
| }, |
| { |
| "text": "By the Definition 1, we define a root of a SOT to represent an attribute of a product. The SOT's two leaf child nodes are sentiment (positive/negative) nodes associated with the root attribute. The SOT recursively contains a set of sub-SOTs where each root of a sub-SOT is a non-leaf child node of the root of the SOT and represent a sub-attribute belonging to its parent attribute. This definition successfully describes the hierarchical relationships among all the attributes of a product. For example, in Fig. 1 the root node of the SOT for a digital camera is its general overview attribute. Comments on a digital camera's general overview attribute appearing in a review might be like \"this camera is great\". The \"camera\" SOT has two sentiment leaf child nodes as well as three non-leaf child nodes which are respectively root nodes of sub-SOTs for sub-attributes \"design and usability\", \"image quality\", and \"lens\". These sub-attributes SOTs recursively repeat until each node in the SOT does not have any more non-leaf child node, which means the corresponding attributes do not have any sub-attributes, e.g., the attribute node \"button\" in Fig. 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 508, |
| "end": 514, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1148, |
| "end": 1154, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Definition 1 [SOT] SOT is an abbreviation for Sentiment Ontology Tree that is a tree-like ontol-", |
| "sec_num": null |
| }, |
| { |
| "text": "In this subsection, we present the HL-SOT approach. With the defined SOT, the problem of sentiment analysis is able to be formulated to be a hierarchial classification problem. Then a specific hierarchical learning algorithm is further proposed to solve the formulated problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Analysis with SOT", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In the proposed HL-SOT approach, each target text is to be indexed by a unit-norm vector", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "x \u2208 X , X = R d . Let Y = {1, ..., N } denote the finite set of nodes in SOT. Let y = (y 1 , ..., y N ) \u2208 {0, 1} N be a label vector for a target text x, where for each i \u2208 Y , y i = 1 if x is labeled by the classifier of node i, and y i = 0 if x is not labeled by the classifier of node i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "A label vector y \u2208 {0, 1} N is said to respect SOT if and only if y satisfies \u2200i \u2208 Y , \u2200j \u2208 A(i) : if y i = 1 then y j = 1, where A(i) represents a set ancestor nodes of i, i.e.,A(i) = {x|ancestor(i, x)}. Let Y denote a set of label vectors that respect SOT. Then the tasks of sentiment analysis can be formulated to be the goal of a hierarchical classification that is to learn a function f : X \u2192 Y, that is able to label each target text x \u2208 X with classifier of each node and generating with x a label vector y \u2208 Y that respects SOT. The requirement of a generated label vector y \u2208 Y ensures that a target text is to be labeled with a node only if its parent attribute node is labeled with the target text. For example, in Fig. 1 a review is to be labeled with \"image quality +\" requires that the review should be successively labeled as related to \"camera\" and \"image quality\". This is reasonable and consistent with intuition, because if a review cannot be identified to be related to a camera, it is not safe to infer that the review is commenting a camera's image quality with positive sentiment.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 726, |
| "end": 732, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "The algorithm H-RLS studied in (Cesa-Bianchi et al., 2006 ) solved a similar hierarchical classification problem as we formulated above. However, the H-RLS algorithm was designed as an onlinelearning algorithm which is not suitable to be applied directly in our problem setting. Moreover, the algorithm H-RLS defined the same value as the threshold of each node classifier. We argue that if the threshold values could be learned separately for each classifiers, the performance of classification process would be improved. Therefore we propose a specific hierarchical learning algorithm, named HL-SOT algorithm, that is able to train each node classifier in a batch-learning setting and allows separately learning for the threshold of each node classifier.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 57, |
| "text": "(Cesa-Bianchi et al., 2006", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Defining the f function Let w 1 , ..., w N be weight vectors that define linear-threshold classifiers of each node in SOT. Let W = (w 1 , ..., w N ) \u22a4 be an N \u00d7 d matrix called weight matrix. Here we generalize the work in (Cesa-Bianchi et al., 2006) and define the hierarchical classification function f as:", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 250, |
| "text": "(Cesa-Bianchi et al., 2006)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u0177 = f (x) = g(W \u2022 x),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "where x \u2208 X ,\u0177 \u2208 Y. Let z = W \u2022 x. Then the function\u0177 = g(z) on an N -dimensional vector z defines: \u2200i = 1, ..., N :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "y i = B(z i \u2265 \u03b8 i ), if i is a root node in SOT or y j = 1 for j = P(i); y i = 0, otherwise; where P(i) is the parent node of i in SOT and B(S) is a boolean function which is 1 if and only if the statement S is true. The hierarchical classification function f is then parameterized by the weight matrix W = (w 1 , ..., w N ) \u22a4 and the threshold vector \u03b8 = (\u03b8 1 , ..., \u03b8 N ) \u22a4 . The hierarchical learning algorithm HL-SOT is proposed for learning the parameters W and \u03b8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Parameters Learning for f function Let D denote the training data set: D = {(r, l)|r \u2208 X , l \u2208 Y}. In the HL-SOT learning process, the weight matrix W is firstly initialized to be a 0 matrix, where each row vector w i is a 0 vector. The threshold vector is initialized to be a 0 vector. Each instance in the training set D goes into the training process. When a new instance r t is observed, each row vector w i,t of the weight matrix W t is updated by a regularized least squares estimator given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "w i,t = (I + S i,Q(i,t\u22121) S \u22a4 i,Q(i,t\u22121) + r t r \u22a4 t ) \u22121 \u00d7S i,Q(i,t\u22121) (l i,i 1 , l i,i 2 , ..., l i,i Q(i,t\u22121) ) \u22a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "where I is a d \u00d7 d identity matrix, Q(i, t\u22121) denotes the number of times the parent of node i observes a positive label before observing the instance r t , S i,Q(i,t\u22121) = [r i 1 , ..., r i Q(i,t\u22121) ] is a d \u00d7 Q(i, t\u22121) matrix whose columns are the instances r i 1 , ..., r i Q(i,t\u22121) , and (l i,i 1 , l i,i 2 , ..., l i,i Q(i,t\u22121) ) \u22a4 is a Q(i, t\u22121)-dimensional vector of the corresponding labels observed by node i. Formula 1 restricts the weight vector w i,t of classifier i to be updated only on the examples that are positive for its parent node. Then the label vector \u0177 rt is computed for the instance r t , before the real label vector l rt is observed. Then the current threshold vector \u03b8 t is updated by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b8 t+1 = \u03b8 t + \u03f5(\u0177 rt \u2212 l rt ),", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "where \u03f5 is a small positive real number that denotes a corrective step for correcting the current threshold vector \u03b8 t . To illustrate the idea behind the Formula 2, let y \u2032 t =\u0177 rt \u2212 l rt . Let y \u2032 i,t denote an element of the vector y \u2032 t . The Formula 2 correct the current threshold \u03b8 i,t for the classifier i in the following way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 If y \u2032 i,t = 0, it means the classifier i made a proper classification for the current instance r t . Then the current threshold \u03b8 i does not need to be adjusted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 If y \u2032 i,t = 1, it means the classifier i made an improper classification by mistakenly identifying the attribute i of the training instance r t that should have not been identified. This indicates the value of \u03b8 i is not big enough to serve as a threshold so that the attribute i in this case can be filtered out by the classifier i. Therefore, the current threshold \u03b8 i will be adjusted to be larger by \u03f5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 If y \u2032 i,t = \u22121, it means the classifier i made an improper classification by failing to identify the attribute i of the training instance r t that should have been identified. This indicates the value of \u03b8 i is not small enough to serve as a threshold so that the attribute i in this case Algorithm 1 Hierarchical Learning Algorithm HL-SOT INITIALIZATION: 1: Each vector w i,1 , i = 1, ..., N of weight matrix W 1 is set to be 0 vector 2: Threshold vector \u03b8 1 is set to be 0 vector BEGIN 3: for t = 1, ..., |D| do", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HL-SOT Algorithm", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Observe instance r t \u2208 X 5: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4:", |
| "sec_num": null |
| }, |
| { |
| "text": "for i = 1, ...N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4:", |
| "sec_num": null |
| }, |
| { |
| "text": "Compute\u0177 rt = f (r t ) = g(W t \u2022 r t ) 9:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4:", |
| "sec_num": null |
| }, |
| { |
| "text": "Observe label vector l rt \u2208 Y of the instance r t", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4:", |
| "sec_num": null |
| }, |
| { |
| "text": "Update threshold vector \u03b8 t by Formula 2 11: end for END can be recognized by the classifier i. Therefore, the current threshold \u03b8 i will be adjusted to be smaller by \u03f5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "10:", |
| "sec_num": null |
| }, |
| { |
| "text": "The hierarchial learning algorithm HL-SOT is presented as in Algorithm 1. The HL-SOT algorithm enables each classifier to have its own specific threshold value and allows this threshold value can be separately learned and corrected through the training process. It is not only a batchlearning setting of the H-RLS algorithm but also a generalization to the latter. If we set the algorithm HL-SOT's parameter \u03f5 to be 0, the HL-SOT becomes the H-RLS algorithm in a batch-learning setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "10:", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we conduct systematic experiments to perform empirical analysis on our proposed HL-SOT approach against a human-labeled data set. In order to encode each text in the data set by a d-dimensional vector x \u2208 R d , we first remove all the stop words and then select the top d frequency terms appearing in the data set to construct the index term space. Our experiments are intended to address the following questions:(1) whether utilizing the hierarchical relationships among labels help to improve the accuracy of the classification?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Empirical Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(2) whether the introduction of separately learning threshold for each classifier help to improve the accuracy of the classification? (3) how does the corrective step \u03f5 impact the performance of the proposed approach?(4)how does the dimensionality d of index terms space impact the proposed approach's computing efficiency and accuracy?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Empirical Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The data set contains 1446 snippets of customer reviews on digital cameras that are collected from a customer review website 4 . We manually construct a SOT for the product of digital cameras. The constructed SOT (e.g., Fig. 1 ) contains 105 nodes that include 35 non-leaf nodes representing attributes of the digital camera and 70 leaf nodes representing associated sentiments with attribute nodes. Then we label all the snippets with corresponding labels of nodes in the constructed SOT complying with the rule that a target text is to be labeled with a node only if its parent attribute node is labeled with the target text. We randomly divide the labeled data set into five folds so that each fold at least contains one example snippets labeled by each node in the SOT. For each experiment setting, we run 5 experiments to perform cross-fold evaluation by randomly picking three folds as the training set and the other two folds as the testing set. All the testing results are averages over 5 running of experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 220, |
| "end": 226, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set Preparation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Since the proposed HL-SOT approach is a hierarchical classification process, we use three classic loss functions for measuring classification performance. They are the One-error Loss (O-Loss) function, the Symmetric Loss (S-Loss) function, and the Hierarchical Loss (H-Loss) function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 One-error loss (O-Loss) function is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "L O (\u0177, l) = B(\u2203i :\u0177 i \u0338 = l i ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where\u0177 is the prediction label vector and l is the true label vector; B is the boolean function as defined in Section 3.2.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Symmetric loss (S-Loss) function is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "L S (\u0177, l) = N \u2211 i=1 B(\u0177 i \u0338 = l i ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Hierarchical loss (H-Loss) function is defined as: Unlike the O-Loss function and the S-Loss function, the H-Loss function captures the intuition that loss should only be charged on a node whenever a classification mistake is made on a node of SOT but no more should be charged for any additional mistake occurring in the subtree of that node. It measures the discrepancy between the prediction labels and the true labels with consideration on the SOT structure defined over the labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "L H (\u0177, l) = N \u2211 i=1 B(\u0177 i \u0338 = l i \u2227 \u2200j \u2208 A(i),\u0177 j = l j ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In our experiments, the recorded loss function values for each experiment running are computed by averaging the loss function values of each testing snippets in the testing set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In order to answer the questions (1), (2) in the beginning of this section, we compare our HL-SOT approach with the following two baseline approaches:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Comparison", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 HL-flat: The HL-flat approach involves an algorithm that is a \"flat\" version of HL-SOT algorithm by ignoring the hierarchical relationships among labels when each classifier is trained. In the training process of HL-flat, the algorithm reflexes the restriction in the HL-SOT algorithm that requires the weight vector w i,t of the classifier i is only updated on the examples that are positive for its parent node.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Comparison", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 H-RLS: The H-RLS approach is implemented by applying the H-RLS algorithm studied in (Cesa-Bianchi et al., 2006) . Unlike our proposed HL-SOT algorithm that enables the threshold values to be learned separately for each classifiers in the training process, the H-RLS algorithm only uses an identical threshold values for each classifiers in the classification process.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 113, |
| "text": "(Cesa-Bianchi et al., 2006)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Comparison", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Experiments are conducted on the performance comparison between the proposed HL-SOT approach with HL-flat approach and the H-RLS approach. The dimensionality d of the index term space is set to be 110 and 220. The corrective step \u03f5 is set to be 0.005. The experimental results are summarized in Table 1 . From Table 1 , we can observe that the HL-SOT approach generally beats the H-RLS approach and HL-flat approach on O-Loss, S-Loss, and H-Loss respectively. The H-RLS performs worse than the HL-flat and the HL-SOT, which indicates that the introduction of separately learning threshold for each classifier did improve the accuracy of the classification. The HL-SOT approach performs better than the HL-flat, which demonstrates the effectiveness of utilizing the hierarchical relationships among labels.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 295, |
| "end": 302, |
| "text": "Table 1", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 310, |
| "end": 317, |
| "text": "Table 1", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance Comparison", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The parameter \u03f5 in the proposed HL-SOT approach controls the corrective step of the classifiers' thresholds when any mistake is observed in the training process. If the corrective step \u03f5 is set too large, it might cause the algorithm to be too sensitive to each observed mistake. On the contrary, if the corrective step is set too small, it might cause the algorithm not sensitive enough to the observed mistakes. Hence, the corrective step \u03f5 is a factor that might impact the performance of the proposed approach. Fig. 2 demonstrates the impact of \u03f5 on O-Loss, S-Loss, and H-Loss. The dimensionality of index term space d is set to be 110 and 220. The value of \u03f5 is set to vary from 0.001 to 0.1 with each step of 0.001. Fig. 2 shows that the parameter \u03f5 impacts the classification performance significantly. As the value of \u03f5 increase, the O-Loss, S-Loss, and H-Loss generally increase (performance decrease). In Fig. 2c it is obviously detected that the H-Loss decreases a little (performance increase) at first before it increases (performance decrease) with further increase of the value of \u03f5. This indicates that a finer-grained value of \u03f5 will not necessarily result in a better performance on the H-loss. However, a fine-grained corrective step generally makes a better performance than a coarse-grained corrective step.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 515, |
| "end": 521, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 722, |
| "end": 728, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 915, |
| "end": 922, |
| "text": "Fig. 2c", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Impact of Corrective Step \u03f5", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In the proposed HL-SOT approach, the dimensionality d of the index term space controls the number of terms to be indexed. If d is set too small, important useful terms will be missed that will limit the performance of the approach. However, if d is set too large, the computing efficiency will be decreased. Fig. 3 shows the impacts of the parameter d respectively on O-Loss, S-Loss, and H-Loss, where d varies from 50 to 300 with each step of 10 and the \u03f5 is set to be 0.005. From Fig. 3 , we observe that as the d increases the O-Loss, S-Loss, and H-Loss generally decrease (performance increase). This means that when more terms are indexed better performance can be achieved by the HL-SOT approach. However, Fig. 4 shows that the computational complexity of our approach is non-linear increased with d's growing, which indicates that indexing more terms will improve the accuracy of our proposed approach although this is paid by decreasing the computing efficiency.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 308, |
| "end": 314, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 477, |
| "end": 488, |
| "text": "From Fig. 3", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 712, |
| "end": 718, |
| "text": "Fig. 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Impact of Dimensionality d of Index Term Space", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "In this paper, we propose a novel and effective approach to sentiment analysis on product reviews. In our proposed HL-SOT approach, we define SOT to formulate the knowledge of hierarchical relationships among a product's attributes and tackle the problem of sentiment analysis in a hierarchical classification process with the proposed algorithm. The empirical analysis on a humanlabeled data set demonstrates the promising results of our proposed approach. The performance comparison shows that the proposed HL-SOT approach outperforms two baselines: the HL-flat and the H-RLS approach. This confirms two intuitive motivations based on which our approach is proposed: 1) separately learning threshold values for each classifier improve the classification accuracy; 2) knowledge of hierarchical relationships of labels improve the approach's performance. The experiments on analyzing the impact of parameter \u03f5 indicate that a fine-grained corrective step generally makes a better performance than a coarsegrained corrective step. The experiments on analyzing the impact of the dimensionality d show that indexing more terms will improve the accuracy of our proposed approach while the computing efficiency will be greatly decreased.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions, Discussions and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The focus of this paper is on analyzing review texts of one product. However, the framework of our proposed approach can be generalized to deal with a mix of review texts of more than one products. In this generalization for sentiment analysis on multiple products reviews, a \"big\" SOT is constructed and the SOT for each product reviews is a sub-tree of the \"big\" SOT. The sentiment analysis on multiple products reviews can be performed the same way the HL-SOT approach is applied on single product reviews and can be tackled in a hierarchical classification process with the \"big\" SOT.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions, Discussions and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This paper is motivated by the fact that the relationships among a product's attributes could be a useful knowledge for mining product review texts. The SOT is defined to formulate this knowledge in the proposed approach. However, what attributes to be included in a product's SOT and how to structure these attributes in the SOT is an effort of human beings. The sizes and structures of SOTs constructed by different individuals may vary. How the classification performance will be affected by variances of the generated SOTs is worthy of study. In addition, an automatic method to learn a product's attributes and the structure of SOT from existing product review texts will greatly benefit the efficiency of the proposed approach. We plan to investigate on these issues in our future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions, Discussions and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Each product review to be analyzed is called target text in the following of this paper.2 Due to the space limitation, not all attributes of a digital camera are enumerated in this SOT; m+/m-means posi-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A product itself can be treated as an overall attribute of the product.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.consumerreview.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank the anonymous reviewers for many helpful comments on the manuscript. This work is funded by the Research Council of Norway under the VERDIKT research programme (Project No.: 183337).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Mining wordnet for a fuzzy sentiment: Sentiment tag extraction from wordnet glosses", |
| "authors": [ |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Andreevskaia", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Bergler", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL'06)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alina Andreevskaia and Sabine Bergler. 2006. Min- ing wordnet for a fuzzy sentiment: Sentiment tag extraction from wordnet glosses. In Proceedings of 11th Conference of the European Chapter of the As- sociation for Computational Linguistics (EACL'06), Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Incremental algorithms for hierarchical classification", |
| "authors": [ |
| { |
| "first": "Nicol\u00f2", |
| "middle": [], |
| "last": "Cesa-Bianchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudio", |
| "middle": [], |
| "last": "Gentile", |
| "suffix": "" |
| }, |
| { |
| "first": "Luca", |
| "middle": [], |
| "last": "Zaniboni", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "7", |
| "issue": "", |
| "pages": "31--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicol\u00f2 Cesa-Bianchi, Claudio Gentile, and Luca Zani- boni. 2006. Incremental algorithms for hierarchi- cal classification. Journal of Machine Learning Re- search (JMLR), 7:31-54.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Mining the peanut gallery: opinion extraction and semantic classification of product reviews", |
| "authors": [ |
| { |
| "first": "Kushal", |
| "middle": [], |
| "last": "Dave", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Pennock", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of 12nd International World Wide Web Conference (WWW'03)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: opinion extraction and semantic classification of product reviews. In Proceedings of 12nd International World Wide Web Conference (WWW'03), Budapest, Hungary.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Sentiment polarity identification in financial news: A cohesionbased approach", |
| "authors": [ |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Devitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Khurshid", |
| "middle": [], |
| "last": "Ahmad", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of 45th Annual Meeting of the Association for Computational Linguistics (ACL'07)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ann Devitt and Khurshid Ahmad. 2007. Sentiment polarity identification in financial news: A cohesion- based approach. In Proceedings of 45th Annual Meeting of the Association for Computational Lin- guistics (ACL'07), Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The utility of linguistic rules in opinion mining", |
| "authors": [ |
| { |
| "first": "Xiaowen", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of 30th Annual International ACM Special Interest Group on Information Retrieval Conference (SI-GIR'07)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaowen Ding and Bing Liu. 2007. The utility of linguistic rules in opinion mining. In Proceedings of 30th Annual International ACM Special Inter- est Group on Information Retrieval Conference (SI- GIR'07), Amsterdam, The Netherlands.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Determining the semantic orientation of terms through gloss classification", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of 14th ACM Conference on Information and Knowledge Management (CIKM'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2005. Deter- mining the semantic orientation of terms through gloss classification. In Proceedings of 14th ACM Conference on Information and Knowledge Man- agement (CIKM'05), Bremen, Germany.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Sentiwordnet: A publicly available lexical resource for opinion mining", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 5th International Conference on Language Resources and Evaluation (LREC'06)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006. Senti- wordnet: A publicly available lexical resource for opinion mining. In Proceedings of 5th International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Predicting the semantic orientation of adjectives", |
| "authors": [ |
| { |
| "first": "Vasileios", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of 35th Annual Meeting of the Association for Computational Linguistics (ACL'97)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of ad- jectives. In Proceedings of 35th Annual Meeting of the Association for Computational Linguistics (ACL'97), Madrid, Spain.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Effects of adjective orientation and gradability on sentence subjectivity", |
| "authors": [ |
| { |
| "first": "Vasileios", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [ |
| "M" |
| ], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of 18th International Conference on Computational Linguistics (COLING'00)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasileios Hatzivassiloglou and Janyce M. Wiebe. 2000. Effects of adjective orientation and grad- ability on sentence subjectivity. In Proceedings of 18th International Conference on Computational Linguistics (COLING'00), Saarbr\u00fcken, Germany.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of 10th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'04)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and sum- marizing customer reviews. In Proceedings of 10th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'04), Seattle, USA.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Using WordNet to measure semantic orientation of adjectives", |
| "authors": [ |
| { |
| "first": "Jaap", |
| "middle": [], |
| "last": "Kamps", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "Ort" |
| ], |
| "last": "Marx", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Mokken", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of 4th International Conference on Language Resources and Evaluation (LREC'04)", |
| "volume": "", |
| "issue": "", |
| "pages": "Por-- tugal", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jaap Kamps, Maarten Marx, R. ort. Mokken, and Maarten de Rijke. 2004. Using WordNet to mea- sure semantic orientation of adjectives. In Proceed- ings of 4th International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Por- tugal.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Opinion observer: analyzing and comparing opinions on the web", |
| "authors": [ |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Junsheng", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of 14th International World Wide Web Conference (WWW'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opin- ions on the web. In Proceedings of 14th Inter- national World Wide Web Conference (WWW'05), Chiba, Japan.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "ARSA: a sentiment-aware model for predicting sales performance using blogs", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangji", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Aijun", |
| "middle": [], |
| "last": "An", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaohui", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 30th Annual International ACM Special Interest Group on Information Retrieval Conference (SI-GIR'07)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Liu, Xiangji Huang, Aijun An, and Xiaohui Yu. 2007. ARSA: a sentiment-aware model for predict- ing sales performance using blogs. In Proceedings of the 30th Annual International ACM Special Inter- est Group on Information Retrieval Conference (SI- GIR'07), Amsterdam, The Netherlands.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Opinion integration through semi-supervised topic modeling", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chengxiang", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of 17th International World Wide Web Conference (WWW'08)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Lu and Chengxiang Zhai. 2008. Opinion inte- gration through semi-supervised topic modeling. In Proceedings of 17th International World Wide Web Conference (WWW'08), Beijing, China.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Rated aspect summarization of short comments", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chengxiang", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Neel", |
| "middle": [], |
| "last": "Sundaresan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of 18th International World Wide Web Conference (WWW'09)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short com- ments. In Proceedings of 18th International World Wide Web Conference (WWW'09), Madrid, Spain.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Extracting product features and opinions from reviews", |
| "authors": [ |
| { |
| "first": "Ana-Maria", |
| "middle": [], |
| "last": "Popescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP'05), Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Modeling online reviews with multi-grain topic models", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of 17th International World Wide Web Conference (WWW'08)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivan Titov and Ryan T. McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of 17th International World Wide Web Conference (WWW'08), Beijing, China.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics (ACL'02)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics (ACL'02), Philadelphia, USA.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Using appraisal taxonomies for sentiment analysis", |
| "authors": [ |
| { |
| "first": "Casey", |
| "middle": [], |
| "last": "Whitelaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Navendu", |
| "middle": [], |
| "last": "Garg", |
| "suffix": "" |
| }, |
| { |
| "first": "Shlomo", |
| "middle": [], |
| "last": "Argamon", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of 14th ACM Conference on Information and Knowledge Management (CIKM'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Casey Whitelaw, Navendu Garg, and Shlomo Argamon. 2005. Using appraisal taxonomies for sentiment analysis. In Proceedings of 14th ACM Conference on Information and Knowledge Management (CIKM'05), Bremen, Germany.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Recognizing contextual polarity in phrase-level sentiment analysis", |
| "authors": [ |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP'05), Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences", |
| "authors": [ |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Vasileios", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of 8th Conference on Empirical Methods in Natural Language Processing (EMNLP'03)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of 8th Conference on Empirical Methods in Natural Language Processing (EMNLP'03), Sapporo, Japan.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Ontology-supported polarity mining", |
| "authors": [ |
| { |
| "first": "Lina", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Pimwadee", |
| "middle": [], |
| "last": "Chaovalit", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of the American Society for Information Science and Technology", |
| "volume": "59", |
| "issue": "1", |
| "pages": "98--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lina Zhou and Pimwadee Chaovalit. 2008. Ontology-supported polarity mining. Journal of the American Society for Information Science and Technology (JASIST), 59(1):98-110.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Movie review mining and summarization", |
| "authors": [ |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Zhuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Feng", |
| "middle": [], |
| "last": "Jing", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiao-Yan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 15th ACM International Conference on Information and Knowledge Management (CIKM'06)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management (CIKM'06), Arlington, USA.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Impact of Corrective Step \u03f5" |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Impact of Dimensionality d of Index Term Space (\u03f5 = 0.005)" |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Time Consumption Impacted by d" |
| }, |
| "TABREF1": { |
| "html": null, |
| "type_str": "table", |
| "text": "v is the root node of T which represents an attribute of a given product. v + is a positive sentiment leaf node associated with the attribute v. v \u2212 is a negative sentiment leaf node associated with the attribute v. T is a set of subtrees. Each element of T is also a SOT T \u2032", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF3": { |
| "html": null, |
| "type_str": "table", |
| "text": "Performance Comparisons (A Smaller Loss Value Means a Better Performance)", |
| "content": "<table><tr><td>Metrics</td><td colspan=\"3\">Dimensionality=110</td><td colspan=\"3\">Dimensionality=220</td></tr><tr><td></td><td>H-RLS</td><td>HL-flat</td><td>HL-SOT</td><td>H-RLS</td><td>HL-flat</td><td>HL-SOT</td></tr><tr><td>O-Loss</td><td>0.9812</td><td>0.8772</td><td>0.8443</td><td>0.9783</td><td>0.8591</td><td>0.8428</td></tr><tr><td>S-Loss</td><td>8.5516</td><td>2.8921</td><td>2.3190</td><td>7.8623</td><td>2.8449</td><td>2.2812</td></tr><tr><td>H-Loss</td><td>3.2479</td><td>1.1383</td><td>1.0366</td><td>3.1029</td><td>1.1298</td><td>1.0247</td></tr></table>", |
| "num": null |
| } |
| } |
| } |
| } |