| { |
| "paper_id": "N07-1038", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:48:42.197464Z" |
| }, |
| "title": "Multiple Aspect Ranking using the Good Grief Algorithm", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Snyder", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Laboratory", |
| "institution": "Massachusetts Institute of Technology", |
| "location": {} |
| }, |
| "email": "bsnyder@csail.mit.edu" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Laboratory", |
| "institution": "Massachusetts Institute of Technology", |
| "location": {} |
| }, |
| "email": "regina@csail.mit.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We address the problem of analyzing multiple related opinions in a text. For instance, in a restaurant review such opinions may include food, ambience and service. We formulate this task as a multiple aspect ranking problem, where the goal is to produce a set of numerical scores, one for each aspect. We present an algorithm that jointly learns ranking models for individual aspects by modeling the dependencies between assigned ranks. This algorithm guides the prediction of individual rankers by analyzing meta-relations between opinions, such as agreement and contrast. We prove that our agreement-based joint model is more expressive than individual ranking models. Our empirical results further confirm the strength of the model: the algorithm provides significant improvement over both individual rankers and a state-of-the-art joint ranking model.",
| "pdf_parse": { |
| "paper_id": "N07-1038", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We address the problem of analyzing multiple related opinions in a text. For instance, in a restaurant review such opinions may include food, ambience and service. We formulate this task as a multiple aspect ranking problem, where the goal is to produce a set of numerical scores, one for each aspect. We present an algorithm that jointly learns ranking models for individual aspects by modeling the dependencies between assigned ranks. This algorithm guides the prediction of individual rankers by analyzing meta-relations between opinions, such as agreement and contrast. We prove that our agreement-based joint model is more expressive than individual ranking models. Our empirical results further confirm the strength of the model: the algorithm provides significant improvement over both individual rankers and a state-of-the-art joint ranking model.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Previous work on sentiment categorization makes an implicit assumption that a single score can express the polarity of an opinion text (Pang et al., 2002; Turney, 2002; Yu and Hatzivassiloglou, 2003). However, multiple opinions on related matters are often intertwined throughout a text. For example, a restaurant review may express judgment on food quality as well as the service and ambience of the restaurant. Rather than lumping these aspects into a single score, we would like to capture each aspect of the writer's opinion separately, thereby providing a more fine-grained view of opinions in the review.",
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 154, |
| "text": "(Pang et al., 2002;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 155, |
| "end": 168, |
| "text": "Turney, 2002;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 169, |
| "end": 199, |
| "text": "Yu and Hatzivassiloglou, 2003)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To this end, we aim to predict a set of numeric ranks that reflects the user's satisfaction for each aspect. In the example above, we would assign a numeric rank from 1-5 for each of: food quality, service, and ambience.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A straightforward approach to this task would be to rank 1 the text independently for each aspect, using standard ranking techniques such as regression or classification. However, this approach fails to exploit meaningful dependencies between users' judgments across different aspects. Knowledge of these dependencies can be crucial in predicting accurate ranks, as a user's opinions on one aspect can influence his or her opinions on others.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The algorithm presented in this paper models the dependencies between different labels via the agreement relation. The agreement relation captures whether the user equally likes all aspects of the item or whether he or she expresses different degrees of satisfaction. Since this relation can often be determined automatically for a given text (Marcu and Echihabi, 2002), we can readily use it to improve rank prediction. The Good Grief model consists of a ranking model for each aspect as well as an agreement model which predicts whether or not all rank aspects are equal. The Good Grief decoding algorithm predicts a set of ranks -one for each aspect -which maximally satisfy the preferences of the individual rankers and the agreement model. For example, if the agreement model predicts consensus but the individual rankers select ranks 5, 5, 4, then the decoder decides whether to trust the third ranker, or alter its prediction and output 5, 5, 5 to be consistent with the agreement prediction. To obtain a model well-suited for this decoding, we also develop a joint training method that conjoins the training of multiple aspect models.",
| "cite_spans": [ |
| { |
| "start": 343, |
| "end": 369, |
| "text": "(Marcu and Echihabi, 2002)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We demonstrate that the agreement-based joint model is more expressive than individual ranking models. That is, every training set that can be perfectly ranked by individual ranking models for each aspect can also be perfectly ranked with our joint model. In addition, we give a simple example of a training set which cannot be perfectly ranked without agreement-based joint inference. Our experimental results further confirm the strength of the Good Grief model. Our model significantly outperforms individual ranking models as well as a state-of-the-art joint ranking model.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Sentiment Classification Traditionally, categorization of opinion texts has been cast as a binary classification task (Pang et al., 2002; Turney, 2002; Yu and Hatzivassiloglou, 2003; Dave et al., 2003) . More recent work (Pang and Lee, 2005; Goldberg and Zhu, 2006) has expanded this analysis to the ranking framework where the goal is to assess review polarity on a multi-point scale. While this approach provides a richer representation of a single opinion, it still operates on the assumption of one opinion per text. Our work generalizes this setting to the problem of analyzing multiple opinions -or multiple aspects of an opinion. Since multiple opinions in a single text are related, it is insufficient to treat them as separate single-aspect ranking tasks. This motivates our exploration of a new method for joint multiple aspect ranking.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 137, |
| "text": "(Pang et al., 2002;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 138, |
| "end": 151, |
| "text": "Turney, 2002;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 152, |
| "end": 182, |
| "text": "Yu and Hatzivassiloglou, 2003;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 183, |
| "end": 201, |
| "text": "Dave et al., 2003)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 221, |
| "end": 241, |
| "text": "(Pang and Lee, 2005;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 242, |
| "end": 265, |
| "text": "Goldberg and Zhu, 2006)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Ranking The ranking, or ordinal regression, problem has been extensively studied in the Machine Learning and Information Retrieval communities. In this section we focus on two online ranking methods which form the basis of our approach. The first is a model proposed by Crammer and Singer (2001). The task is to predict a rank y \u2208 {1, ..., k} for every input x \u2208 R n . Their model stores a weight vector w \u2208 R n and a vector of increasing boundaries b 0 = \u2212\u221e \u2264 b 1 \u2264 ... \u2264 b k\u22121 \u2264 b k = \u221e which divide the real line into k segments, one for each possible rank. The model first scores each input with the weight vector: score(x) = w \u2022 x. Finally, the model locates score(x) on the real line and returns the appropriate rank as indicated by the boundaries. Formally, the model returns the rank r such that b r\u22121 \u2264 score(x) < b r . The model is trained with the Perceptron Ranking algorithm (or \"PRank algorithm\"), which reacts to incorrect predictions on the training set by updating the weight and boundary vectors. The PRanking model and algorithm were tested on the EachMovie dataset with a separate ranking model learned for each user in the database.",
"cite_spans": [
{
"start": 270,
"end": 295,
| "text": "Crammer and Singer (2001)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "An extension of this model is provided by Basilico and Hofmann (2004) in the context of collaborative filtering. Instead of training a separate model for each user, Basilico and Hofmann train a joint ranking model which shares a set of boundaries across all users. In addition to these shared boundaries, user-specific weight vectors are stored. To compute the score for input x and user i, the weight vectors for all users are employed:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "score i (x) = w[i] \u2022 x + \u2211 j sim(i, j)(w[j] \u2022 x) (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where 0 \u2264 sim(i, j) \u2264 1 is the cosine similarity between users i and j, computed on the entire training set. Once the score has been computed, the prediction rule follows that of the PRanking model. The model is trained using the PRank algorithm, with the exception of the new definition for the scoring function. 2 While this model shares information between the different ranking problems, it fails to explicitly model relations between the rank predictions. In contrast, our algorithm uses an agreement model to learn such relations and inform joint predictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "2 In the notation of Basilico and Hofmann (2004), this definition of score i (x) corresponds to the kernel",
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 48, |
| "text": "Basilico and Hofmann (2004)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "K = (K id U + K co U ) \u2295 K at X .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "The goal of our algorithm is to find a rank assignment that is consistent with predictions of individual rankers and the agreement model. To this end, we develop the Good Grief decoding procedure that minimizes the dissatisfaction (grief) of individual components with a joint prediction. In this section, we formally define the grief of each component, and a mechanism for its minimization. We then describe our method for joint training of individual rankers that takes into account the Good Grief decoding procedure.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "3" |
| }, |
| { |
"text": "In an m-aspect ranking problem, we are given a training sequence of instance-label pairs (x 1 , y 1 ), ..., (x t , y t ), ....",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
| { |
"text": "Each instance x t is a feature vector in R n and the label y t is a vector of m ranks in Y m , where Y = {1, ..., k} is the set of possible ranks. The i th component of y t is the rank for the i th aspect, and will be denoted by y[i] t . The goal is to learn a mapping from instances to rank sets, H : X \u2192 Y m , which minimizes the distance between predicted ranks and true ranks.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Our m-aspect ranking model contains m + 1 components: (\u27e8w[1], b[1]\u27e9, ..., \u27e8w[m], b[m]\u27e9, a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
| { |
"text": "The first m components are individual ranking models, one for each aspect, and the final component is the agreement model. For each aspect i \u2208 1...m, w[i] \u2208 R n is a vector of weights on the input features, and b[i] \u2208 R k\u22121 is a vector of boundaries which divide the real line into k intervals, corresponding to the k possible ranks. The default prediction of the aspect ranking model simply uses the ranking rule of the PRank algorithm. This rule predicts the rank r such that b[i] r\u22121 \u2264 score i (x) < b[i] r . 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
| { |
"text": "The value score i (x) can be defined simply as the dot product w[i] \u2022 x, or it can take into account the weight vectors for other aspects weighted by a measure of inter-aspect similarity. We adopt the definition given in equation 1, replacing the user-specific weight vectors with our aspect-specific weight vectors.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The agreement model is a vector of weights a \u2208 R n . A value of a \u2022 x > 0 predicts that the ranks of all m aspects are equal, and a value of a \u2022 x \u2264 0 indicates disagreement. The absolute value |a \u2022 x| indicates the confidence in the agreement prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Model", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The goal of the decoding procedure is to predict a joint rank for the m aspects which satisfies the individual ranking models as well as the agreement model. For a given input x, the individual model for aspect i predicts a default rank \u0177[i]. If a \u2022 x > 0 but \u0177[i] \u2260 \u0177[j] for some i, j \u2208 1...m, then the agreement model predicts complete consensus, whereas the individual aspect models do not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
| { |
"text": "We therefore adopt a joint prediction criterion which simultaneously takes into account all model components -individual aspect models as well as the agreement model. For each possible prediction r = (r[1], ..., r[m]), this criterion assesses the level of grief associated with the i th aspect ranking model, g i (x, r[i]). Similarly, we compute the grief of the agreement model with the joint prediction, g a (x, r) (both g i and g a are defined formally below). The decoder then predicts the m ranks which minimize the overall grief:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Model", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "H(x) = arg min r\u2208Y m [g a (x, r) + \u2211 m i=1 g i (x, r[i])] (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.2"
},
{
"text": "If the default rank predictions for the aspect models, \u0177 = (\u0177[1], ..., \u0177[m]), are in accord with the agreement model (both indicating consensus or both indicating contrast), then the grief of all model components will be zero, and we simply output \u0177. On the other hand, if \u0177 indicates disagreement but the agreement model predicts consensus, then we have the option of predicting \u0177 and bearing the grief of the agreement model. Alternatively, we can predict some consensus y (i.e. with y[i] = y[j], \u2200i, j) and bear the grief of the component ranking models. The decoder H chooses the option with lowest overall grief. 4 Now we formally define the measures of grief used in this criterion.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Model", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "We define the grief of the i th aspect ranking model with respect to a rank r to be the smallest magnitude correction term which places the input's score into the r th segment of the real line:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Aspect Model Grief", |
| "sec_num": null |
| }, |
| { |
"text": "g i (x, r) = min |c| s.t. b[i] r\u22121 \u2264 score i (x) + c < b[i] r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Model Grief",
"sec_num": null
},
{
"text": "Agreement Model Grief We define the grief of the agreement model analogously, as the smallest magnitude correction term which makes the agreement prediction consistent with the joint rank: g a (x, r) = min |c| s.t. (a \u2022 x + c > 0 \u2227 \u2200i, j \u2208 1...m : r[i] = r[j]) \u2228 (a \u2022 x + c \u2264 0 \u2227 \u2203i, j \u2208 1...m : r[i] \u2260 r[j])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Model Grief",
"sec_num": null
},
| { |
"text": "Ranking models Pseudo-code for Good Grief training is shown in Figure 1 . This training algorithm is based on PRanking (Crammer and Singer, 2001), an online perceptron algorithm. The training is performed by iteratively ranking each training input x and updating the model. If the predicted rank \u0177 is equal to the true rank y, the weight and boundary vectors remain unchanged. On the other hand, if \u0177 \u2260 y, then the weights and boundaries are updated to improve the prediction for x (step 4.c in Figure 1 ). See (Crammer and Singer, 2001) for explanation and analysis of this update rule. Our algorithm departs from PRanking by conjoining the updates for the m ranking models. We achieve this by using Good Grief decoding at each step throughout training. Our decoder H(x) (from equation 2) uses all the aspect component models [footnote 4: provided that the griefs of the component models are comparable. In practice, we take an uncalibrated agreement model a\u2032 and reweight it with a tuning parameter: a = \u03b1a\u2032. The value of \u03b1 is estimated using the development set. We assume that the griefs of the ranking models are comparable since they are jointly trained.]",
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 145, |
| "text": "(Crammer and Singer, 2001)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 513, |
| "end": 538, |
| "text": "(Crammer and Singer, 2001", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 63, |
| "end": 71, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 497, |
| "end": 505, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "as well as the (previously trained) agreement model to determine the predicted rank for each aspect. In concrete terms, for every training instance x, we predict the ranks of all aspects simultaneously (step 2 in Figure 1 ). Then, for each aspect we make a separate update based on this joint prediction (step 4 in Figure 1) , instead of using the individual models' predictions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 213, |
| "end": 221, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 315, |
| "end": 324, |
| "text": "Figure 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Agreement model The agreement model a is assumed to have been previously trained on the same training data. An instance is labeled with a positive label if all the ranks associated with this instance are equal. The rest of the instances are labeled as negative. This model can use any standard training algorithm for binary classification such as Perceptron or SVM optimization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Ranking Models Following previous work on sentiment classification (Pang et al., 2002) , we represent each review as a vector of lexical features. More specifically, we extract all unigrams and bigrams, discarding those that appear fewer than three times. This process yields about 30,000 features.", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 86, |
| "text": "(Pang et al., 2002)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Representation", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "Agreement Model The agreement model also operates over lexicalized features. The effectiveness of these features for recognition of discourse relations has been previously shown by Marcu and Echihabi (2002). In addition to unigrams and bigrams, we also introduce a feature that measures the maximum contrastive distance between pairs of words in a review. For example, the presence of \"delicious\" and \"dirty\" indicates high contrast, whereas the pair \"expensive\" and \"slow\" indicates low contrast. The contrastive distance for a pair of words is computed by considering the difference in relative weight assigned to the words in individually trained PRanking models.",
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 206, |
| "text": "Marcu and Echihabi (2002)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Representation", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "In this section, we prove that our model is able to perfectly rank a strict superset of the training corpora perfectly rankable by m ranking models individually. We first show that if the independent ranking models can individually rank a training set perfectly, then our model can do so as well. Next, we show that our model is more expressive by providing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "Figure 1: Good Grief Training. Input: (x 1 , y 1 ), ..., (x T , y T ), agreement model a, decoder definition H(x) (from equation 2). Initialize: Set w[i] 1 = 0; b[i] 1 1 , ..., b[i] 1 k\u22121 = 0; b[i] 1 k = \u221e, \u2200i \u2208 1...m. Loop: For t = 1, 2, ..., T : 1. Get a new instance x t \u2208 R n . 2. Predict \u0177 t = H(x; w t , b t , a) (Equation 2). 3. Get a new label y t . 4. For aspect i = 1, ..., m: If \u0177[i] t \u2260 y[i] t update the model (otherwise set w[i] t+1 = w[i] t , b[i] t+1 r = b[i] t r , \u2200r): 4.a For r = 1, ..., k \u2212 1 : If y[i] t \u2264 r then y[i] t r = \u22121 else y[i] t r = 1. 4.b For r = 1, ..., k \u2212 1 : If (\u0177[i] t \u2212 r)y[i] t r \u2264 0 then \u03c4 [i] t r = y[i] t r else \u03c4 [i] t r = 0. 4.c Update w[i] t+1 \u2190 w[i] t + (\u2211 r \u03c4 [i] t r ) x t . For r = 1, ..., k \u2212 1 update: b[i] t+1 r \u2190 b[i] t r \u2212 \u03c4 [i] t r . Output: H(x; w T +1 , b T +1 , a). The algorithm is based on the PRanking training algorithm; ours differs in the joint computation of all aspect predictions \u0177 t based on the Good Grief criterion (step 2) and in the calculation of updates for each aspect based on the joint prediction (step 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
| { |
"text": "a simple illustrative example of a training set which can only be perfectly ranked with the inclusion of an agreement model. First we introduce some notation. For each training instance (x t , y t ), each aspect i \u2208 1...m, and each rank r \u2208 1...k, define an auxiliary variable y[i] t r , with y[i] t r = \u22121 if y[i] t \u2264 r and y[i] t r = 1 if y[i] t > r. In words, y[i] t r indicates whether the true rank y[i] t is to the right or left of a potential rank r. Now suppose that a training set (x 1 , y 1 ), ..., (x T , y T ) is perfectly rankable for each aspect independently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
| { |
"text": "That is, for each aspect i \u2208 1...m, there exists some ideal model v[i] * = (w[i] * , b[i] * ) such that the signed distance from the prediction to the r th boundary, w[i] * \u2022 x t \u2212 b[i] * r , has the same sign as the auxiliary variable y[i] t r . In other words, the minimum margin over all training instances and ranks, \u03b3 = min r,t {(w[i] * \u2022 x t \u2212 b[i] * r )y[i] t r }, is no less than zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
| { |
"text": "Now for the t th training instance, define an agreement auxiliary variable a t , where a t = 1 when all aspects agree in rank and a t = \u22121 when at least two aspects disagree in rank. First consider the case where the agreement model a perfectly classifies all training instances: (a \u2022 x t )a t > 0, \u2200t. It is clear that Good Grief decoding with the ideal joint model (\u27e8w[1] * , b[1] * \u27e9, ..., \u27e8w[m] * , b[m] * \u27e9, a) will produce the same output as the component ranking models run separately (since the grief will always be zero for the default rank predictions). Now consider the case where the training data is not linearly separable with regard to agreement classification. Define the margin of the worst case error to be \u03b2 = max t {|(a \u2022 x t )| : (a \u2022 x t )a t < 0}. If \u03b2 < \u03b3, then again Good Grief decoding will always produce the default results (since the grief of the agreement model will be at most \u03b2 in cases of error, whereas the grief of the ranking models for any deviation from their default predictions will be at least \u03b3). On the other hand, if \u03b2 \u2265 \u03b3, then the agreement model errors could potentially disrupt the perfect ranking. However, we need only rescale w * := w * (\u03b2/\u03b3 + \u03b5) and b * := b * (\u03b2/\u03b3 + \u03b5), for some \u03b5 > 0, to ensure that the grief of the ranking models will always exceed the grief of the agreement model in cases where the latter is in error. Thus whenever independent ranking models can perfectly rank a training set, a joint ranking model with Good Grief decoding can do so as well. Now we give a simple example of a training set which can only be perfectly ranked with the addition of an agreement model. Consider a training set of four instances with two rank aspects: x 1 , y 1 = \u27e8(1, 0, 1), (2, 1)\u27e9; x 2 , y 2 = \u27e8(1, 0, 0), (2, 2)\u27e9; x 3 , y 3 = \u27e8(0, 1, 1), (1, 2)\u27e9; x 4 , y 4 = \u27e8(0, 1, 0), (1, 1)\u27e9. We can interpret these inputs as feature vectors corresponding to the presence of \"good\", \"bad\", and \"but not\" in the following four sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
| { |
| "text": "The food was good, but not the ambience.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The food was good, and so was the ambience. The food was bad, but not the ambience.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The food was bad, and so was the ambience. We can further interpret the first rank aspect as the quality of food, and the second as the quality of the ambience, both on a scale of 1-2. A simple ranking model which only considers the words \"good\" and \"bad\" perfectly ranks the food aspect. However, it is easy to see that no single model perfectly ranks the ambience aspect. Consider any model \u27e8w, b = (b)\u27e9. Note that w \u2022 x 1 < b and w \u2022 x 2 \u2265 b together imply that w 3 < 0, whereas w \u2022 x 3 \u2265 b and w \u2022 x 4 < b together imply that w 3 > 0. Thus independent ranking models cannot perfectly rank this corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The addition of an agreement model, however, can easily yield a perfect ranking. With a = (0, 0, \u22125) (which predicts contrast with the presence of the words \"but not\") and a ranking model for the ambience aspect such as w = (1, \u22121, 0), b = (0), the Good Grief decoder will produce a perfect rank.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
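The toy example above can be verified mechanically. The sketch below (illustrative Python, not the authors' released code) enumerates all candidate rank pairs and selects the one with minimal total grief, using the agreement model a = (0, 0, -5) and ambience model w = (1, -1, 0), b = 0 from the text; the food weights (2, -2, 0) are an assumption chosen so that the food ranker's default prediction is strictly preferred when ties arise:

```python
import itertools

# Features: [contains "good", contains "bad", contains "but not"].
X = [(1, 0, 1), (1, 0, 0), (0, 1, 1), (0, 1, 0)]
Y = [(2, 1), (2, 2), (1, 2), (1, 1)]           # (food rank, ambience rank)

w_food, b_food = (2, -2, 0), 0.0               # assumed food ranker (perfect on its own)
w_amb, b_amb = (1, -1, 0), 0.0                 # ambience ranker from the text
a = (0, 0, -5)                                 # agreement model: "but not" => contrast

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def rank_grief(w, b, x, r):
    # Minimal score perturbation for a two-rank model to output rank r:
    # rank 1 requires score < b, rank 2 requires score >= b.
    s = dot(w, x)
    return max(0.0, s - b) if r == 1 else max(0.0, b - s)

def agree_grief(x, agree):
    # Minimal perturbation for the agreement model to predict
    # agreement (a.x >= 0) or contrast (a.x < 0).
    s = dot(a, x)
    return max(0.0, -s) if agree else max(0.0, s)

def good_grief_decode(x):
    # Minimize total grief over all joint rank assignments.
    return min(itertools.product((1, 2), repeat=2),
               key=lambda r: rank_grief(w_food, b_food, x, r[0])
                           + rank_grief(w_amb, b_amb, x, r[1])
                           + agree_grief(x, r[0] == r[1]))

print([good_grief_decode(x) for x in X])  # [(2, 1), (2, 2), (1, 2), (1, 1)] == Y
```

On x 1 ("good, but not"), keeping the ambience ranker's default rank of 2 would cost the agreement model a grief of 5, while flipping ambience to rank 1 costs only 1, so the decoder outputs (2, 1) as desired.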
| { |
| "text": "We evaluate our multi-aspect ranking algorithm on a corpus 5 of restaurant reviews available on the website http://www.we8there.com. Reviews from this website have been previously used in other sentiment analysis tasks (Higashinaka et al., 2006). Each review is accompanied by a set of five ranks, each on a scale of 1-5, covering food, ambience, service, value, and overall experience. These ranks are provided by the consumers who wrote the original reviews. Our corpus does not contain incomplete data points, since all the reviews available on this website contain both a review text and values for all five aspects.", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 245, |
| "text": "(Higashinaka et al., 2006)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Training and Testing Division Our corpus contains 4,488 reviews, averaging 115 words. We randomly select 3,488 reviews for training, 500 for development and 500 for testing. Parameter Tuning We used the development set to determine optimal numbers of training iterations for our model and for the baseline models. Also, given an initial uncalibrated agreement model a', we define our agreement model to be a = \u03b1a' for an appropriate scaling factor \u03b1. We tune the value of \u03b1 on the development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Corpus Statistics Our training corpus contains 528 of the 5^5 = 3,125 possible rank sets. The most frequent rank set, \u27e85, 5, 5, 5, 5\u27e9, accounts for 30.5% of the training set. However, no other rank set comprises more than 5% of the data. To cover 90% of occurrences in the training set, 227 rank sets are required. Therefore, treating a rank tuple as a single label is not a viable option for this task. We also find that reviews with full agreement across rank aspects are quite common in our corpus, accounting for 38% of the training data. Thus an agreement-based approach is natural and relevant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "A rank of 5 is the most common rank for all aspects, and thus a prediction of all 5's gives a MAJORITY baseline and a natural indication of task difficulty.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Evaluation Measures We evaluate our algorithm and the baseline using ranking loss (Crammer and Singer, 2001; Basilico and Hofmann, 2004). Ranking loss measures the average distance between the true rank and the predicted rank. Formally, given N test instances (x 1 , y 1 ), ..., (x N , y N ) of an m-aspect ranking problem and the corresponding predictions \u0177 1 , ..., \u0177 N , ranking loss is defined as", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 108, |
| "text": "(Crammer and Singer, 2001;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 109, |
| "end": 136, |
| "text": "Basilico and Hofmann, 2004)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2211 t,i |y[i] t \u2212 \u0177[i] t | / (mN)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Lower values of this measure correspond to better performance of the algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-Up", |
| "sec_num": "5" |
| }, |
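For concreteness, the ranking-loss measure just defined can be computed in a few lines (a minimal sketch; the two 5-aspect rank tuples below are invented for illustration, not drawn from the corpus):

```python
def ranking_loss(y_true, y_pred, m):
    """Sum of |y[i]_t - yhat[i]_t| over all m aspects of all N instances,
    divided by m*N (lower is better)."""
    n = len(y_true)
    total = sum(abs(t - p)
                for yt, yp in zip(y_true, y_pred)
                for t, p in zip(yt, yp))
    return total / (m * n)

# One exact prediction and one prediction off by one in a single aspect.
y_true = [(5, 5, 5, 5, 5), (3, 4, 2, 5, 1)]
y_pred = [(5, 5, 5, 5, 5), (3, 4, 3, 5, 1)]
print(ranking_loss(y_true, y_pred, m=5))  # 1 / (5 * 2) = 0.1
```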
| { |
| "text": "Comparison with Baselines Table 1 shows the performance of the Good Grief training algorithm GG TRAIN+DECODE along with various baselines, including the simple MAJORITY baseline mentioned in section 5. The first competitive baseline, PRANK, learns a separate ranker for each aspect using the PRank algorithm. The second competitive baseline, SIM, shares the weight vectors across aspects using a similarity measure (Basilico and Hofmann, 2004). Both of these methods are described in detail in Section 2. In addition, we consider two variants of our algorithm: GG DECODE employs the PRank training algorithm to independently train all component ranking models and only applies Good Grief decoding at test time. GG ORACLE uses Good Grief training and decoding but in both cases is given perfect knowledge of whether or not the true ranks all agree (instead of using the trained agreement model). Our model achieves a rank error of 0.632, compared to 0.675 for PRANK and 0.663 for SIM. Both of these differences are statistically significant at p < 0.002 by a Fisher Sign Test. The gain in performance is observed across all five aspects. Our model also yields significant improvement (p < 0.05) over the decoding-only variant GG DECODE, confirming the importance of joint training. As shown in Figure 2, this improvement holds throughout the training process. We separately analyze performance on the 210 test instances where all the target ranks agree and on the remaining 290 instances where there is some contrast. As Table 2 shows, we outperform the PRANK baseline in both cases. However, on the consensus instances we achieve a relative reduction in error of 21.8% compared to only a 1.1% reduction for the other set. In cases of consensus, the agreement model can guide the ranking models by reducing the decision space to five rank sets. In cases of disagreement, however, our model does not provide sufficient constraints as the vast majority of ranking sets remain viable. 
This explains the performance of GG ORACLE, the variant of our algorithm with perfect knowledge of agreement/disagreement facts. As shown in Table 1 , GG ORACLE yields substantial improvement over our algorithm, but most of this gain comes from consensus instances (see Table 2 ).", |
| "cite_spans": [ |
| { |
| "start": 415, |
| "end": 442, |
| "text": "(Basilico and Hofmann, 2004", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 26, |
| "end": 33, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1291, |
| "end": 1300, |
| "text": "Figure 2,", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1434, |
| "end": 1441, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 2035, |
| "end": 2042, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 2164, |
| "end": 2171, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We also examine the impact of the agreement model accuracy on our algorithm. The agreement model, when considered on its own, achieves classification accuracy of 67% on the test set, compared to a majority baseline of 58%. However, those instances with high confidence |a \u2022 x| exhibit substantially higher classification accuracy. Figure 3 shows the performance of the agreement model as a function of the confidence value. The 10% of the data with highest confidence values can be classified by the agreement model with 90% accuracy, and the third of the data with highest confidence can be classified at 80% accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 331, |
| "end": 339, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This property explains why the agreement model helps in joint ranking even though its overall accuracy may seem low. Under the Good Grief criterion, the agreement model's prediction will only be enforced when its grief outweighs that of the ranking models. Thus in cases where the prediction confidence (|a\u2022x|) is relatively low, 6 the agreement model will essentially be ignored.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We considered the problem of analyzing multiple related aspects of user reviews. The algorithm presented jointly learns ranking models for individual aspects by modeling the dependencies between assigned ranks. The strength of our algorithm lies in its ability to guide the prediction of individual rankers using rhetorical relations between aspects such as agreement and contrast. Our method yields significant empirical improvements over individual rankers as well as a state-of-the-art joint ranking model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our current model employs a single rhetorical relation -agreement vs. contrast -to model dependencies between different opinions. As our analysis shows, this relation does not provide sufficient constraints for non-consensus instances. An avenue for future research is to consider the impact of additional rhetorical relations between aspects. We also plan to theoretically analyze the convergence properties of this and other joint perceptron algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In this paper, ranking refers to the task of assigning an integer from 1 to k to each instance. This task is sometimes referred to as \"ordinal regression\" (Crammer and Singer, 2001) and \"rating prediction\" (Pang and Lee, 2005).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "More precisely (taking into account the possibility of ties): \u0177[i] = min r\u2208{1, ..., k} {r : score i (x) \u2212 b[i] r < 0}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
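This tie-aware rule can be sketched as follows (illustrative Python; the boundary values are invented for the example, using the usual PRank convention that the last boundary is +infinity so the minimum always exists):

```python
def predict_rank(score, boundaries):
    # Smallest rank r (1-indexed) such that score - b[r] < 0.
    for r, b in enumerate(boundaries, start=1):
        if score - b < 0:
            return r
    return len(boundaries)  # unreachable when boundaries[-1] is +infinity

bounds = [-2.0, -0.5, 0.5, 2.0, float("inf")]  # k = 5 ranks (invented values)
print(predict_rank(0.3, bounds))   # 3
print(predict_rank(-3.0, bounds))  # 1
print(predict_rank(5.0, bounds))   # 5
```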
| { |
| "text": "This decoding criterion assumes that the griefs of the com-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Data and code used in this paper are available at http://people.csail.mit.edu/bsnyder/naacl07", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "What counts as \"relatively low\" will depend on both the value of the tuning parameter \u03b1 and the confidence of the component ranking models for a particular input x.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors acknowledge the support of the National Science Foundation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Unifying collaborative and content-based filtering", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Basilico", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "65--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Basilico, T. Hofmann. 2004. Unifying collabora- tive and content-based filtering. In Proceedings of the ICML, 65-72.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Pranking with ranking", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "641--647", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Crammer, Y. Singer. 2001. Pranking with ranking. In NIPS, 641-647.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Mining the peanut gallery: Opinion extraction and semantic classification of product reviews", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Dave", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Pennock", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of WWW", |
| "volume": "", |
| "issue": "", |
| "pages": "519--528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Dave, S. Lawrence, D. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of WWW, 519-528.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Seeing stars when there aren't many stars: Graph-based semi-supervised learning for sentiment categorization", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "B" |
| ], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT/NAACL workshop on TextGraphs", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. B. Goldberg, X. Zhu. 2006. Seeing stars when there aren't many stars: Graph-based semi-supervised learn- ing for sentiment categorization. In Proceedings of HLT/NAACL workshop on TextGraphs, 45-52.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Learning to generate naturalistic utterances using reviews in spoken dialogue systems", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Higashinaka", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Prasad", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of COL-ING/ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "265--272", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Higashinaka, R. Prasad, M. Walker. 2006. Learn- ing to generate naturalistic utterances using reviews in spoken dialogue systems. In Proceedings of COL- ING/ACL, 265-272.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "An unsupervised approach to recognizing discourse relations", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Echihabi", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "368--375", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Marcu, A. Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of ACL, 368-375.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "115--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Pang, L. Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL, 115-124.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Thumbs up? sentiment classification using machine learning techniques", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Pang, L. Lee, S. Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning tech- niques. In Proceedings of EMNLP, 79-86.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "417--424", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the ACL, 417-424.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "129--136", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Yu, V. Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Pro- ceedings of EMNLP, 129-136.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Rank loss for our algorithm and baselines as a function of training round." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Accuracy of the agreement model on subsets of test instances with highest confidence |a \u2022 x|." |
| }, |
| "TABREF0": { |
| "num": null, |
| "text": "i] based on its feature weight and boundary vectors w[i], b[i] . In addition, the agreement model makes a prediction regarding rank consensus based on a \u2022 x. However, the default aspect predictions\u0177[1] . . .\u0177[m] may not accord with the agreement model. For example, if a", |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "text": "Ranking loss on the test set for variants of Good Grief and various baselines.", |
| "content": "<table><tr><td/><td>Food</td><td>Service</td><td>Value</td><td>Atmosphere</td><td>Experience</td><td>Total</td></tr><tr><td>MAJORITY</td><td>0.848</td><td>1.056</td><td>1.030</td><td>1.044</td><td>1.028</td><td>1.001</td></tr><tr><td>PRANK</td><td>0.606</td><td>0.676</td><td>0.700</td><td>0.776</td><td>0.618</td><td>0.675</td></tr><tr><td>SIM</td><td>0.562</td><td>0.648</td><td>0.706</td><td>0.798</td><td>0.600</td><td>0.663</td></tr><tr><td>GG DECODE</td><td>0.544</td><td>0.648</td><td>0.704</td><td>0.798</td><td>0.584</td><td>0.656</td></tr><tr><td>GG TRAIN+DECODE</td><td>0.534</td><td>0.622</td><td>0.644</td><td>0.774</td><td>0.584</td><td>0.632</td></tr><tr><td>GG ORACLE</td><td>0.510</td><td>0.578</td><td>0.674</td><td>0.694</td><td>0.518</td><td>0.595</td></tr></table>", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "num": null, |
| "text": "Ranking loss for our model and PRANK computed separately on cases of actual consensus and actual disagreement.", |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |