| { |
| "paper_id": "N07-1033", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:47:52.277706Z" |
| }, |
| "title": "Using \"Annotator Rationales\" to Improve Machine Learning for Text Categorization *", |
| "authors": [ |
| { |
| "first": "Omar", |
| "middle": [ |
| "F" |
| ], |
| "last": "Zaidan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University Baltimore", |
| "location": { |
| "postCode": "21218", |
| "region": "MD", |
| "country": "USA" |
| } |
| }, |
| "email": "ozaidan@cs.jhu.edu" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University Baltimore", |
| "location": { |
| "postCode": "21218", |
| "region": "MD", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [ |
| "D" |
| ], |
| "last": "Piatko", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "JHU Applied Physics Laboratory", |
| "institution": "", |
| "location": { |
| "addrLine": "11100 Johns Hopkins Road Laurel", |
| "postCode": "20723", |
| "region": "MD", |
| "country": "USA" |
| } |
| }, |
| "email": "christine.piatko@jhuapl.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a new framework for supervised machine learning. Our goal is to learn from smaller amounts of supervised training data, by collecting a richer kind of training data: annotations with \"rationales.\" When annotating an example, the human teacher will also highlight evidence supporting this annotation-thereby teaching the machine learner why the example belongs to the category. We provide some rationale-annotated data and present a learning method that exploits the rationales during training to boost performance significantly on a sample task, namely sentiment classification of movie reviews. We hypothesize that in some situations, providing rationales is a more fruitful use of an annotator's time than annotating more examples.", |
| "pdf_parse": { |
| "paper_id": "N07-1033", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a new framework for supervised machine learning. Our goal is to learn from smaller amounts of supervised training data, by collecting a richer kind of training data: annotations with \"rationales.\" When annotating an example, the human teacher will also highlight evidence supporting this annotation-thereby teaching the machine learner why the example belongs to the category. We provide some rationale-annotated data and present a learning method that exploits the rationales during training to boost performance significantly on a sample task, namely sentiment classification of movie reviews. We hypothesize that in some situations, providing rationales is a more fruitful use of an annotator's time than annotating more examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Annotation cost is a bottleneck for many natural language processing applications. While supervised machine learning systems are effective, it is laborintensive and expensive to construct the many training examples needed. Previous research has explored active or semi-supervised learning as possible ways to lessen this burden.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We propose a new way of breaking this annotation bottleneck. Annotators currently indicate what the correct answers are on training data. We propose that they should also indicate why, at least by coarse hints. We suggest new machine learning approaches that can benefit from this \"why\" information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For example, an annotator who is categorizing phrases or documents might also be asked to highlight a few substrings that significantly influenced her judgment. We call such clues \"rationales.\" They need not correspond to machine learning features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In some circumstances, rationales should not be too expensive or time-consuming to collect. As long as the annotator is spending the time to study example x i and classify it, it may not require much extra effort for her to mark reasons for her classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We will not rely exclusively on the rationales, but use them only as an added source of information. The idea is to help direct the learning algorithm's attention-helping it tease apart signal from noise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Rationales to Aid Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Machine learning algorithms face a well-known \"credit assignment\" problem. Given a complex datum x i and the desired response y i , many features of x i could be responsible for the choice of y i . The learning algorithm must tease out which features were actually responsible. This requires a lot of training data, and often a lot of computation as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Rationales to Aid Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our rationales offer a shortcut to solving this \"credit assignment\" problem, by providing the learning algorithm with hints as to which features of x i were relevant. Rationales should help guide the learning algorithm toward the correct classification function, by pushing it toward a function that correctly pays attention to each example's relevant features. This should help the algorithm learn from less data and avoid getting trapped in local maxima. 1 In this paper, we demonstrate the \"annotator rationales\" technique on a text categorization problem previously studied by others.", |
| "cite_spans": [ |
| { |
| "start": 457, |
| "end": 458, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Rationales to Aid Learning", |
| "sec_num": "2" |
| }, |
| { |
| "text": "One popular approach for text categorization is to use a discriminative model such as a Support Vector Machine (SVM) (e.g. (Joachims, 1998; Dumais, 1998) ). We propose that SVM training can in general incorporate annotator rationales as follows.", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 139, |
| "text": "(Joachims, 1998;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 140, |
| "end": 153, |
| "text": "Dumais, 1998)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "From the rationale annotations on a positive example \u2212 \u2192 x i , we will construct one or more \"not-quiteas-positive\" contrast examples \u2212 \u2192 v ij . In our text categorization experiments below, each contrast document \u2212 \u2192 v ij was obtained by starting with the original and \"masking out\" one or all of the several rationale substrings that the annotator had highlighted (r ij ). The intuition is that the correct model should be less sure of a positive classification on the contrast example \u2212 \u2192 v ij than on the original example x i , because \u2212 \u2192 v ij lacks evidence that the annotator found significant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We can translate this intuition into additional constraints on the correct model, i.e., on the weight vector w. In addition to the usual SVM constraint on positive examples that w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 \u2212 \u2192 x i \u2265 1, we also want (for each j) that w \u2022 x i \u2212 w \u2022 \u2212 \u2192 v ij \u2265 \u00b5,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where \u00b5 \u2265 0 controls the size of the desired margin between original and contrast examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "An ordinary soft-margin SVM chooses w and \u03be to minimize", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "1 2 w 2 + C( i \u03be i )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "subject to the constraints", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(\u2200i) w \u2022 \u2212 \u2192 x i \u2022 y i \u2265 1 \u2212 \u03be i (2) (\u2200i) \u03be i \u2265 0 (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where \u2212 \u2192 x i is a training example, y i \u2208 {\u22121, +1} is its desired classification, and \u03be i is a slack variable that allows training example \u2212 \u2192 x i to miss satisfying the margin constraint if necessary. The parameter C > 0 controls the cost of taking such slack, and should generally be lower for noisier or less linearly separable datasets. We add the contrast constraints", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(\u2200i, j) w \u2022 ( \u2212 \u2192 x i \u2212 \u2212 \u2192 v ij ) \u2022 y i \u2265 \u00b5(1 \u2212 \u03be ij ),", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where \u2212 \u2192 v ij is one of the contrast examples constructed from example \u2212 \u2192 x i , and \u03be ij \u2265 0 is an associated slack variable. Just as these extra constraints have their own margin \u00b5, their slack variables have their own cost, so the objective function (1) becomes", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "1 2 w 2 + C( i \u03be i ) + C contrast ( i,j \u03be ij )", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The parameter C contrast \u2265 0 determines the importance of satisfying the contrast constraints. It should generally be less than C if the contrasts are noisier than the training examples. 2 In practice, it is possible to solve this optimization using a standard soft-margin SVM learner. Dividing equation 4through by \u00b5, it becomes", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(\u2200i, j) w \u2022 \u2212 \u2192 x ij \u2022 y i \u2265 1 \u2212 \u03be ij ,", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2212 \u2192 x ij def = \u2212 \u2192 x i \u2212 \u2212 \u2192 v ij", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u00b5 . Since equation 6takes the same form as equation 2, we simply add the pairs ( \u2212 \u2192", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x ij , y i ) to the training set as pseudoexamples, weighted by C contrast rather than C so that the learner will use the objective function (5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "There is one subtlety. To allow a biased hyperplane, we use the usual trick of prepending a 1 element to each training example. Thus we require w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 (1, \u2212 \u2192 x i ) \u2265 1 \u2212 \u03be i (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "which makes w 0 play the role of a bias term). This means, however, that we must prepend a 0 element to each pseudoexample:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "w \u2022 (1, x i )\u2212(1, \u2212 \u2192 v ij ) \u00b5 = w \u2022 (0, \u2212 \u2192 x ij ) \u2265 1 \u2212 \u03be ij .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In our experiments, we optimize \u00b5, C, and C contrast on held-out data (see section 5.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In order to demonstrate that annotator rationales help machine learning, we needed annotated data that included rationales for the annotations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rationale Annotation for Movie Reviews", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We chose a dataset that would be enjoyable to reannotate: the movie review dataset of (Pang et al., 2002; Pang and Lee, 2004) . 3 The dataset consists of 1000 positive and 1000 negative movie reviews obtained from the Internet Movie Database (IMDb) review archive, all written before 2002 by a total of 312 authors, with a cap of 20 reviews per author per category. Pang and Lee have divided the 2000 documents into 10 folds, each consisting of 100 positive reviews and 100 negative reviews.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 105, |
| "text": "(Pang et al., 2002;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 106, |
| "end": 125, |
| "text": "Pang and Lee, 2004)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 128, |
| "end": 129, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rationale Annotation for Movie Reviews", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The dataset is arguably artificial in that it keeps only reviews where the reviewer provided a rather high or rather low numerical rating, allowing Pang and Lee to designate the review as positive or negative. Nonetheless, most reviews contain a difficult mix of praise, criticism, and factual description. In fact, it is possible for a mostly critical review to give a positive overall recommendation, or vice versa.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rationale Annotation for Movie Reviews", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Rationale annotators were given guidelines 4 that read, in part:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Each review was intended to give either a positive or a negative overall recommendation. You will be asked to justify why a review is positive or negative. To justify why a review is positive, highlight the most important words and phrases that would tell someone to see the movie. To justify why a review is negative, highlight words and phrases that would tell someone not to see the movie. These words and phrases are called rationales.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "You can highlight the rationales as you notice them, which should result in several rationales per review. Do your best to mark enough rationales to provide convincing support for the class of interest.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "You do not need to go out of your way to mark everything. You are probably doing too much work if you find yourself going back to a paragraph to look for even more rationales in it. Furthermore, it is perfectly acceptable to skim through sections that you feel would not contain many rationales, such as a reviewer's plot summary, even if that might cause you to miss a rationale here and there.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The last two paragraphs were intended to provide some guidance on how many rationales to annotate. Even so, as section 4.2 shows, some annotators were considerably more thorough (and slower).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Annotators were also shown the following examples 5 of positive rationales:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 you will enjoy the hell out of American Pie.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 fortunately, they managed to do it in an interesting and funny way.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 he is one of the most exciting martial artists on the big screen, continuing to perform his own stunts and dazzling audiences with his flashy kicks and punches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 the romance was enchanting. and the following examples 5 of negative rationales:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Figure 1: Histograms of rationale counts per document (A0's annotations). The overall mean of 8.55 is close to that of the four annotators in Table 1 . The median and mode are 8 and 7.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 149, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 A woman in peril. A confrontation. An explosion. The end. Yawn. Yawn. Yawn.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 when a film makes watching Eddie Murphy a tedious experience, you know something is terribly wrong.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 the movie is so badly put together that even the most casual viewer may notice the miserable pacing and stray plot threads.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 don't go see this movie", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The annotation involves boldfacing the rationale phrases using an HTML editor. Note that a fancier annotation tool would be necessary for a task like named entity tagging, where an annotator must mark many named entities in a single document. At any given moment, such a tool should allow the annotator to highlight, view, and edit only the several rationales for the \"current\" annotated entity (the one most recently annotated or re-selected).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "One of the authors (A0) annotated folds 0-8 of the movie review set (1,800 documents) with rationales that supported the gold-standard classifications. This training/development set was used for all of the learning experiments in sections 5-6. A histogram of rationale counts is shown in Figure 1 . As mentioned in section 3, the rationale annotations were just textual substrings. The annotator did not require knowledge of the classifier features. Thus, our rationale dataset is a new resource 4 that could also be used to study exploitation of rationales under feature sets or learning methods other than those considered here (see section 8).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 288, |
| "end": 296, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To study the annotation process, we randomly selected 150 documents from the dataset. The doc-Rationales % rationales also % rationales also % rationales also % rationales also % rationales also per document annotated by A1 annotated by A2 annotated by AX annotated by AY ann. by anyone else A1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "5.02 (100 Table 1 : Average number of rationales and inter-annotator agreement for Tasks 2 and 3. A rationale by Ai (\"I think this is a great movie!\") is considered to have been annotated also by Aj if at least one of Aj's rationales overlaps it (\"I think this is a great movie!\"). In computing pairwise agreement on rationales, we ignored documents where Ai and Aj disagreed on the class. Notice that the most thorough annotator AY caught most rationales marked by the others (exhibiting high \"recall\"), and that most rationales enjoyed some degree of consensus, especially those marked by the least thorough annotator A1 (exhibiting high \"precision\").", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 10, |
| "end": 17, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "uments were split into three groups, each consisting of 50 documents (25 positive and 25 negative). Each subset was used for one of three tasks: 6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Task 1: Given the document, annotate only the class (positive/negative).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Task 2: Given the document and its class, annotate some rationales for that class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Task 3: Given the document, annotate both the class and some rationales for it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We carried out a pilot study (annotators AX and AY: two of the authors) and a later, more controlled study (annotators A1 and A2: paid students). The latter was conducted in a more controlled environment where both annotators used the same annotation tool and annotation setup as each other. Their guidelines were also more detailed (see section 4.1). In addition, the documents for the different tasks were interleaved to avoid any practice effect.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The annotators' classification accuracies in Tasks 1 and 3 (against Pang & Lee's labels) ranged from 92%-97%, with 4-way agreement on the class for 89% of the documents, and pairwise agreement also ranging from 92%-97%. Table 1 shows how many rationales the annotators provided and how well their rationales agreed.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 220, |
| "end": 227, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Interestingly, in Task 3, four of AX's rationales for a positive class were also partially highlighted by AY as support for AY's (incorrect) negative classifications, such as: 6 Each task also had a \"warmup\" set of 10 documents to be annotated before that tasks's 50 documents. Documents for Tasks 2 and 3 would automatically open in an HTML editor while Task 1 documents opened in an HTML viewer with no editing option. The annotators recorded their classifications for Tasks 1 and 3 on a spreadsheet.", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 177, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-annotator agreement", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "A1 time A2 time AX time AY time Task Table 2 : Average annotation rates on each task.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 36, |
| "text": "Task", |
| "ref_id": null |
| }, |
| { |
| "start": 37, |
| "end": 44, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "min./KB", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Even with its numerous flaws, the movie all comes together, if only for those who . . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "min./KB", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 \"Beloved\" acts like an incredibly difficult chamber drama paired with a ghost story.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "min./KB", |
| "sec_num": null |
| }, |
| { |
| "text": "Average annotation times are in Table 2 . As hoped, rationales did not take too much extra time for most annotators to provide. For each annotator except A2, providing rationales only took roughly twice the time (Task 3 vs. Task 1), even though it meant marking an average of 5-11 rationales in addition to the class. Why this low overhead? Because marking the class already required the Task 1 annotator to read the document and find some rationales, even if s/he did not mark them. The only extra work in Task 3 is in making them explicit. This synergy between class annotation and rationale annotation is demonstrated by the fact that doing both at once (Task 3) was faster than doing them separately (Tasks 1+2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation time", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We remark that this task-binary classification on full documents-seems to be almost a worst-case scenario for the annotation of rationales. At a purely mechanical level, it was rather heroic of A0 to attach 8-9 new rationale phrases r ij to every bit y i of ordinary annotation. Imagine by contrast a more local task of identifying entities or relations. Each lower-level annotation y i will tend to have fewer rationales r ij , while y i itself will be more complex and hence more difficult to mark. Thus, we expect that the overhead of collecting rationales will be less in many scenarios than the factor of 2 we measured.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation time", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Annotation overhead could be further reduced. For a multi-class problem like relation detection, one could ask the annotator to provide rationales only for the rarer classes. This small amount of extra time where the data is sparsest would provide extra guidance where it was most needed. Another possibility is passive collection of rationales via eye tracking.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation time", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Although this dataset seems to demand discourselevel features that contextualize bits of praise and criticism, we exactly follow Pang et al. (2002) and Pang and Lee (2004) in merely using binary unigram features, corresponding to the 17,744 unstemmed word or punctuation types with count \u2265 4 in the full 2000-document corpus. Thus, each document is reduced to a 0-1 vector with 17,744 dimensions, which is then normalized to unit length. 7 We used the method of section 3 to place additional constraints on a linear classifier. Given a training document, we create several contrast documents, each by deleting exactly one rationale substring from the training document. Converting documents to feature vectors, we obtained an original example \u2212 \u2192 x i and several contrast examples \u2212 \u2192 v i1 , \u2212 \u2192 v i2 , . . .. 8 Again, our training method required each original document to be classified more confidently (by a margin \u00b5) than its contrast documents.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 147, |
| "text": "Pang et al. (2002)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 152, |
| "end": 171, |
| "text": "Pang and Lee (2004)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 438, |
| "end": 439, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "If we were using more than unigram features, then simply deleting a rationale substring would not always be the best way to create a contrast document, as the resulting ungrammatical sentences might cause deep feature extraction to behave strangely (e.g., parse errors during preprocessing). The goal in creating the contrast document is merely to suppress features (n-grams, parts of speech, syntactic dependencies, . . . ) that depend in part on material in one or more rationales. This could be done directly by modifying the feature extractors, or, if one prefers to use existing feature extractors, by \"masking\" rather than deleting the rationale substring, e.g., replacing each of its word tokens with a special MASK token that is treated as an out-of-vocabulary word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "7 The vectors are normalized before prepending the 1 corresponding to the bias-term feature (mentioned in section 3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "8 The contrast examples were not normalized to precisely unit length, but instead were normalized by the same factor used to normalize x_i. This conveniently ensured that the pseudoexamples x_ij := (x_i - v_ij) / \u00b5 were sparse vectors, with 0 coordinates for all words not in the j-th rationale.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature extraction", |
| "sec_num": "5.1" |
| }, |
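The "masking" alternative mentioned above can be sketched in a few lines: replace each rationale token with a MASK pseudo-word, so existing feature extractors still see a well-formed token sequence and MASK simply behaves as an out-of-vocabulary word. The function name is a hypothetical illustration.

```python
def mask_rationale(tokens, rationale_positions, mask="MASK"):
    """Replace rationale tokens with a MASK pseudo-word instead of
    deleting them, preserving sentence structure for deeper features."""
    return [mask if i in rationale_positions else t
            for i, t in enumerate(tokens)]
```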
| { |
| "text": "We transformed this problem to an SVM problem (see section 3) and applied SVM light for training and testing, using the default linear kernel. We used only A0's rationales and the true classifications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Fold 9 was reserved as a test set. All accuracy results reported in the paper are the result of testing on fold 9, after training on subsets of folds 0-8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Our learning curves show accuracy after training on T < 9 folds (i.e., 200T documents), for various T . To reduce the noise in these results, the accuracy we report for training on T folds is actually the average of 9 different experiments with different (albeit overlapping) training sets that cover folds 0-8:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "1 9 8 i=0 acc(F 9 | \u03b8 * , F i+1 \u222a . . . \u222a F i+T )", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "where F j denotes the fold numbered j mod 9, and acc(Z | \u03b8, Y ) means classification accuracy on the set Z after training on Y with hyperparameters \u03b8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
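The averaging scheme of equation (7) — train on T consecutive folds (numbered mod 9), test on fold 9, and average the 9 rotations — can be sketched as below. `train_and_eval` is a hypothetical stand-in for the full SVM pipeline, not part of the paper.

```python
def averaged_accuracy(train_and_eval, T):
    """Average of equation (7): 9 rotated training sets F_{i+1}..F_{i+T}
    (fold numbers taken mod 9), each evaluated on the held-out fold 9."""
    accs = []
    for i in range(9):
        train_folds = [(i + 1 + k) % 9 for k in range(T)]
        accs.append(train_and_eval(train_folds, test_fold=9))
    return sum(accs) / 9.0
```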
| { |
| "text": "To evaluate whether two different training methods A and B gave significantly different averageaccuracy values, we used a paired permutation test (generalizing a sign test). The test assumes independence among the 200 test examples but not among the 9 overlapping training sets. For each of the 200 test examples in fold 9, we measured (a i , b i ), where a i (respectively b i ) is the number of the 9 training sets under which A (respectively B) classified the example correctly. The p value is the probability that the absolute difference between the average-accuracy values would reach or exceed the observed absolute difference, namely", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "| 1 200 200 i=1 a i \u2212b i 9 |, if each (a i , b i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "had an independent 1/2 chance of being replaced with (b i , a i ), as per the null hypothesis that A and B are indistinguishable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For any given value of T and any given training method, we chose hyperparameters \u03b8 * = (C, \u00b5, C contrast ) to maximize the following crossvalidation performance: 9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u03b8 * = argmax \u03b8 8 i=0 acc(F i | \u03b8, F i+1 \u222a . . . \u222a F i+T ) (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We used a simple alternating optimization procedure that begins at \u03b8 0 = (1.0, 1.0, 1.0) and cycles repeatedly through the three dimensions, optimizing along each dimension by a local grid search with resolution 0.1. 10 Of course, when training without rationales, we did not have to optimize \u00b5 or C contrast .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and testing procedures", |
| "sec_num": "5.2" |
| }, |
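The alternating optimization over \u03b8 = (C, \u00b5, C_contrast) can be sketched as a coordinate-wise hill climb on a grid of step 0.1, cycling through the dimensions until no coordinate improves. `cv_score` is a hypothetical stand-in for the cross-validation objective of equation (8), and the hill-climbing realization of "local grid search" is an assumption about the procedure's details.

```python
def alternating_grid_search(cv_score, theta0=(1.0, 1.0, 1.0), step=0.1):
    """Cycle through coordinates of theta, taking 0.1-grid steps that
    improve cv_score, until a full cycle yields no improvement."""
    theta = list(theta0)
    improved = True
    while improved:
        improved = False
        for d in range(len(theta)):          # C, mu, C_contrast in turn
            best = cv_score(tuple(theta))
            for delta in (-step, step):
                cand = list(theta)
                cand[d] = round(cand[d] + delta, 10)  # avoid float drift
                if cand[d] > 0:
                    score = cv_score(tuple(cand))
                    if score > best:
                        theta, best, improved = cand, score, True
    return tuple(theta)
```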
| { |
| "text": "The top curve (S1) in Figure 2 shows that performance does increase when we introduce rationales for the training examples as contrast examples (section 3). S1 is significantly higher than the baseline curve (S2) immediately below it, which trains an ordinary SVM classifier without using rationales. At the largest training set size, rationales raise the accuracy from 88.5% to 92.2%, a 32% error reduction. 9 One might obtain better performance (across all methods being compared) by choosing a separate \u03b8 * for each of the 9 training sets. However, to simulate real limited-data training conditions, one should then find the \u03b8 * for each {i, ..., j} using a separate cross-validation within {i, ..., j} only; this would slow down the experiments considerably.", |
| "cite_spans": [ |
| { |
| "start": 409, |
| "end": 410, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 22, |
| "end": 30, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The value of rationales", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "10 For optimizing along the C dimension, one could use the efficient method of Beineke et al. (2004) , but not in SVM light .", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 100, |
| "text": "Beineke et al. (2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The value of rationales", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The lower three curves (S3-S5) show that learning is separately helped by the rationale and the non-rationale portions of the documents. S3-S5 are degraded versions of the baseline S2: they are ordinary SVM classifiers that perform significantly worse than S2 (p < 0.001).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The value of rationales", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Removing the rationale phrases from the training documents (S3) made the test documents much harder to discriminate (compared to S2). This suggests that annotator A0's rationales often covered most of the usable evidence for the true class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The value of rationales", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "However, the pieces to solving the classification puzzle cannot be found solely in the short rationale phrases. Removing all non-rationale text from the training documents (S5) was even worse than removing the rationales (S3). In other words, we cannot hope to do well simply by training on just the rationales (S5), although that approach is improved somewhat in S4 by treating each rationale (similarly to S1) as a separate SVM training example.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The value of rationales", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "This presents some insight into why our method gives the best performance. The classifier in S1 is able to extract subtle patterns from the corpus, like S2, S3, or any other standard machine learning method, but it is also able to learn from a human annotator's decision-making strategy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The value of rationales", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "In practice, one might annotate rationales for only some training documents-either when annotating a new corpus or when adding rationales post hoc to an existing corpus. Thus, a range of options can be found between curves S2 and S1 of Figure 2 . Figure 3 explores this space, showing how far the learning curve S2 moves upward if one has time to annotate rationales for a fixed number of documents R. The key useful discovery is that much of the benefit can actually be obtained with relatively few rationales. For example, with 800 training documents, annotating (0%, 50%, 100%) of them with rationales gives accuracies of (86.9%, 89.2%, 89.3%). With the maximum of 1600 training documents, annotating (0%, 50%, 100%) with rationales gives (88.5%, 91.7%, 92.2%).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 236, |
| "end": 244, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 247, |
| "end": 255, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Using fewer rationales", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "To make this point more broadly, we find that the R = 200 curve is significantly above the R = 0 curve (p < 0.05) at all T \u2264 1200. By contrast, the R = 800, R = 1000, . . . R = 1600 points at each T The figure also suggests that rationales and documents may be somewhat orthogonal in their benefit. When one has many documents and few rationales, there is no longer much benefit in adding more documents (the curve is flattening out), but adding more rationales seems to provide a fresh benefit: rationales have not yet reached their point of diminishing returns. (While this fresh benefit was often statistically significant, and greater than the benefit from more documents, our experiments did not establish that it was significantly greater.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using fewer rationales", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The above experiments keep all of A0's rationales on a fraction of training documents. We also experimented with keeping a fraction of A0's rationales (chosen randomly with randomized rounding) on all training documents. This yielded no noteworthy or statistically significant differences from Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 294, |
| "end": 302, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Using fewer rationales", |
| "sec_num": "6.2" |
| }, |
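The "randomized rounding" used above to keep a fraction of each document's rationales can be sketched as follows: keep floor(p*n) rationales, plus one more with probability equal to the fractional remainder, so the expected count is exactly p*n. This interpretation is an assumption; the paper does not spell out the procedure.

```python
import random

def sample_rationales(rationales, p, seed=0):
    """Keep a p-fraction of rationales in expectation, via randomized
    rounding of p * len(rationales) to an integer count."""
    rng = random.Random(seed)
    n_exact = p * len(rationales)
    n = int(n_exact)
    if rng.random() < n_exact - n:   # round up with the fractional prob.
        n += 1
    return rng.sample(rationales, n)
```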
| { |
| "text": "These latter experiments simulate a \"lazy annotator\" who is less assiduous than A0. Such annotators may be common in the real world. We also suspect that they will be more desirable. First, they should be able to add more rationales per hour than the A0style annotator from Figure 3 : some rationales are simply more noticeable than others, and a lazy annotator will quickly find the most noticeable ones without wasting time tracking down the rest. Second, the \"most noticeable\" rationales that they mark may be the most effective ones for learning, although our random simulation of laziness could not test that.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 274, |
| "end": 282, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Using fewer rationales", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Our rationales resemble \"side information\" in machine learning-supplementary information about the target function that is available at training time. Side information is sometimes encoded as \"virtual examples\" like our contrast examples or pseudoexamples. However, past work generates these by automatically transforming the training examples in ways that are expected to preserve or alter the classification (Abu-Mostafa, 1995) . In another formulation, virtual examples are automatically generated but must be manually annotated (Kuusela and Ocone, 2004) . Our approach differs because a human helps to generate the virtual examples. Enforcing a margin between ordinary examples and contrast examples also appears new.", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 429, |
| "text": "(Abu-Mostafa, 1995)", |
| "ref_id": null |
| }, |
| { |
| "start": 532, |
| "end": 557, |
| "text": "(Kuusela and Ocone, 2004)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Other researchers have considered how to reduce annotation effort. In active learning, the annotator classifies only documents where the system so far is less confident (Lewis and Gale, 1994) , or in an information extraction setting, incrementally corrects details of the system's less confident entity segmentations and labelings (Culotta and McCallum, 2005) . Raghavan et al. (2005) asked annotators to identify globally \"relevant\" features. In contrast, our approach does not force the annotator to evaluate the importance of features individually, nor in a global context outside any specific document, nor even to know the learner's feature space. Annotators only mark text that supports their classification decision. Our methods then consider the combined effect of this text on the feature vector, which may include complex features not known to the annotator.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 191, |
| "text": "(Lewis and Gale, 1994)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 332, |
| "end": 360, |
| "text": "(Culotta and McCallum, 2005)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 363, |
| "end": 385, |
| "text": "Raghavan et al. (2005)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our SVM contrast method (section 3) is not the only possible way to use rationales. We would like to explicitly model rationale annotation as a noisy process that reflects, imperfectly and incompletely, the annotator's internal decision procedure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "A natural approach would start with log-linear models in place of SVMs. We can define a probabilistic classifier", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p \u03b8 (y | x) def = 1 Z(x) exp k h=1 \u03b8 h f h (x, y)", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "where f (\u2022) extracts a feature vector from a classified document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "A standard training method would be to choose \u03b8 to maximize the conditional likelihood of the training classifications:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "argmax \u03b8 n i=1 p \u03b8 (y i | x i )", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
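The log-linear classifier of equation (9) and the conditional log-likelihood objective of equation (10) can be sketched directly. The feature function and data in the test are toy illustrations, not from the paper.

```python
import math

def p_theta(theta, f, x, ys):
    """Equation (9): p_theta(y|x) for each y in ys, with Z(x) the
    normalizer summing the exponentiated scores over all labels."""
    scores = {y: math.exp(sum(t * fh for t, fh in zip(theta, f(x, y))))
              for y in ys}
    Z = sum(scores.values())
    return {y: s / Z for y, s in scores.items()}

def log_likelihood(theta, f, data, ys):
    """Equation (10) in log space: sum of log p_theta(y_i | x_i)."""
    return sum(math.log(p_theta(theta, f, x, ys)[y]) for x, y in data)
```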
| { |
| "text": "When a rationale r i is also available for each (x i , y i ), we propose to maximize a likelihood that tries to predict these rationale data as well:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "argmax \u03b8 n i=1 p \u03b8 (y i | x i ) \u2022 p \u03b8 (r i | x i , y i , \u03b8) (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Notice that a given guess of \u03b8 might make equation (10) large, yet accord badly with the annotator's rationales. In that case, the second term of equation (11) will exert pressure on \u03b8 to change to something that conforms more closely to the rationales. If the annotator is correct, such a \u03b8 will generalize better beyond the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In equation 11, p \u03b8 models the stochastic process of rationale annotation. What is an annotator actually doing when she annotates rationales? In particular, how do her rationales derive from the true value of \u03b8 and thereby tell us about \u03b8? Building a good model p \u03b8 of rationale annotation will require some exploratory data analysis. Roughly, we expect that if \u03b8 h f h (x i , y) is much higher for y = y i than for other values of y, then the annotator's r i is correspondingly more likely to indicate in some way that feature f h strongly influenced annotation y i . However, we must also model the annotator's limited patience (she may not annotate all important features), sloppiness (she may indicate only indirectly that f h is important), and bias (tendency to annotate some kinds of features at the expense of others).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "One advantage of this generative approach is that it eliminates the need for contrast examples. Consider a non-textual example in which an annotator highlights the line crossing in a digital image of the digit \"8\" to mark the rationale that distinguishes it from \"0.\" In this case it is not clear how to mask out that highlighted rationale to create a contrast example in which relevant features would not fire. 11", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "11 One cannot simply flip those highlighted pixels to white", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work: Generative models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "We have proposed a quite simple approach to improving machine learning by exploiting the cleverness of annotators, asking them to provide enriched annotations for training. We developed and tested a particular discriminative method that can use \"annotator rationales\"-even on a fraction of the training set-to significantly improve sentiment classification of movie reviews.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "9" |
| }, |
| { |
| "text": "We found fairly good annotator agreement on the rationales themselves. Most annotators provided several rationales per classification without taking too much extra time, even in our text classification scenario, where the rationales greatly outweigh the classifications in number and complexity. Greater speed might be possible through an improved user interface or passive feedback (e.g., eye tracking).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "9" |
| }, |
| { |
| "text": "In principle, many machine learning methods might be modified to exploit rationale data. While our experiments in this paper used a discriminative SVM, we plan to explore generative approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "9" |
| }, |
| { |
| "text": "To understand the local maximum issue, consider the hard problem of training a standard 3-layer feed-forward neural network. If the activations of the \"hidden\" layer's features (nodes) were observed at training time, then the network would decompose into a pair of independent 2-layer perceptrons. This turns an NP-hard problem with local maxima(Blum and Rivest, 1992) to a polytime-solvable convex problem. Although rationales might only provide indirect evidence of the hidden layer, this would still modify the objective function (see section 8) in a way that tended to make the correct weights easier to discover.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Taking Ccontrast to be constant means that all rationales are equally valuable. One might instead choose, for example, to reduce Ccontrast for examples xi that have many rationales, to prevent xi's contrast examples vij from together dominating the optimization. However, in this paper we assume that an xi with more rationales really does provide more evidence about the true classifier w.3 Polarity dataset version 2.0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available at http://cs.jhu.edu/\u223cozaidan/rationales.5 For our controlled study of annotation time (section 4.2), different examples were given with full document context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The sentimental factor: Improving review classification via humanprovided information", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Beineke", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hastie", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "263--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Beineke, T. Hastie, and S. Vaithyanathan. 2004. The sen- timental factor: Improving review classification via human- provided information. In Proc. of ACL, pages 263-270.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Training a 3-node neural network is NP-complete", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "L" |
| ], |
| "last": "Blum", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Rivest", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Neural Networks", |
| "volume": "5", |
| "issue": "1", |
| "pages": "117--127", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. L. Blum and R. L. Rivest. 1992. Training a 3-node neural network is NP-complete. Neural Networks, 5(1):117-127.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Reducing labeling effort for structured prediction tasks", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Culotta", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "746--751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Culotta and A. McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI, pages 746-751.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Using SVMs for text categorization", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Dumais", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "IEEE Intelligent Systems Magazine", |
| "volume": "13", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Dumais. 1998. Using SVMs for text categorization. IEEE Intelligent Systems Magazine, 13(4), July/August.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Text categorization with support vector machines: Learning with many relevant features", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of the European Conf. on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "137--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In Proc. of the European Conf. on Machine Learning, pages 137-142.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Learning with side information: PAC learning bounds", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Kuusela", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ocone", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "J. of Computer and System Sciences", |
| "volume": "68", |
| "issue": "3", |
| "pages": "521--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Kuusela and D. Ocone. 2004. Learning with side informa- tion: PAC learning bounds. J. of Computer and System Sci- ences, 68(3):521-545, May.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A sequential algorithm for training text classifiers", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proc. of ACM-SIGIR", |
| "volume": "", |
| "issue": "", |
| "pages": "3--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. D. Lewis and W. A. Gale. 1994. A sequential algorithm for training text classifiers. In Proc. of ACM-SIGIR, pages 3-12.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "271--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Pang and L. Lee. 2004. A sentimental education: Sen- timent analysis using subjectivity summarization based on minimum cuts. In Proc. of ACL, pages 271-278.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Thumbs up? Sentiment classification using machine learning techniques", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proc. of EMNLP, pages 79-86.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "or black, since that would cause new features to fire. Possibly one could simply suppress any feature that depends in any way on the highlighted pixels, but this would take away too many important features", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Raghavan", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Madani", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "41--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Raghavan, O. Madani, and R. Jones. 2005. Interactive fea- ture selection. In Proc. of IJCAI, pages 41-46. or black, since that would cause new features to fire. Possibly one could simply suppress any feature that depends in any way on the highlighted pixels, but this would take away too many important features, including global features.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Classification accuracy under five different experimental setups (S1-S5). At each training size, the 5 accuracies are pairwise significantly different (paired permutation test, p < 0.02; see section 5.2), except for {S3,S4} or {S4,S5} at some sizes.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Classification accuracy for T \u2208 {200, 400, ..., 1600} training documents (x-axis) when only R \u2208 {0, 200, ..., T } of them are annotated with rationales (different curves). The R = 0 curve above corresponds to the baseline S2 fromFigure 2. S1's points are found above as the leftmost points on the other curves, where R = T . value are all-pairs statistically indistinguishable.", |
| "type_str": "figure", |
| "uris": null |
| } |
| } |
| } |
| } |