{
"paper_id": "D10-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:28.089466Z"
},
"title": "Incorporating Content Structure into Text Analysis Applications",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Sauper",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "csauper@csail.mit.edu"
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "aria42@csail.mit.edu"
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "regina@csail.mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we investigate how modeling content structure can benefit text analysis applications such as extractive summarization and sentiment analysis. This follows the linguistic intuition that rich contextual information should be useful in these tasks. We present a framework which combines a supervised text analysis application with the induction of latent content structure. Both of these elements are learned jointly using the EM algorithm. The induced content structure is learned from a large unannotated corpus and biased by the underlying text analysis task. We demonstrate that exploiting content structure yields significant improvements over approaches that rely only on local context. 1",
"pdf_parse": {
"paper_id": "D10-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we investigate how modeling content structure can benefit text analysis applications such as extractive summarization and sentiment analysis. This follows the linguistic intuition that rich contextual information should be useful in these tasks. We present a framework which combines a supervised text analysis application with the induction of latent content structure. Both of these elements are learned jointly using the EM algorithm. The induced content structure is learned from a large unannotated corpus and biased by the underlying text analysis task. We demonstrate that exploiting content structure yields significant improvements over approaches that rely only on local context. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we demonstrate that leveraging document structure significantly benefits text analysis applications. As a motivating example, consider the excerpt from a DVD review shown in Table 1 . This review discusses multiple aspects of a product, such as audio and video properties. While the word \"pleased\" is a strong indicator of positive sentiment, the sentence in which it appears does not specify the aspect to which it relates. Resolving this ambiguity requires information about global document structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A central challenge in utilizing such information lies in finding a relevant representation of content structure for a specific text analysis task. For Audio Audio choices are English, Spanish and French Dolby Digital 5.1 ... Bass is still robust and powerful, giving weight to just about any scene -most notably the film's exciting final fight. Fans should be pleased with the presentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extras This single-disc DVD comes packed in a black amaray case with a glossy slipcover. Cover art has clearly been designed to appeal the Twilight crowd ... Finally, we've got a deleted scenes reel. Most of the excised scenes are actually pretty interesting. instance, when performing single-aspect sentiment analysis, the most relevant aspect of content structure is whether a given sentence is objective or subjective (Pang and Lee, 2004) . In a multi-aspect setting, however, information about the sentence topic is required to determine the aspect to which a sentiment-bearing word relates (Snyder and Barzilay, 2007 ). As we can see from even these closely related applications, the content structure representation should be intimately tied to a specific text analysis task.",
"cite_spans": [
{
"start": 421,
"end": 441,
"text": "(Pang and Lee, 2004)",
"ref_id": "BIBREF18"
},
{
"start": 595,
"end": 621,
"text": "(Snyder and Barzilay, 2007",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we present an approach in which a content model is learned jointly with a text analysis task. We assume complete annotations for the task itself, but we learn the content model from raw, unannotated text. Our approach is implemented in a discriminative framework using latent variables to represent facets of content structure. In this framework, the original task features (e.g., lexical ones) are conjoined with latent variables to enrich the features with global contextual information. For example, in Table 1 , the feature associated with the word \"pleased\" should contribute most strongly to the sentiment of the audio aspect when it is augmented with a relevant topic indicator.",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The coupling of the content model and the taskspecific model allows the two components to mutually influence each other during learning. The content model leverages unannotated data to improve the performance of the task-specific model, while the task-specific model provides feedback to improve the relevance of the content model. The combined model can be learned effectively using a novel EM-based method for joint training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our approach on two complementary text analysis tasks. Our first task is a multi-aspect sentiment analysis task, where a system predicts the aspect-specific sentiment ratings (Snyder and Barzilay, 2007) . Second, we consider a multi-aspect extractive summarization task in which a system extracts key properties for a pre-specified set of aspects. On both tasks, our method for incorporating content structure consistently outperforms structureagnostic counterparts. Moreover, jointly learning content and task parameters yields additional gains over independently learned models.",
"cite_spans": [
{
"start": 187,
"end": 214,
"text": "(Snyder and Barzilay, 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior research has demonstrated the usefulness of content models for discourse-level tasks. Examples of such tasks include sentence ordering (Barzilay and Lee, 2004; Elsner et al., 2007) , extraction-based summarization (Haghighi and Vanderwende, 2009) and text segmentation . Since these tasks are inherently tied to document structure, a content model is essential to performing them successfully. In contrast, the applications considered in this paper are typically developed without any discourse information, focusing on capturing sentencelevel relations. Our goal is to augment these models with document-level content information.",
"cite_spans": [
{
"start": 141,
"end": 165,
"text": "(Barzilay and Lee, 2004;",
"ref_id": "BIBREF0"
},
{
"start": 166,
"end": 186,
"text": "Elsner et al., 2007)",
"ref_id": "BIBREF8"
},
{
"start": 220,
"end": 252,
"text": "(Haghighi and Vanderwende, 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several applications in information extraction and sentiment analysis are close in spirit to our work (Pang and Lee, 2004; Patwardhan and Riloff, 2007; McDonald et al., 2007) . These approaches consider global contextual information when determining whether a given sentence is relevant to the underlying analysis task. All assume that relevant sentences have been annotated. For instance, Pang and Lee (2004) refine the accuracy of sentiment analysis by considering only the subjective sentences of a review as determined by an independent classifier. Patwardhan and Riloff (2007) take a similar approach in the context of information extraction. Rather than applying their extractor to all the sentences in a document, they limit it to eventrelevant sentences. Since these sentences are more likely to contain information of interest, the extraction performance increases.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Pang and Lee, 2004;",
"ref_id": "BIBREF18"
},
{
"start": 123,
"end": 151,
"text": "Patwardhan and Riloff, 2007;",
"ref_id": "BIBREF20"
},
{
"start": 152,
"end": 174,
"text": "McDonald et al., 2007)",
"ref_id": "BIBREF17"
},
{
"start": 390,
"end": 409,
"text": "Pang and Lee (2004)",
"ref_id": "BIBREF18"
},
{
"start": 553,
"end": 581,
"text": "Patwardhan and Riloff (2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another approach, taken by Choi and Cardie (2008) and Somasundaran et al. (2009) uses linguistic resources to create a latent model in a taskspecific fashion to improve performance, rather than assuming sentence-level task relevancy. Choi and Cardie (2008) address a sentiment analysis task by using a heuristic decision process based on wordlevel intermediate variables to represent polarity. Somasundaran et al. (2009) similarly uses a bootstrapped local polarity classifier to identify sentence polarity.",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "Choi and Cardie (2008)",
"ref_id": "BIBREF5"
},
{
"start": 54,
"end": 80,
"text": "Somasundaran et al. (2009)",
"ref_id": "BIBREF23"
},
{
"start": 234,
"end": 256,
"text": "Choi and Cardie (2008)",
"ref_id": "BIBREF5"
},
{
"start": 394,
"end": 420,
"text": "Somasundaran et al. (2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "McDonald et al. 2007propose a model which jointly identifies global polarity as well as paragraph-and sentence-level polarity, all of which are observed in training data. While our approach uses a similar hierarchy, McDonald et al. (2007) is concerned with recovering the labels at all levels, whereas in this work we are interested in using latent document content structure as a means to benefit task predictions.",
"cite_spans": [
{
"start": 216,
"end": 238,
"text": "McDonald et al. (2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While our method also incorporates contextual information into existing text analysis applications, our approach is markedly different from the above approaches. First, our representation of context encodes more than the relevance-based binary distinction considered in the past work. Our algorithm adjusts the content model dynamically for a given task rather than pre-specifying it. Second, while previous work is fully supervised, in our case relevance annotations are readily available for only a few applications and are prohibitively expensive to obtain for many others. To overcome this drawback, our method induces a content model in an unsupervised fashion and connects it via latent variables to the target model. This design not only eliminates the need for additional annotations, but also allows the algorithm to leverage large quantities of raw data for training the content model. The tight coupling of rel-evance learning with the target analysis task leads to further performance gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, our work relates to supervised topic models in Blei and McAullife (2007) . In this work, latent topic variables are used to generate text as well as a supervised sentiment rating for the document. However, this architecture does not permit the usage of standard discriminative models which condition freely on textual features.",
"cite_spans": [
{
"start": 56,
"end": 81,
"text": "Blei and McAullife (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe a model which incorporates content information into a multi-aspect summarization task. 2 Our approach assumes that at training time we have a collection of labeled documents D L , each consisting of the document text s and true task-specific labeling y * . For the multiaspect summarization task, y * consists of sequence labels (e.g., value or service) for the tokens of a document. Specifically, the document text s is composed of sentences s 1 , . . . , s n and the labelings y * consists of corresponding label sequences y 1 , . . . , y n . 3 As is common in related work, we model each y i using a CRF which conditions on the observed document text. In this work, we also assume a content model, which we fix to be the document-level HMM as used in Barzilay and Lee (2004) . In this content model, each sentence s i is associated with a hidden topic variable T i which generates the words of the sentence. We will use T = (T 1 , . . . , T n ) to refer to the hidden topic sequence for a document. We fix the number of topics to a pre-specified constant K.",
"cite_spans": [
{
"start": 574,
"end": 575,
"text": "3",
"ref_id": null
},
{
"start": 783,
"end": 806,
"text": "Barzilay and Lee (2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3.1 Problem Formulation",
"sec_num": "3"
},
{
"text": "Our model, depicted in Figure 1 , proceeds as follows: First the document-level HMM generates a hidden content topic sequence T for the sentences of a document. This content component is parametrized by \u03b8 and decomposes in the standard Figure 1 : A graphical depiction of our model for sequence labeling tasks. The T i variable represents the content model topic for the ith sentence",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 1",
"ref_id": null
},
{
"start": 236,
"end": 244,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "y 1 i y m i y 2 i . . . T i w 1 i w m i w 2 i . . . T i\u22121 T i+1 (w 2 i = pleased) \u2227 (T i = 3) w 2 i = pleased ... s i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "s i . The words of s i , (w 1 i , . . . , w m i ), each have a task label (y 1 i , . . . , y m i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "Note that each token label has an undirected edge to a factor containing the words of the current sentence, s i as well as the topic of the current sentence T i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "HMM fashion: 4 P \u03b8 (s, T ) = n i=1 P \u03b8 (T i |T i\u22121 ) w\u2208s i P \u03b8 (w|T i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "Then the label sequences for each sentence in the document are independently modeled as CRFs which condition on both the sentence features and the sentence topic:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u03c6 (y|s, T ) = n i=1 P \u03c6 (y i |s i , T i )",
"eq_num": "(2)"
}
],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "Each sentence CRF is parametrized by \u03c6 and takes the standard form: Figure 2 : A graphical depiction of the generative process for a labeled document at training time (See Section 3); shaded nodes indicate variables which are observed at training time. First the latent underlying content structure T is drawn. Then, the document text s is drawn conditioned on the content structure utilizing content parameters \u03b8. Finally, the observed task labels for the document are modeled given s and T using the task parameters \u03c6. Note that the arrows for the task labels are undirected since they are modeled discriminatively.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "P \u03c6 (y|s, T ) \u221d exp \uf8f1 \uf8f2 \uf8f3 j \u03c6 T f N (y j , s, T ) + f E (y j , y j+1 ) \uf8fc \uf8fd \uf8fe T s y * \u03b8 \u03c6 Content Parameters Task Parameters Task Labels Text Content Structure",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "where f N (\u2022) and f E (\u2022) are feature functions associated with CRF nodes and transitions respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "Allowing the CRF to condition on the sentence topic T i permits predictions to be more sensitive to content. For instance, using the example from Table 1, we could have a feature that indicates the word \"pleased\" conjoined with the segment topic (see Figure 1) . These topic-specific features serve to disambiguate word usage.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 260,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "This joint process, depicted graphically in Figure 2, is summarized as:",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 50,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (T , s, y * ) = P \u03b8 (T , s)P \u03c6 (y * |s, T )",
"eq_num": "(3)"
}
],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "Note that this probability decomposes into a document-level HMM term (the content component) as well as a product of CRF terms (the task component).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "During learning, we would like to find the document-level HMM parameters \u03b8 and the summarization task CRF parameters \u03c6 which maximize the likelihood of the labeled documents. The only observed elements of a labeled document are the document text s and the aspect labels y * . This objective is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "L L (\u03c6, \u03b8) = (s,y * )\u2208D L log P (s, y * ) = (s,y * )\u2208D L log T P (T , s, y * )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "We use the EM algorithm to optimize this objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.3"
},
{
"text": "Step The E-Step in EM requires computing the posterior distribution over latent variables. In this model, the only latent variables are the sentence topics T . To compute this term, we utilize the decomposition in Equation 3and rearrange HMM and CRF terms to obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E-",
"sec_num": null
},
{
"text": "P (T , s, y * ) = P \u03b8 (T , s)P \u03c6 (y * |T , s) = n i=1 P \u03b8 (T i |T i\u22121 ) w\u2208s i P \u03b8 (w|T i ) \u2022 n i=1 P \u03c6 (y * i |s i , T i ) = n i=1 P \u03b8 (T i |T i\u22121 )\u2022 w\u2208s i P \u03b8 (w|T i )P \u03c6 (y * i |s i , T i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E-",
"sec_num": null
},
{
"text": "We note that this expression takes the same form as the document-level HMM, except that in addition to emitting the words of a sentence, we also have an observation associated with the sentence sequence labeling. We treat each P \u03c6 (y * i |s i , T i ) as part of the node potential associated with the document-level HMM. We utilize the Forward-Backward algorithm as one would with the document-level HMM in isolation, except that each node potential incorporates this CRF term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E-",
"sec_num": null
},
{
"text": "Step We perform separate M-Steps for content and task parameters. The M-Step for the content parameters is identical to the document-level HMM content model: topic emission and transition distributions are updated with expected counts derived from E-Step topic posteriors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-",
"sec_num": null
},
{
"text": "The M-Step for the task parameters does not have a closed-form solution. Recall that in the M-Step, we maximize the log probability of all random variables given expectations of latent variables. Using the decomposition in Equation 3, it is clear that the only component of the joint labeled document probability which relies upon the task parameters is log P \u03c6 (y * |s, T ). Thus for the M-Step, it is sufficient to optimize the following with respect to \u03c6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-",
"sec_num": null
},
{
"text": "E T |s, y * log P \u03c6 (y * |s, T ) = n i=1 E T i |s i , y * i log P \u03c6 (y * i |s i , T i ) = n i=1 K k=1 P (T i = k|s i , y * i ) log P \u03c6 (y * i |s i , T i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-",
"sec_num": null
},
{
"text": "The first equality follows from the decomposition of the task component into independent CRFs (see Equation 2). Optimizing this objective is equivalent to a weighted version of the conditional likelihood objective used to train the CRF in isolation. An intuitive explanation of this process is that there are multiple CRF instances, one for each possible hidden topic T . Each utilizes different content features to explain the sentence sequence labeling. These instances are weighted according to the posterior over T obtained during the E-Step. While this objective is non-convex due to the summation over T , we can still optimize it using any gradient-based optimization solver; in our experiments, we used the LBFGS algorithm (Liu et al., 1989) .",
"cite_spans": [
{
"start": 731,
"end": 749,
"text": "(Liu et al., 1989)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M-",
"sec_num": null
},
{
"text": "We must predict a label sequence y for each sentence s of the document. We assume a loss function over a sequence labeling y and a proposed labelin\u011d y, which decomposes as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "L(y,\u0177) = j L(y j ,\u0177 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "where each position loss is sensitive to the kind of error which is made. Failing to extract a token is penalized to a greater extent than extracting it with an incorrect label:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "L(y j ,\u0177 j ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 if\u0177 j = y j c if y j = NONE and\u0177 j = NONE 1 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "In this definition, NONE represents the background label which is reserved for tokens which do not correspond to labels of interest. The constant c represents a user-defined trade-off between precision and recall errors. For our multi-aspect summarization task, we select c = 4 for Yelp and c = 5 for Amazon to combat the high-precision bias typical of conditional likelihood models. At inference time, we select the single labeling which minimizes the expected loss with respect to model posterior over label sequences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "y = min y E y|s L(y,\u0177) = min y j=1 E y j |s L(y j ,\u0177 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "In our case, we must marginalize out the sentence topic T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "P (y j |s) = T P (y j , T |s) = T P \u03b8 (T |s)P \u03c6 (y j |s, T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "This minimum risk criterion has been widely used in NLP applications such as parsing (Goodman, 1999) and machine translation (DeNero et al., 2009) . Note that the above formulation differs from the standard CRF due to the latent topic variables. Otherwise the inference task could be accomplished by directly obtaining posteriors over each y j state using the Forward-Backwards algorithm on the sentence CRF. Finding\u0177 can be done efficiently. First, we obtain marginal token posteriors as above. Then, the expected loss of a token prediction is computed as follows:",
"cite_spans": [
{
"start": 85,
"end": 100,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF12"
},
{
"start": 125,
"end": 146,
"text": "(DeNero et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "\u0177 j P (y j |s)L(y j ,\u0177 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "Once we obtain expected losses of each token prediction, we compute the minimum risk sequence labeling by running the Viterbi algorithm. The potential for each position and prediction is given by the negative expected loss. The maximal scoring sequence according to these potentials minimizes the expected risk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.4"
},
{
"text": "Our model allows us to incorporate unlabeled documents, denoted D U , to improve the learning of the content model. For an unlabeled document we only observe the document text s and assume it is drawn from the same content model as our labeled documents. The objective presented in Section 3.3 assumed that all documents were labeled; here we supplement this objective by capturing the likelihood of unlabeled documents according to the content model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging unannotated data",
"sec_num": "3.5"
},
{
"text": "L U (\u03b8) = s\u2208D U log P \u03b8 (s) = s\u2208D U log T P \u03b8 (s, T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging unannotated data",
"sec_num": "3.5"
},
{
"text": "Our overall objective function is to maximize the likelihood of both our labeled and unlabeled data. This objective corresponds to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging unannotated data",
"sec_num": "3.5"
},
{
"text": "L(\u03c6, \u03b8) =L U (\u03b8) + L L (\u03c6, \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging unannotated data",
"sec_num": "3.5"
},
{
"text": "This objective can also be optimized using the EM algorithm, where the E-Step for labeled and unlabeled documents is outlined above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging unannotated data",
"sec_num": "3.5"
},
{
"text": "The approach outlined can be applied to a wider range of task components. For instance, in Section 4.1 we apply this approach to multi-aspect sentiment analysis. In this task, the target y consists of numeric sentiment ratings (y 1 , . . . , y K ) for each of K aspects. The task component consists of independent linear regression models for each aspect sentiment rating. For the content model, we associate a topic with each paragraph; T consists of assignments of topics to each document paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization",
"sec_num": "3.6"
},
{
"text": "The model structure still decomposes as in Figure 2, but the details of learning are slightly different. For instance, because the task label (aspect sentiment ratings) is not localized to any region of the document, all content model variables influence the target response. Conditioned on the target label, all topic variables become correlated. Thus when learning, the E-Step requires computing a posterior over paragraph topic tuples T :",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 49,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalization",
"sec_num": "3.6"
},
{
"text": "P (T |y, s) \u221d P (s, T )P (y|T , s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization",
"sec_num": "3.6"
},
{
"text": "For the case of our multi-aspect sentiment task, this computation can be done exactly by enumerating T tuples, since the number of sentences and possible topics is relatively small. If summation is intractable, the posterior may be approximated using variational techniques (Bishop, 2006) , which is applicable to a broad range of potential applications.",
"cite_spans": [
{
"start": 274,
"end": 288,
"text": "(Bishop, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization",
"sec_num": "3.6"
},
{
"text": "We apply our approach to two text analysis tasks that stand to benefit from modeling content structure: multi-aspect sentiment analysis and multi-aspect review summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Set-Up",
"sec_num": "4"
},
{
"text": "In the following section, we define each task in detail, explain the task-specific adaptation of the model and describe the data sets used in the experiments. Table 2 summarizes statistics for all the data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "For all tasks, when using a content model with a task model, we utilize a new set of features which include all the original features as well as a copy of each feature conjoined with the content topic assignment (see Figure 1) . We also include a feature which indicates whether a given word was most likely emitted from the underlying topic or from a background distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 226,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "Multi-Aspect Sentiment Ranking The goal of multi-aspect sentiment classification is to predict a set of numeric ranks that reflects the user satisfaction for each aspect (Snyder and Barzilay, 2007) . One of the challenges in this task is to attribute sentimentbearing words to the aspects they describe. Information about document structure has the potential to greatly reduce this ambiguity.",
"cite_spans": [
{
"start": 170,
"end": 197,
"text": "(Snyder and Barzilay, 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "Following standard sentiment ranking approaches (Wilson et al., 2004; Pang and Lee, 2005; Goldberg and Zhu, 2006; Snyder and Barzilay, 2007) , we employ ordinary linear regression to independently map bag-of-words representations into predicted aspect ranks. In addition to commonly used lexical features, this set is augmented Table 2 : This table summarizes the size of each corpus. In each case, the unlabeled texts of both labeled and unlabeled documents are used for training the content model, while only the labeled training corpus is used to train the task model. Note that the entire data set for the multi-aspect sentiment analysis task is labeled. with content features as described above. For this application, we fix the number of HMM states to be equal to the predefined number of aspects. We test our sentiment ranker on a set of DVD reviews from the website IGN.com. 5 Each review is accompanied by 1-10 scale ratings in four categories that assess the quality of a movie's content, video, audio, and DVD extras. In this data set, segments corresponding to each of the aspects are clearly delineated in each document. Therefore, we can compare the performance of the algorithm using automatically induced content models against the gold standard structural information.",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Wilson et al., 2004;",
"ref_id": "BIBREF25"
},
{
"start": 70,
"end": 89,
"text": "Pang and Lee, 2005;",
"ref_id": "BIBREF19"
},
{
"start": 90,
"end": 113,
"text": "Goldberg and Zhu, 2006;",
"ref_id": "BIBREF11"
},
{
"start": 114,
"end": 140,
"text": "Snyder and Barzilay, 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "Multi-Aspect Review Summarization The goal of this task is to extract informative phrases that identify information relevant to several predefined aspects of interest. In other words, we would like our system to both extract important phrases (e.g., cheap food) and label it with one of the given aspects (e.g., value). For concrete examples and lists of aspects for each data set, see Figures 3b and 3c. Variants of this task have been considered in review summarization in previous work (Kim and Hovy, 2006; . This task has elements of both information extraction and phrase-based summarization -the phrases we wish to extract are broader in scope than in standard template-driven IE, but at the same time, the type of selected information is restricted to the defined aspects, similar to query-based summarization. The difficulty here is that phrase selection is highly context-dependent. For instance, in TV reviews such as in Figure 3b , the highlighted phrase \"easy to read\" might refer to either the menu or the remote; broader 5 http://dvd.ign.com/index/reviews.html context is required for correct labeling.",
"cite_spans": [
{
"start": 489,
"end": 509,
"text": "(Kim and Hovy, 2006;",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 931,
"end": 940,
"text": "Figure 3b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "We evaluated our approach for this task on two data sets: Amazon TV reviews (Figure 3b ) and Yelp restaurant reviews (Figure 3c ). To eliminate noisy reviews, we only retain documents that have been rated \"helpful\" by the users of the site; we also remove reviews which are abnormally short or long.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 86,
"text": "(Figure 3b",
"ref_id": null
},
{
"start": 117,
"end": 127,
"text": "(Figure 3c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "Each data set was manually annotated with aspect labels using Mechanical Turk, which has been used in previous work to annotate NLP data (Snow et al., 2008 ). Since we cannot select high-quality annotators directly, we included a control document which had been previously annotated by a native speaker among the documents assigned to each annotator. The work of any annotator who exhibited low agreement on the control document annotation was excluded from the corpus. To test task annotation agreement, we use Cohen's Kappa (Cohen, 1960) . On the Amazon data set, two native speakers annotated a set of four documents. The agreement between the judges was 0.54. On the Yelp data set, we simply computed the agreement between all pairs of reviewers who received the same control documents; the agreement was 0.49.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "(Snow et al., 2008",
"ref_id": "BIBREF21"
},
{
"start": 526,
"end": 539,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4.1"
},
{
"text": "Baselines For all the models, we obtain a baseline system by eliminating content features and only using a task model with the set of features described above. We also compare against a simplified variant of our method wherein a content model is induced in isolation rather than learned jointly in the context of the underlying task. In our experiments, we refer to the two methods as the No Content Model (NoCM) and Independent Content Model (IndepCM) settings, respectively. The Joint Content M = Movie V = Video A = Audio E = Extras M This collection certainly offers some nostalgic fun, but at the end of the day, the shows themselves, for the most part, just don't hold up. 5V Regardless, this is a fairly solid presentation, but it's obvious there was room for improvement. 7A Bass is still robust and powerful. Fans should be pleased with this presentation. 8E The deleted scenes were quite lengthy, but only shelled out a few extra laughs. (c) Sample labeled text from the Yelp multi-aspect summarization corpus Figure 3 : Excerpts from the three corpora with the corresponding labels. Note that sentences from the multi-aspect summarization corpora generally focus on only one or two aspects. The multi-aspect sentiment corpus has labels per paragraph rather than per sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 1020,
"end": 1028,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Comparison and Evaluation",
"sec_num": "4.2"
},
{
"text": "Model (JointCM) setting refers to our full model described in Section 3, where content and task components are learned jointly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Comparison and Evaluation",
"sec_num": "4.2"
},
{
"text": "Evaluation Metrics For multi-aspect sentiment ranking, we report the average L 2 (squared difference) and L 1 (absolute difference) between system prediction and true 1-10 sentiment rating across test documents and aspects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Comparison and Evaluation",
"sec_num": "4.2"
},
{
"text": "For the multi-aspect summarization task, we measure average token precision and recall of the label assignments (Multi-label). For the Amazon corpus, we also report a coarser metric which measures extraction precision and recall while ignoring labels (Binary labels) as well as ROUGE (Lin, 2004 Table 4 : Results for multi-aspect summarization on the Yelp corpus. Marked precision and recall are statistically significant with p < 0.05: * over the previous model and \u2020 over NoCM.",
"cite_spans": [
{
"start": 284,
"end": 294,
"text": "(Lin, 2004",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 295,
"end": 302,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Comparison and Evaluation",
"sec_num": "4.2"
},
{
"text": "each system to predict the same number of tokens as the original labeled document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Comparison and Evaluation",
"sec_num": "4.2"
},
{
"text": "Our metrics of statistical significance vary by task. For the sentiment task, we use Student's ttest. For the multi-aspect summarization task, we perform chi-square analysis on the ROUGE scores as well as on precision and recall separately, as is commonly done in information extraction (Freitag, 2004; Weeds et al., 2004; Finkel and Manning, 2009) .",
"cite_spans": [
{
"start": 287,
"end": 302,
"text": "(Freitag, 2004;",
"ref_id": "BIBREF10"
},
{
"start": 303,
"end": 322,
"text": "Weeds et al., 2004;",
"ref_id": "BIBREF24"
},
{
"start": 323,
"end": 348,
"text": "Finkel and Manning, 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Comparison and Evaluation",
"sec_num": "4.2"
},
{
"text": "In this section, we present the results of the methods on the tasks described above (see Tables 3, 4, and 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Baseline Comparisons Adding a content model significantly outperforms the NoCM baseline on both tasks. The highest F 1 error reduction -14.7% -is achieved on multi-aspect summarization on the Yelp corpus, followed by the reduction of 11.5% and 8.75%, on multi-aspect summarization on the Amazon corpus and multi-aspect sentiment ranking, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We also observe a consistent performance boost when comparing against the IndepCM baseline. This result confirms our hypothesis about the ad- Table 5 : Results for multi-aspect summarization on the Amazon corpus. Marked ROUGE, precision, and recall are statistically significant with p < 0.05: * over the previous model and \u2020 over NoCM.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "vantages of jointly learning the content model in the context of the underlying task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "One alternative to an explicit content model is to simply incorporate additional features into NoCM as a proxy for contextual information. In the multi-aspect summarization case, this can be accomplished by adding unigram features from the sentences before and after the current one. 6 When testing this approach, however, the performance of NoCM actually decreases on both Amazon (to 15.0% F 1 ) and Yelp (to 24.5% F 1 ) corpora. This result is not surprising for this particular taskby adding these features, we substantially increase the feature space without increasing the amount of training data. An advantage of our approach is that our learned representation of context is coarse, and we can leverage large quantities of unannotated training data. Impact of content model quality on task performance In the multi-aspect sentiment ranking task, we have access to gold standard documentlevel content structure annotation. This affords us the ability to compare the ideal content structure, provided by the document authors, with one that is learned automatically. As Table 3 shows, the manually created document structure segmentation yields the best results. However, the performance of our JointCM model is not far behind the gold standard content structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 1073,
"end": 1080,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with additional context features",
"sec_num": null
},
{
"text": "The quality of the induced content model is determined by the amount of training data. As Figure 4 shows, the multi-aspect summarizer improves with the increase in the size of raw data available for learning content model. Compensating for annotation sparsity We hypothesize that by incorporating rich contextual information, we can reduce the need for manual task annotation. We test this by reducing the amount of annotated data available to the model and measuring performance at several quantities of unannotated data. As Figure 5 shows, the performance increase achieved by doubling the amount of annotated data can also be achieved by adding only 12.5% of the unlabeled data.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 98,
"text": "As Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 526,
"end": 534,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with additional context features",
"sec_num": null
},
{
"text": "In this paper, we demonstrate the benefits of incorporating content models in text analysis tasks. We also introduce a framework to allow the joint learning of an unsupervised latent content model with a supervised task-specific model. On multiple tasks and datasets, our results empirically connect model quality and task performance, suggesting that fur- 22.6 18.9 Figure 5 : Results on the Amazon corpus using half of the annotated training documents. The content model is trained with 0%, 12.5%, and 25% of additional unlabeled data. 7 The dashed horizontal line represents NoCM with the complete annotated set.",
"cite_spans": [
{
"start": 538,
"end": 539,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 367,
"end": 375,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "ther improvements in content modeling may yield even further gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Code and processed data presented here are available at http://groups.csail.mit.edu/rbg/code/content structure.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In Section 3.6, we discuss how this framework can be used for other text analysis applications.3 Note that each yi is a label sequence across the words in si, rather than an individual label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also utilize a hierarchical emission model so that each topic distribution interpolates between a topic-specific distribution as well as a shared background model; this is intended to capture domain-specific stop words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This type of feature is not applicable to our multi-aspect sentiment ranking task, as we already use unigram features from the entire document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because we append the unlabeled versions of the labeled data to the unlabeled set, even with 0% additional unlabeled data, there is a small data set to train the content model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors acknowledge the support of the NSF (CAREER grant IIS-0448168) and NIH (grant 5-R01-LM009723-02). Thanks to Peter Szolovits and the MIT NLP group for their helpful comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Catching the drift: Probabilistic content models, with applications to generation and summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the NAACL/HLT",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the NAACL/HLT, pages 113-120.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pattern Recognition and Machine Learning (Information Science and Statistics)",
"authors": [
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statis- tics). Springer-Verlag New York, Inc.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Supervised Topic Models",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"D"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcaullife",
"suffix": ""
}
],
"year": 2007,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and Jon D. McAullife. 2007. Supervised Topic Models. In NIPS.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning document-level semantic properties from free-text annotations",
"authors": [
{
"first": "S",
"middle": [
"R K"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2009,
"venue": "JAIR",
"volume": "34",
"issue": "",
"pages": "569--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. R. K. Branavan, Harr Chen, Jacob Eisenstein, and Regina Barzilay. 2009. Learning document-level se- mantic properties from free-text annotations. JAIR, 34:569-603.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Content modeling using latent permutations",
"authors": [
{
"first": "S",
"middle": [
"R K"
],
"last": "Harr Chen",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Branavan",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Karger",
"suffix": ""
}
],
"year": 2009,
"venue": "JAIR",
"volume": "36",
"issue": "",
"pages": "129--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harr Chen, S. R. K. Branavan, Regina Barzilay, and David R. Karger. 2009. Content modeling using la- tent permutations. JAIR, 36:129-163.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with com- positional semantics as structural inference for sub- sentential sentiment analysis. In Proceedings of the EMNLP, pages 793-801.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Coefficient of Agreement for Nominal Scales",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fast consensus decoding over translation forests",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL/IJCNLP",
"volume": "",
"issue": "",
"pages": "567--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero, David Chiang, and Kevin Knight. 2009. Fast consensus decoding over translation forests. In Proceedings of the ACL/IJCNLP, pages 567-575.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A unified local and global model for discourse coherence",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Austerweil",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the NAACL/HLT",
"volume": "",
"issue": "",
"pages": "436--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Proceedings of the NAACL/HLT, pages 436-443.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Joint parsing and named entity recognition",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2009. Joint parsing and named entity recognition. In Pro- ceedings of the NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Trained named entity recognition using distributional clusters",
"authors": [
{
"first": "Dayne",
"middle": [],
"last": "Freitag",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "262--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dayne Freitag. 2004. Trained named entity recogni- tion using distributional clusters. In Proceedings of the EMNLP, pages 262-269.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Seeing stars when there aren't many stars: Graph-based semi-supervised learning for sentiment categorization",
"authors": [
{
"first": "B",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the NAACL/HLT Workshop on TextGraphs",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew B. Goldberg and Xiaojin Zhu. 2006. See- ing stars when there aren't many stars: Graph-based semi-supervised learning for sentiment categoriza- tion. In Proceedings of the NAACL/HLT Workshop on TextGraphs, pages 45-52.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semiring parsing",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "573--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Goodman. 1999. Semiring parsing. Computa- tional Linguistics, 25(4):573-605.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploring content models for multi-document summarization",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the NAACL/HLT",
"volume": "",
"issue": "",
"pages": "362--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of the NAACL/HLT, pages 362-370.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic identification of pro and con reasons in online reviews",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Soo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL",
"volume": "",
"issue": "",
"pages": "483--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2006. Automatic iden- tification of pro and con reasons in online reviews. In Proceedings of the COLING/ACL, pages 483-490.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the ACL, pages 74-81.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On the limited memory bfgs method for large scale optimization",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dong",
"middle": [
"C"
],
"last": "Nocedal",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming",
"volume": "45",
"issue": "",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong C. Liu, Jorge Nocedal, Dong C. Liu, and Jorge No- cedal. 1989. On the limited memory bfgs method for large scale optimization. Mathematical Programming, 45:503-528.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Structured models for fine-to-coarse sentiment analysis",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Kerry",
"middle": [],
"last": "Hannan",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Neylon",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Wells",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Reynar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "432--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Proceedings of the ACL, pages 432-439.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "271--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the ACL, pages 271-278.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL, pages 115-124.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Effective information extraction with semantic affinity patterns and relevant regions",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the EMNLP/CoNLL",
"volume": "",
"issue": "",
"pages": "717--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Patwardhan and Ellen Riloff. 2007. Effec- tive information extraction with semantic affinity pat- terns and relevant regions. In Proceedings of the EMNLP/CoNLL, pages 717-727.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and An- drew Y. Ng. 2008. Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multiple aspect ranking using the good grief algorithm",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the NAACL/HLT",
"volume": "",
"issue": "",
"pages": "300--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder and Regina Barzilay. 2007. Multiple aspect ranking using the good grief algorithm. In Pro- ceedings of the NAACL/HLT, pages 300-307.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Galileo",
"middle": [],
"last": "Namata",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "170--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improv- ing opinion polarity classification. In Proceedings of the EMNLP, pages 170-179.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional simi- larity. In Proceedings of the COLING, page 1015.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Just how mad are you? finding strong and weak opinion clauses",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the AAAI",
"volume": "",
"issue": "",
"pages": "761--769",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Rebecca Hwa. 2004. Just how mad are you? finding strong and weak opin- ion clauses. In Proceedings of the AAAI, pages 761- 769.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Sample labeled text from the multi-aspect sentiment corpus [R Big multifunction remote] with [R easy-toread keys]. The on-screen menu is [M easy to use] and you [M can rename the inputs] to one of several options (DVD, Cable, etc.). TV because the [V overall picture quality is good] and it's [A unbelievably thin]. [I Plenty of inputs], including [I 2 HDMI ports], which is [E unheard of in this price range]. (b) Sample labeled text from the Amazon multi-aspect summarization corpus [F All the ingredients are fresh], [V the sizes are huge] and [V the price is cheap]. This place rocks!] [V Pricey, but worth it] . [A The place is a pretty good size] and [S the staff is super friendly].",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Results on the Amazon corpus using the complete annotated set with varying amounts of additional unlabeled data. 7",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "An excerpt from a DVD review.",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "The error rate on the multi-aspect sentiment ranking. We report mean L 1 and L 2 between system prediction and true values over all aspects. Marked results are statistically significant with p < 0.05: * over the previous model and \u2020 over NoCM.",
"content": "<table><tr><td/><td/><td>L 1</td><td>L 2</td></tr><tr><td/><td>NoCM</td><td>1.37</td><td>3.15</td></tr><tr><td/><td colspan=\"3\">IndepCM 1.28 \u2020* 2.80 \u2020*</td></tr><tr><td/><td colspan=\"2\">JointCM 1.25 \u2020</td><td>2.65 \u2020*</td></tr><tr><td/><td>Gold</td><td colspan=\"2\">1.18 \u2020* 2.48 \u2020*</td></tr><tr><td colspan=\"2\">Table 3: F 1</td><td>F 2</td><td>Prec.</td><td>Recall</td></tr><tr><td>NoCM</td><td colspan=\"3\">28.8% 34.8% 22.4%</td><td>40.3%</td></tr><tr><td colspan=\"5\">IndepCM 37.9% 43.7% 31.1% \u2020* 48.6% \u2020*</td></tr><tr><td colspan=\"5\">JointCM 39.2% 44.4% 32.9% \u2020* 48.6% \u2020</td></tr><tr><td>). To</td><td/><td/><td/></tr><tr><td>compute ROUGE, we control for length by limiting</td><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "5% 23.8% 25.8% \u2020* 23.3% \u2020* 43.0% 41.8% 45.3% \u2020* 40.9% \u2020* 47.4% \u2020* JointCM 28.2% 31.3% 24.3% \u2020 33.7% \u2020* 47.8% 53.0% 41.2% \u2020 57.1% \u2020* 47.6% \u2020*",
"content": "<table><tr><td/><td/><td/><td>Multi-label</td><td/><td/><td/><td>Binary labels</td><td/></tr><tr><td/><td>F 1</td><td>F 2</td><td>Prec.</td><td>Recall</td><td>F 1</td><td>F 2</td><td>Prec.</td><td>Recall</td><td>ROUGE</td></tr><tr><td>NoCM</td><td colspan=\"3\">18.9% 18.0% 20.4%</td><td>17.5%</td><td colspan=\"3\">35.1% 33.6% 38.1%</td><td>32.6%</td><td>43.8%</td></tr><tr><td colspan=\"2\">IndepCM 24.</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}