{
"paper_id": "P14-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:05:54.909619Z"
},
"title": "Extracting Opinion Targets and Opinion Words from Online Reviews with Graph Co-ranking",
"authors": [
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "kliu@nlpr.ia.ac.cn"
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "lhxu@nlpr.ia.ac.cn"
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "jzhao@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Extracting opinion targets and opinion words from online reviews are two fundamental tasks in opinion mining. This paper proposes a novel approach to collectively extract them with graph co-ranking. First, in contrast to previous methods, which solely employed opinion relations among words, our method constructs a heterogeneous graph to model two types of relations: semantic relations and opinion relations. Next, a co-ranking algorithm is proposed to estimate the confidence of each candidate, and the candidates with higher confidence are extracted as opinion targets/words. In this way, different relations have cooperative effects on candidates' confidence estimation. Moreover, word preference is captured and incorporated into our co-ranking algorithm, making the co-ranking personalized: each candidate's confidence is determined mainly by its preferred collocations. This helps to improve extraction precision. Experimental results on three data sets with different sizes and languages show that our approach achieves better performance than state-of-the-art methods.",
"pdf_parse": {
"paper_id": "P14-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "Extracting opinion targets and opinion words from online reviews are two fundamental tasks in opinion mining. This paper proposes a novel approach to collectively extract them with graph co-ranking. First, in contrast to previous methods, which solely employed opinion relations among words, our method constructs a heterogeneous graph to model two types of relations: semantic relations and opinion relations. Next, a co-ranking algorithm is proposed to estimate the confidence of each candidate, and the candidates with higher confidence are extracted as opinion targets/words. In this way, different relations have cooperative effects on candidates' confidence estimation. Moreover, word preference is captured and incorporated into our co-ranking algorithm, making the co-ranking personalized: each candidate's confidence is determined mainly by its preferred collocations. This helps to improve extraction precision. Experimental results on three data sets with different sizes and languages show that our approach achieves better performance than state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In opinion mining, extracting opinion targets and opinion words are two fundamental subtasks. Opinion targets are the objects about which users' opinions are expressed, and opinion words are the words which indicate opinion polarities. Extracting them can provide essential information for fine-grained analysis of customers' opinions. Thus, it has attracted much attention (Hu and Liu, 2004b; Moghaddam and Ester, 2011; Mukherjee and Liu, 2012).",
"cite_spans": [
{
"start": 383,
"end": 402,
"text": "(Hu and Liu, 2004b;",
"ref_id": "BIBREF4"
},
{
"start": 403,
"end": 429,
"text": "Moghaddam and Ester, 2011;",
"ref_id": "BIBREF12"
},
{
"start": 430,
"end": 454,
"text": "Mukherjee and Liu, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, previous work usually employed a collective extraction strategy (Qiu et al., 2009; Hu and Liu, 2004b; Liu et al., 2013b). The intuition is that opinion words usually co-occur with opinion targets in sentences, and there is a strong modification relationship between them (called an opinion relation). If a word is an opinion word, the words with which it has opinion relations have a high probability of being opinion targets, and vice versa. In this way, extraction is performed alternately, with mutual reinforcement between opinion targets and opinion words. Although this strategy has been widely employed by previous approaches, it still has several limitations. 1) Only considering opinion relations is insufficient. Previous methods mainly focused on employing opinion relations among words for opinion target/word co-extraction. They investigated a series of techniques to improve opinion relation identification, such as nearest-neighbor rules (Liu et al., 2005), syntactic patterns (Popescu and Etzioni, 2005), word alignment models (Liu et al., 2013b; Liu et al., 2013a), etc. However, is merely employing opinion relations among words enough for opinion target/word extraction? We note that there are additional types of relations among words. For example, \"LCD\" and \"LED\" both denote the same aspect \"screen\" in the TV set domain, and they are topically related. We call such relations between homogeneous words semantic relations. If we know \"LCD\" to be an opinion target, \"LED\" is naturally an opinion target too. Intuitively, besides opinion relations, semantic relations may provide additional rich clues for indicating opinion targets/words. Which kind of relation is more effective for opinion target/word extraction? Is it beneficial to consider these two types of relations together? To the best of our knowledge, these problems have seldom been studied before (see Section 2).",
"cite_spans": [
{
"start": 77,
"end": 95,
"text": "(Qiu et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 96,
"end": 114,
"text": "Hu and Liu, 2004b;",
"ref_id": "BIBREF4"
},
{
"start": 115,
"end": 133,
"text": "Liu et al., 2013b)",
"ref_id": "BIBREF10"
},
{
"start": 994,
"end": 1012,
"text": "(Liu et al., 2005)",
"ref_id": "BIBREF7"
},
{
"start": 1034,
"end": 1060,
"text": "Popescu and Etzioni, 2005)",
"ref_id": "BIBREF17"
},
{
"start": 1085,
"end": 1103,
"text": "Liu et al., 2013b;",
"ref_id": "BIBREF10"
},
{
"start": 1104,
"end": 1122,
"text": "Liu et al., 2013a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2) Ignoring word preference. When employing opinion relations to perform mutually reinforcing extraction between opinion targets and opinion words, previous methods depended on opinion associations among words but seldom considered word preference. Word preference denotes a word's preferred collocations. Intuitively, the confidence of a candidate being an opinion target (opinion word) should mostly be determined by its preferred collocations rather than by all words having opinion relations with it. For example: \"This camera's price is expensive for me.\" \"Its price is good.\" \"Canon 40D has a good price.\" In these three sentences, \"price\" is modified by \"good\" more often than by \"expensive\". In the traditional extraction strategy, opinion associations are usually computed from co-occurrence frequency. Thus, \"good\" has a stronger opinion association with \"price\" than \"expensive\" does, and it would contribute more to determining whether \"price\" is an opinion target. This is unreasonable: \"expensive\" actually has more relatedness with \"price\" than \"good\" does, and \"expensive\" is more likely to be a preferred collocation of \"price\". The confidence of \"price\" being an opinion target should be influenced by \"expensive\" to a greater extent than by \"good\". We argue that the extraction will then be more precise. Figure 1: Heterogeneous Graph. OC means opinion word candidates; TC means opinion target candidates. Solid curves and dotted lines respectively denote semantic relations and opinion relations between two candidates. Thus, to resolve these two problems, we present a novel approach based on graph co-ranking. The collective extraction of opinion targets/words is performed in a co-ranking process. First, we operate over a heterogeneous graph to model semantic relations and opinion relations in a unified model. Specifically, our heterogeneous graph is composed of three subgraphs which model different relation types and candidates, as shown in Figure 1. The first subgraph G_tt represents semantic relations among opinion target candidates, and the second subgraph G_oo models semantic relations among opinion word candidates. The third part is a bipartite subgraph G_to, which models opinion relations between the two candidate types and connects the above two subgraphs. We then perform a random walk algorithm on G_tt, G_oo and G_to separately to estimate all candidates' confidences, and the candidates with confidence higher than a threshold are extracted as opinion targets/words. The results can reflect which type of relation is more useful for the extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 1949,
"end": 1955,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, a co-ranking algorithm, which incorporates the three separate random walks on G_tt, G_oo and G_to into a unified process, is proposed to perform candidate confidence estimation. Different relations may cooperatively affect candidate confidence estimation and generate more globally consistent ranking results. Moreover, we discover each candidate's preferences through topics. Such word preferences differ across candidates. We add word preference information into our algorithm to make the co-ranking personalized. A candidate's confidence then mainly absorbs contributions from its preferred collocations rather than from all of its neighbors with opinion relations, which is beneficial for improving extraction precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform experiments on real-world datasets from different languages and different domains. Results show that our approach effectively improves extraction performance compared to the state-of-the-art approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been many significant research efforts on opinion target/word extraction (at the sentence level and the corpus level). In sentence-level extraction, previous methods (Wu et al., 2009; Ma and Wan, 2010; Yang and Cardie, 2013) mainly aimed to identify all opinion target/word mentions in sentences. They regarded it as a sequence labeling task, where several classical models were used, such as CRFs and SVMs (Wu et al., 2009). This paper belongs to corpus-level extraction, and aims to generate a sentiment lexicon and a target list rather than to identify mentions in sentences. Most previous corpus-level methods adopted a co-extraction framework, where opinion targets and opinion words reinforce each other according to their opinion relations. Thus, improving opinion relation identification was their main focus. (Hu and Liu, 2004a) exploited nearest-neighbor rules to mine opinion relations among words. (Popescu and Etzioni, 2005) and (Qiu et al., 2011) designed syntactic patterns for this task. Later work improved Qiu's method by adopting specially designed patterns to increase recall. (Liu et al., 2013a; Liu et al., 2013b) employed word alignment models to capture opinion relations instead of syntactic parsing. The experimental results showed that these alignment-based methods are more effective than syntax-based approaches on informal online texts. However, all aforementioned methods only employed opinion relations for the extraction and ignored semantic relations among homogeneous candidates. Moreover, they all ignored word preference in the extraction process.",
"cite_spans": [
{
"start": 162,
"end": 179,
"text": "(Wu et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 180,
"end": 197,
"text": "Ma and Wan, 2010;",
"ref_id": "BIBREF11"
},
{
"start": 198,
"end": 220,
"text": "Yang and Cardie, 2013)",
"ref_id": "BIBREF24"
},
{
"start": 402,
"end": 419,
"text": "(Wu et al., 2009)",
"ref_id": "BIBREF23"
},
{
"start": 836,
"end": 855,
"text": "(Hu and Liu, 2004a)",
"ref_id": "BIBREF3"
},
{
"start": 928,
"end": 955,
"text": "(Popescu and Etzioni, 2005)",
"ref_id": "BIBREF17"
},
{
"start": 960,
"end": 978,
"text": "(Qiu et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 1116,
"end": 1134,
"text": "Liu et al., 2013a;",
"ref_id": "BIBREF9"
},
{
"start": 1135,
"end": 1153,
"text": "Liu et al., 2013b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In terms of considering semantic relations among words, our method is related to several approaches based on topic models (Zhao et al., 2010; Moghaddam and Ester, 2011; Moghaddam and Ester, 2012a; Moghaddam and Ester, 2012b; Mukherjee and Liu, 2012). The main goal of these methods was not to extract opinion targets/words, but to categorize given aspect terms and sentiment words. Although these models could be applied to our task through the associations between candidates and topics, solely employing semantic relations is still one-sided and insufficient to obtain the expected performance.",
"cite_spans": [
{
"start": 226,
"end": 250,
"text": "Mukherjee and Liu, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Furthermore, there is little work which considered these two types of relations globally (Su et al., 2008; Hai et al., 2012; Bross and Ehrig, 2013). These methods usually captured different relations using co-occurrence information, which is too coarse to obtain the expected results. In addition, (Hai et al., 2012) extracted opinion targets/words in a bootstrapping process, which suffers from error propagation. In contrast, we perform extraction with a global graph co-ranking process, where error propagation can be effectively alleviated. (Su et al., 2008) used heterogeneous relations to find implicit sentiment associations among words. Their aim was only to categorize aspect terms, not to extract opinion targets/words: they extracted opinion targets/words in advance through simple phrase detection. Thus, their extraction performance falls short of expectations.",
"cite_spans": [
{
"start": 89,
"end": 106,
"text": "(Su et al., 2008;",
"ref_id": "BIBREF20"
},
{
"start": 107,
"end": 124,
"text": "Hai et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 125,
"end": 147,
"text": "Bross and Ehrig, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 286,
"end": 304,
"text": "(Hai et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 535,
"end": 552,
"text": "(Su et al., 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe our method in detail. We formulate opinion target/word extraction as a co-ranking task. All nouns/noun phrases are regarded as opinion target candidates, and all adjectives/verbs are regarded as opinion word candidates, a setting widely adopted by previous methods (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008). Each candidate is then assigned a confidence and ranked, and the candidates with confidence higher than a threshold are extracted as results.",
"cite_spans": [
{
"start": 295,
"end": 314,
"text": "(Hu and Liu, 2004a;",
"ref_id": "BIBREF3"
},
{
"start": 315,
"end": 332,
"text": "Qiu et al., 2011;",
"ref_id": "BIBREF19"
},
{
"start": 333,
"end": 353,
"text": "Wang and Wang, 2008;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "Unlike traditional methods, our approach captures not only opinion relations among words but also semantic relations among homogeneous candidates. To this end, a heterogeneous undirected graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "G = (V, E) is constructed. V = V_t \u222a V_o denotes the vertex set, which includes opinion target candidates v_t \u2208 V_t and opinion word candidates v_o \u2208 V_o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": ". E denotes the edge set, where e_{ij} \u2208 E means that there is a relation between two vertices. E_tt \u2282 E represents the semantic relations between two opinion target candidates. E_oo \u2282 E represents the semantic relations between two opinion word candidates. E_to \u2282 E represents the opinion relations between opinion target candidates and opinion word candidates. Based on the different relation types, we use three matrices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "M_tt \u2208 R^{|V_t|\u00d7|V_t|}, M_oo \u2208 R^{|V_o|\u00d7|V_o|} and M_to \u2208 R^{|V_t|\u00d7|V_o|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
{
"text": "to record the association weights between any two vertices, respectively. Section 3.4 will illustrate how to construct them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Method",
"sec_num": "3"
},
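The three association matrices can be pictured concretely. A minimal NumPy sketch with made-up toy candidates and weights (all names and values below are illustrative assumptions, not taken from the paper's data or its Section 3.4 construction):

```python
import numpy as np

# Hypothetical toy vocabularies: 2 opinion target candidates (V_t) and
# 2 opinion word candidates (V_o).
targets = ["price", "screen"]
opinions = ["expensive", "clear"]

# M_tt / M_oo: semantic-relation weights among homogeneous candidates;
# M_to: opinion-relation weights between targets and opinion words.
M_tt = np.array([[0.0, 0.3],
                 [0.3, 0.0]])   # shape |V_t| x |V_t|, symmetric
M_oo = np.array([[0.0, 0.2],
                 [0.2, 0.0]])   # shape |V_o| x |V_o|, symmetric
M_to = np.array([[0.7, 0.1],
                 [0.1, 0.6]])   # shape |V_t| x |V_o|, bipartite
```

The two square matrices are symmetric because the graph is undirected; the bipartite matrix M_to links the two candidate types.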
{
"text": "To estimate the confidence of each candidate, we use a random walk algorithm on our graph to perform co-ranking. Most previous methods (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008) considered only opinion relations among words. Their basic assumption is as follows.",
"cite_spans": [
{
"start": 135,
"end": 154,
"text": "(Hu and Liu, 2004a;",
"ref_id": "BIBREF3"
},
{
"start": 155,
"end": 172,
"text": "Qiu et al., 2011;",
"ref_id": "BIBREF19"
},
{
"start": 173,
"end": 193,
"text": "Wang and Wang, 2008;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "Assumption 1: If a word is likely to be an opinion word, the words with which it has opinion relations will have higher confidence to be opinion targets, and vice versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "In this way, the confidences of candidates (v_t or v_o) are collectively and iteratively determined by each other. This amounts to a random walk on the subgraph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "G_to = (V, E_to) of G. Thus we have C_t = (1 \u2212 \u00b5) \u00d7 M_to \u00d7 C_o + \u00b5 \u00d7 I_t ; C_o = (1 \u2212 \u00b5) \u00d7 M_to^T \u00d7 C_t + \u00b5 \u00d7 I_o (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "where C_t and C_o respectively represent the confidences of opinion targets and opinion words. m^to_{i,j} \u2208 M_to denotes the association weight between the ith opinion target and the jth opinion word according to their opinion relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "It is worth noting that I_t and I_o respectively denote the prior confidences of opinion target candidates and opinion word candidates. We argue that opinion targets are usually domain-specific, and their distributions differ remarkably across domains (in-domain D_in vs. out-domain D_out). If a candidate is salient in D_in but common in D_out, it is likely to be an opinion target in D_in. Thus, we use a domain relevance measure (DR) (Hai et al., 2013) to compute I_t.",
"cite_spans": [
{
"start": 456,
"end": 474,
"text": "(Hai et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
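Eq. 1 can be read as an iterative mutual-reinforcement update. A minimal sketch, with toy weights and priors (the matrix values and the column normalization below are our assumptions for keeping the toy iteration bounded, not the paper's Section 3.4 construction):

```python
import numpy as np

def opinion_relation_ranking(M_to, I_t, I_o, mu=0.3, iters=200):
    """Iterate Eq. 1: C_t and C_o reinforce each other through M_to,
    while mu pulls each toward its prior confidence vector."""
    C_t, C_o = I_t.astype(float), I_o.astype(float)
    for _ in range(iters):
        C_t = (1 - mu) * M_to @ C_o + mu * I_t
        C_o = (1 - mu) * M_to.T @ C_t + mu * I_o
    return C_t, C_o

# Toy 2x2 opinion-association matrix (hypothetical values),
# column-normalized so the iteration stays bounded.
M = np.array([[0.6, 0.4],
              [0.2, 0.8]])
M = M / M.sum(axis=0, keepdims=True)

# Target 0 has the higher domain-relevance prior; opinion priors are uniform.
C_t, C_o = opinion_relation_ranking(M, np.array([0.9, 0.1]),
                                    np.array([1.0, 1.0]))
```

In this toy run the target with the higher prior keeps the higher final confidence, since the prior is re-injected at every step with weight mu.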
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "DR(t) = R(t, D_in) / R(t, D_out)",
"eq_num": "(2)"
}
],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "R(t, D) = (w\u0304_t / s_t) \u00d7 \u03a3_{j=1}^{N} (w_{tj} \u2212 (1/W_j) \u00d7 \u03a3_{k=1}^{W_j} w_{kj}) represents the relevance of a candidate to domain D. w_{tj} = (1 + log TF_{tj}) \u00d7 log(N / DF_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "is a TF-IDF-like weight of candidate t in document j. TF_{tj} is the frequency of candidate t in the jth document, and DF_t is its document frequency. N is the number of documents in domain D. R(t, D) includes two measures to reflect the salience of a candidate in D. 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "w_{tj} \u2212 (1/W_j) \u00d7 \u03a3_{k=1}^{W_j}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "w_{kj} reflects how frequently a term is mentioned in a particular document, where W_j denotes the number of words in document j. 2) w\u0304_t / s_t quantifies how significantly a term is mentioned across all documents in D. w\u0304_t = (1/N) \u00d7 \u03a3_{k=1}^{N} w_{tk} denotes the average weight across all documents for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "t. s t = 1 N \u00d7 N j=1 (w tj \u2212w j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
{
"text": "^2 denotes the variance of term t. We use the given reviews as the in-domain collection D_in and the Google n-gram corpus (http://books.google.com/ngrams/datasets) as the out-domain collection D_out. Finally, each entry in I_t is a normalized DR(t) score. In contrast, opinion words are usually domain-independent: users may use the same words to express their opinions, like \"good\", \"bad\", etc. But there are still some domain-dependent opinion words, like \"delicious\" in the restaurant domain and \"powerful\" in the car domain. It is difficult to discriminate them from other words using statistical information, so we simply set all entries in I_o to 1. \u00b5 \u2208 [0, 1] in Eq. 1 determines the impact of the prior confidence on the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Opinion Relations",
"sec_num": "3.1"
},
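The domain-relevance prior can be sketched in code. This is a rough illustration under our reading of the formulas above (dispersion term w\u0304_t / s_t, deviation summed over documents); the function names and toy corpora are hypothetical:

```python
import math
from collections import Counter

def doc_weights(docs):
    """w_tj = (1 + log TF_tj) * log(N / DF_t) for each term in each document."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: (1 + math.log(c)) * math.log(n / df[t])
             for t, c in Counter(d).items()} for d in docs]

def relevance(term, docs):
    """R(t, D): how salient `term` is in the corpus `docs` (lists of tokens)."""
    n = len(docs)
    w = doc_weights(docs)
    col = [w[j].get(term, 0.0) for j in range(n)]
    mean_w = sum(col) / n
    std = math.sqrt(sum((x - mean_w) ** 2 for x in col) / n) or 1e-9
    # deviation: how much the term's weight exceeds each document's average
    dev = sum(col[j] - sum(w[j].values()) / max(len(w[j]), 1) for j in range(n))
    return (mean_w / std) * dev

def domain_relevance(term, in_docs, out_docs):
    """DR(t) = R(t, D_in) / R(t, D_out); each entry of I_t is a
    normalized DR score."""
    return relevance(term, in_docs) / max(relevance(term, out_docs), 1e-9)

# Tiny made-up in-domain and out-domain corpora.
in_docs = [["camera", "lens"], ["camera", "price"], ["screen", "price"]]
out_docs = [["car", "wheel"], ["road", "car"], ["wheel", "road"]]
dr = domain_relevance("camera", in_docs, out_docs)
```

A real implementation would use the full review corpus for D_in and the Google n-gram counts for D_out; the tiny lists here only exercise the arithmetic.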
{
"text": "To estimate candidates' confidences by considering only semantic relations among words, we make two separate random walks on the subgraphs G_tt = (V, E_tt) and G_oo = (V, E_oo) of G. The basic assumption is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Semantic Relations",
"sec_num": "3.2"
},
{
"text": "Assumption 2: If a word is likely to be an opinion target (opinion word), the words with which it has strong semantic relations will have higher confidence to be opinion targets (opinion words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Semantic Relations",
"sec_num": "3.2"
},
{
"text": "In this way, the confidence of the candidate is determined only by its homogeneous neighbours. There is no mutual reinforcement between opinion targets and opinion words. Thus we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Semantic Relations",
"sec_num": "3.2"
},
{
"text": "C_t = (1 \u2212 \u03bd) \u00d7 M_tt \u00d7 C_t + \u03bd \u00d7 I_t ; C_o = (1 \u2212 \u03bd) \u00d7 M_oo \u00d7 C_o + \u03bd \u00d7 I_o (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Semantic Relations",
"sec_num": "3.2"
},
{
"text": "where \u03bd has the same role as \u00b5 in Eq.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Only Considering Semantic Relations",
"sec_num": "3.2"
},
{
"text": "To jointly model semantic relations and opinion relations for opinion target/word extraction, we couple the two random walk algorithms mentioned above. Here, Assumption 1 and Assumption 2 are both satisfied. Thus, an opinion target/word candidate's confidence is collectively determined by its neighbours according to the different relation types. Meanwhile, each item may influence its neighbours. It is an iterative reinforcement process. Thus, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "C_t = (1 \u2212 \u03bb \u2212 \u00b5) \u00d7 M_to \u00d7 C_o + \u03bb \u00d7 M_tt \u00d7 C_t + \u00b5 \u00d7 I_t ; C_o = (1 \u2212 \u03bb \u2212 \u00b5) \u00d7 M_to^T \u00d7 C_t + \u03bb \u00d7 M_oo \u00d7 C_o + \u00b5 \u00d7 I_o (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "where \u03bb \u2208 [0, 1] determines which type of relation dominates candidate confidence estimation. \u03bb = 0 means that each candidate's confidence is estimated by considering only opinion relations among words, which equals Eq. 1. Conversely, when \u03bb = 1, candidate confidence estimation considers only semantic relations among words, which equals Eq. 3. \u00b5, I_o and I_t have the same meanings as in Eq. 1. Our algorithm runs iteratively until it converges or reaches a fixed number of iterations Iter. In experiments, we set Iter = 200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
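The full co-ranking loop of Eq. 4 can be sketched as follows. The toy matrices, priors, and the L1 convergence check are our own illustrative assumptions; the paper only specifies the update itself and the iteration cap:

```python
import numpy as np

def co_rank(M_to, M_tt, M_oo, I_t, I_o, lam=0.3, mu=0.3, iters=200, tol=1e-9):
    """Eq. 4: each candidate's confidence mixes opinion-relation neighbours
    (weight 1 - lam - mu), semantic neighbours (lam) and its prior (mu).
    lam = 0 recovers the opinion-relation-only walk of Eq. 1."""
    C_t, C_o = I_t.astype(float), I_o.astype(float)
    for _ in range(iters):
        new_t = (1 - lam - mu) * M_to @ C_o + lam * M_tt @ C_t + mu * I_t
        new_o = (1 - lam - mu) * M_to.T @ C_t + lam * M_oo @ C_o + mu * I_o
        done = np.abs(new_t - C_t).sum() + np.abs(new_o - C_o).sum() < tol
        C_t, C_o = new_t, new_o
        if done:
            break
    return C_t, C_o

# Toy row-stochastic matrices (hypothetical values): uniform opinion
# relations, and semantic relations that swap the two candidates.
M_to = np.full((2, 2), 0.5)
M_tt = M_oo = np.array([[0.0, 1.0],
                        [1.0, 0.0]])
C_t, C_o = co_rank(M_to, M_tt, M_oo, np.array([0.9, 0.1]),
                   np.array([1.0, 1.0]))
```

Because 1 - lam - mu + lam = 1 - mu < 1, the toy update is a contraction and the loop settles; the prior gap between the two targets survives at the fixed point.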
{
"text": "Obtaining Word Preference. The co-ranking algorithm in Eq. 4 is based on a standard random walk, which randomly follows a link according to the association matrices M_to, M_tt and M_oo, or jumps to a random node with its prior confidence value. However, it generates a global ranking over all candidates without taking node preference (word preference) into account. As mentioned in the first section, each opinion target/word has its preferred collocations, so it is reasonable that the confidence of an opinion target (opinion word) candidate should be preferentially determined by its preferred collocations rather than by all of its neighbors with opinion relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "To obtain word preference, we resort to topics. We believe that if an opinion word v^o_i is topically related to a target word v^t_j, then v^o_i can be regarded as a preferred collocation of v^t_j, and vice versa. For example, \"price\" and \"expensive\" are topically related in the phone domain, so they are word preferences for each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "Specifically, we use a vector P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "^T_i = [P^T_{i,1}, ..., P^T_{i,k}, ..., P^T_{i,|V_o|}]_{1\u00d7|V_o|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "to represent the word preference of the ith opinion target candidate. P^T_{i,k} is the preference probability of the ith potential opinion target for the kth potential opinion word. To compute P^T_{i,k}, we first use Kullback-Leibler divergence to measure the semantic distance between any two candidates through topics. Thus, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "D(v_i, v_j) = (1/2) \u00d7 \u03a3_z (KL_z(v_i||v_j) + KL_z(v_j||v_i))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "KL_z(v_i||v_j) = p(z|v_i) \u00d7 log(p(z|v_i) / p(z|v_j)) denotes the KL-divergence from candidate v_i to v_j based on topic z. p(z|v) = p(v|z) \u00d7 p(z) / p(v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": ", where p(v|z) is the probability of candidate v given topic z (see Section 3.4), p(z) is the probability of topic z in the reviews, and p(v) is the probability that a candidate occurs in the reviews. Then, a logistic function is used to map D(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v_i, v_j) into [0, 1]. SA(v_i, v_j) = 1 / (1 + e^{D(v_i, v_j)})",
"eq_num": "(5)"
}
],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "Then, we calculate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "P^T_{i,k} by normalizing the SA(v_i, v_j) scores, i.e., P^T_{i,k} = SA(v^t_i, v^o_k) / \u03a3_{p=1}^{|V_o|} SA(v^t_i, v^o_p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
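The preference computation above (symmetrized per-topic KL distance, logistic squashing, then normalization) can be sketched as follows. The topic posteriors are made-up toy values and must be strictly positive for the logarithms to be defined:

```python
import math

def sym_topic_kl(p_i, p_j):
    """D(v_i, v_j) = 1/2 * sum_z (KL_z(v_i||v_j) + KL_z(v_j||v_i)).

    p_i, p_j: topic posteriors p(z|v) as lists of positive probabilities.
    """
    return 0.5 * sum(a * math.log(a / b) + b * math.log(b / a)
                     for a, b in zip(p_i, p_j))

def sa(p_i, p_j):
    """Eq. 5: logistic mapping of the distance into (0, 1)."""
    return 1.0 / (1.0 + math.exp(sym_topic_kl(p_i, p_j)))

def target_preferences(target_topics, opinion_topics):
    """P^T_i: SA scores of one target against all opinion words, normalized."""
    scores = [sa(target_topics, o) for o in opinion_topics]
    total = sum(scores)
    return [s / total for s in scores]

# Toy p(z|v) posteriors over two topics: "price" is topically close to the
# first opinion word and far from the second (hypothetical numbers).
price = [0.9, 0.1]
pref = target_preferences(price, [[0.85, 0.15], [0.2, 0.8]])
```

A smaller topic distance yields a larger SA score, so after normalization the topically closer opinion word receives the larger preference weight.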
{
"text": ". For demonstration, we give some examples in Table 1, where each entry denotes an SA(v_i, v_j) score between two candidates. We can see that using topics successfully captures the preference information for each opinion target/word. Similarly, we use a vector P^O_j",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "= [P^O_{j,1}, ..., P^O_{j,q}, ..., P^O_{j,|V_t|}]_{1\u00d7|V_t|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "to represent the preference information of the jth opinion word candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "Similarly, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "P O j q = SA(v t q ,v o j ) |V t | k=1 SA(v t k ,v o j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": ".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "Incorporating Word Preference into Coranking. To consider such word preference in our co-ranking algorithm, we incorporate it into the random walking on G to . Intuitively, preference vectors will be different for different candidates. Thus, the co-ranking algorithm would be personalized. It allows that the candidate confidence propagates to other candidates only in its preference cluster. Specifically, we make modification on original transition matrix M to = (M to 1 , M to 2 , ..., M to |V t | ) and add each candidate's preference in it. LetM to = (M to 1 ,M to 2 , ...,M to |V t | ) be the modified transition matrix, which records the associations between opinion target candidates and opinion word candidates. Here M to k \u2208 R 1\u00d7|V o | andM to k \u2208 R 1\u00d7|V o | denotes the kth column vector in M to andM to , respectively. And let Diag(P T k ) denote a diagonal matrix whose eigenvalue is vector P T k , we hav\u00ea",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "M to k = M to k Diag(P T k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "Similarly, let U to k \u2208 R 1\u00d7|V t | and\u00db to k \u2208 R 1\u00d7|V t | denotes the kth row vector in M T to andM T to , respectively. Diag(P O k ) denote a diagonal matrix whose eigenvalue is vector P O k . Then we hav\u00ea",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "U to k = U to k Diag(P O k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
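Since multiplying a row vector by Diag(PT_k) is element-wise scaling, the construction of the personalized matrix can be sketched as follows. Shapes and values are invented for illustration, and the row renormalization is our assumption (to keep the walk stochastic), not a step the paper spells out.

```python
import numpy as np

def personalize(M_to, PT):
    """Apply M_hat^{to}_k = M^{to}_k Diag(PT_k): scale each target
    candidate's association row element-wise by its preference vector,
    then renormalize rows (our assumption, to keep the walk stochastic)."""
    M_hat = M_to * PT  # row k times Diag(PT_k) == element-wise product
    row_sums = M_hat.sum(axis=1, keepdims=True)
    return M_hat / np.where(row_sums > 0, row_sums, 1.0)

# 2 target candidates x 3 opinion word candidates (hypothetical values).
M_to = np.array([[0.5, 0.3, 0.2],
                 [0.2, 0.2, 0.6]])
PT = np.array([[0.8, 0.1, 0.1],   # target 0 prefers opinion word 0
               [0.1, 0.1, 0.8]])  # target 1 prefers opinion word 2
M_hat = personalize(M_to, PT)
assert np.allclose(M_hat.sum(axis=1), 1.0)
assert M_hat[0].argmax() == 0 and M_hat[1].argmax() == 2
```

The effect is that each candidate's outgoing probability mass concentrates on its preferred collocations, which is exactly what makes the subsequent walk personalized.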
{
"text": "In this way, each candidate's preference is incorporated into original associations based on opinion relation M to through Diag(P O k ) and Diag(P T k ). And candidates' confidences will mainly come from the contributions of its preferences. Thus, C t and C o in Eq.4 become:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
{
"text": "C t = (1 \u2212 \u03bb \u2212 \u00b5) \u00d7M to \u00d7 C o + \u03bb \u00d7 M tt \u00d7 C t + \u00b5 \u00d7 I t C o = (1 \u2212 \u03bb \u2212 \u00b5) \u00d7M T to \u00d7 C t + \u03bb \u00d7 M oo \u00d7 C o + \u00b5 \u00d7 I o (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considering Semantic Relations and Opinion Relations Together",
"sec_num": "3.3"
},
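Eq. 6 can be read as a pair of coupled power-iteration updates over the two candidate sets. The sketch below iterates them to a fixed point; the normalization step, the tolerance, and the reuse of the transposed matrix for the second update (the paper personalizes the transpose separately through Diag(PO_k)) are our simplifying assumptions.

```python
import numpy as np

def co_rank(M_hat_to, M_tt, M_oo, I_t, I_o, lam=0.4, mu=0.1,
            tol=1e-9, max_iter=1000):
    """Iterate the coupled updates of Eq. 6 until both confidence
    vectors C_t (targets) and C_o (opinion words) converge."""
    n_t, n_o = M_hat_to.shape
    C_t, C_o = np.full(n_t, 1.0 / n_t), np.full(n_o, 1.0 / n_o)
    w = 1.0 - lam - mu
    for _ in range(max_iter):
        C_t_new = w * M_hat_to @ C_o + lam * M_tt @ C_t + mu * I_t
        # NOTE: the paper applies a separately personalized transpose
        # (via Diag(PO_k)); we reuse M_hat_to.T here for brevity.
        C_o_new = w * M_hat_to.T @ C_t + lam * M_oo @ C_o + mu * I_o
        C_t_new /= C_t_new.sum()  # normalization is our choice
        C_o_new /= C_o_new.sum()
        done = (np.abs(C_t_new - C_t).max() < tol
                and np.abs(C_o_new - C_o).max() < tol)
        C_t, C_o = C_t_new, C_o_new
        if done:
            break
    return C_t, C_o

# Toy graph: 2 target candidates, 2 opinion word candidates.
M_hat_to = np.array([[0.8, 0.2], [0.3, 0.7]])
M_tt = np.array([[0.9, 0.1], [0.1, 0.9]])
M_oo = np.array([[0.9, 0.1], [0.1, 0.9]])
I_t = np.array([0.9, 0.1])   # prior: target 0 is likely a true target
I_o = np.array([0.5, 0.5])
C_t, C_o = co_rank(M_hat_to, M_tt, M_oo, I_t, I_o)
assert abs(C_t.sum() - 1.0) < 1e-6 and abs(C_o.sum() - 1.0) < 1e-6
assert C_t[0] > C_t[1]  # the prior propagates through the graph
```

The default lam=0.4, mu=0.1 mirrors the parameter setting used in the experiments (Section 4.2).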
{
"text": "In this section, we explain how to capture semantic relations and opinion relations for constructing transition matrices M tt , M oo and M to . Capturing Semantic Relations: For capturing semantic relations among homogenous candidates, we employ topics. We believe that if two candidates share similar topics in the corpus, there is a strong semantic relation between them. Thus, we employ a LDA variation (Mukherjee and Liu, 2012) , an extension of (Zhao et al., 2010), to discover topic distribution on words, which sampled all words into two separated observations: opinion targets and opinion words. It's because that we are only interested in topic distribution of opinion targets/words, regardless of other useless words, including conjunctions, prepositions etc. This model has been proven to be better than the standard LDA model and other LDA variations for opinion mining (Mukherjee and Liu, 2012) .",
"cite_spans": [
{
"start": 406,
"end": 431,
"text": "(Mukherjee and Liu, 2012)",
"ref_id": "BIBREF16"
},
{
"start": 882,
"end": 907,
"text": "(Mukherjee and Liu, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing Semantic and Opinion Relations",
"sec_num": "3.4"
},
{
"text": "After topic modeling, we obtain the probability of the candidates (v t and v o ) to topic z, i.e. p(z|v t ) and p(z|v o ), and topic distribution p(z). Then, a symmetric Kullback-Leibler divergence as same as Eq.5 is used to calculate the semantical associations between any two homogenous candidates. Thus, we obtain SA(v t , v t ) and SA(v o , v o ), which correspond to the entries in M tt and M oo , respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing Semantic and Opinion Relations",
"sec_num": "3.4"
},
{
"text": "Capturing Opinion Relations: To capture opinion relations among words and construct the transition matrix M to , we used an alignmentbased method proposed in (Liu et al., 2013b) . This approach models capturing opinion relations as a monolingual word alignment process. Each opinion target can find its corresponding modifiers in sentences through alignment, in which multiple factors are considered globally, such as co-occurrence information, word position in sentence, etc. Moreover, this model adopted a partially supervised framework to combine syntactic information with alignment results, which has been proven to be more precise than the state-ofthe-art approaches for opinion relations identification (Liu et al., 2013b) .",
"cite_spans": [
{
"start": 158,
"end": 177,
"text": "(Liu et al., 2013b)",
"ref_id": "BIBREF10"
},
{
"start": 710,
"end": 729,
"text": "(Liu et al., 2013b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing Semantic and Opinion Relations",
"sec_num": "3.4"
},
{
"text": "After performing word alignment, we obtain a set of word pairs composed of a noun (noun phrase) and its corresponding modified word. Then, we simply employ Pointwise Mutual Information (PMI) to calculate the opinion associations among words as the entries in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing Semantic and Opinion Relations",
"sec_num": "3.4"
},
{
"text": "M to . OA(v t , v o ) = log p(v t ,v o ) p(v t )p(v o )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing Semantic and Opinion Relations",
"sec_num": "3.4"
},
{
"text": ", where v t and v o denote an opinion target candidate and an opinion word candidate, respectively. p(v t , v o ) is the co-occurrence probability of v t and v o based on the opinion relation identification results. p(v t ) and p(v o ) give the independent occurrence probability of of v t and v o , respectively 4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing Semantic and Opinion Relations",
"sec_num": "3.4"
},
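The PMI scores that fill M^{to} can be computed directly from the aligned word-pair counts. A small sketch, with invented pair counts standing in for the alignment output:

```python
import math
from collections import Counter

def pmi_scores(pairs):
    """OA(v_t, v_o) = log( p(v_t, v_o) / (p(v_t) p(v_o)) ), estimated
    from the aligned (target, opinion word) pairs."""
    pair_counts = Counter(pairs)
    t_counts = Counter(t for t, _ in pairs)
    o_counts = Counter(o for _, o in pairs)
    n = len(pairs)
    return {(t, o): math.log((c / n) / ((t_counts[t] / n) * (o_counts[o] / n)))
            for (t, o), c in pair_counts.items()}

# Toy aligned pairs, as if produced by the word alignment step.
pairs = [("screen", "clear"), ("screen", "clear"), ("battery", "long"),
         ("battery", "long"), ("screen", "long")]
oa = pmi_scores(pairs)
assert oa[("screen", "clear")] > 0 > oa[("screen", "long")]
```

A positive score indicates the pair co-occurs more often than chance, which is what makes it a plausible opinion relation.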
{
"text": "Datasets: To evaluate the proposed method, we used three datasets. The first one is Customer Review Datasets (CRD), used in (Hu and Liu, 2004a) , which contains reviews about five products. The second one is COAE2008 dataset2 2 , which contains Chinese reviews about four products. The third one is Large, also used in (Wang et al., 2011; Liu et al., 2013a) , where two domains are selected (Mp3 and Hotel). As mentioned in , Large contains 6,000 sentences for each domain. Opinion targets/words are manually annotated, where three annotators were involved. Two annotators were required to annotate out opinion words/targets in reviews. When conflicts occur, the third annotator make final judgement. In total, we respectively obtain 1,112, 1,241 opinion targets and 334, 407 opinion words in Hotel, MP3.",
"cite_spans": [
{
"start": 124,
"end": 143,
"text": "(Hu and Liu, 2004a)",
"ref_id": "BIBREF3"
},
{
"start": 319,
"end": 338,
"text": "(Wang et al., 2011;",
"ref_id": "BIBREF22"
},
{
"start": 339,
"end": 357,
"text": "Liu et al., 2013a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "Pre-processing: All sentences are tagged to obtain words' part-of-speech tags using Stanford NLP tool 3 . And noun phrases are identified using the method in (Zhu et al., 2009) before extraction.",
"cite_spans": [
{
"start": 158,
"end": 176,
"text": "(Zhu et al., 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "Evaluation Metrics: We select precision(P), recall(R) and f-measure(F) as metrics. And a significant test is performed, i.e., a t-test with a default significant level of 0.05.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
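As a concrete reading of these metrics, precision, recall and F-measure over sets of extracted versus gold-standard terms can be computed as follows (a standard sketch, not code from the paper; the example terms are hypothetical):

```python
def prf(extracted, gold):
    """Precision, recall and F-measure for set-based term extraction."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)
    p = tp / len(extracted) if extracted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical extraction vs. gold standard.
p, r, f = prf({"screen", "battery", "price"}, {"screen", "battery", "sound"})
assert (p, r) == (2 / 3, 2 / 3)
assert abs(f - 2 / 3) < 1e-12
```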
{
"text": "To prove the effectiveness of the proposed method, we select some state-of-the-art methods for comparison as follows: Hu extracted opinion targets/words using association mining rules (Hu and Liu, 2004a) .",
"cite_spans": [
{
"start": 184,
"end": 203,
"text": "(Hu and Liu, 2004a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "DP used syntax-based patterns to capture opinion relations in sentences, and then used a bootstrapping process to extract opinion targets/words (Qiu et al., 2011) ,.",
"cite_spans": [
{
"start": 144,
"end": 162,
"text": "(Qiu et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "Zhang is proposed by . They also used syntactic patterns to capture opinion relations between words. Then a HITS (Kleinberg, 1999) algorithm is employed to extract opinion targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "Liu is proposed by (Liu et al., 2013a) , an extension of . They employed a word alignment model to capture opinion relations among words, and then used a random walking algorithm to extract opinion targets.",
"cite_spans": [
{
"start": 19,
"end": 38,
"text": "(Liu et al., 2013a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "Hai is proposed by (Hai et al., 2012) , which is similar to our method. They employed both of semantic relations and opinion relations to extract opinion words/targets in a bootstrapping framework. But they captured relations only using cooccurrence statistics. Moreover, word preference was not considered.",
"cite_spans": [
{
"start": 19,
"end": 37,
"text": "(Hai et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "SAS is proposed by (Mukherjee and Liu, 2012) , an extended lda-based model of (Zhao et al., 2010). The top K items for each aspect are extracted as opinion targets/words. It means that only semantic relations among words are considered in SAS. And we set aspects number to be 9 as same as (Mukherjee and Liu, 2012) .",
"cite_spans": [
{
"start": 19,
"end": 44,
"text": "(Mukherjee and Liu, 2012)",
"ref_id": "BIBREF16"
},
{
"start": 289,
"end": 314,
"text": "(Mukherjee and Liu, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "CR: is the proposed method in this paper by using co-ranking, referring to Eq.4. CR doesn't consider word preference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "CR WP: is the full implementation of our method, referring to Eq.6. Hu, DP, Zhang and Liu are the methods which only consider opinion relations among words. SAS is the methods which only consider semantic relations among words. Hai, CR and CR WP consider these two types of relations together. The parameter settings of state-of-the-art methods are same as their original paper. In CR and CR WP, we set \u03bb = 0.4 and \u00b5 = 0.1. The experimental results are shown in Table 2 , 3, 4 and 5, where the last column presents the average F-measure scores for multiple domains. Since Liu and Zhang aren't designed for opinion words extraction, we don't present their results in Table 4 and 5. From experimental results, we can see.",
"cite_spans": [],
"ref_spans": [
{
"start": 462,
"end": 469,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 666,
"end": 673,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "1) Our methods (CR and CR WP) outperform other methods not only on opinion targets extraction but on opinion words extraction in most domains. It proves the effectiveness of the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "2) CR and CR WP have much better performance than Liu and Zhang, especially on Recall. Liu and Zhang also use a ranking framework like ours, but they only employ opinion relations for extraction. In contrast, besides opinion relations, CR and CR WP further take semantic relations into account. Thus, more opinion targets/words can be extracted. Furthermore, we observe that CR and CR WP outperform SAS. SAS only exploits semantic relations, but ignores opinion relations among words. Its extraction is performed separately and neglects the reinforcement between opinion targets and opinion words. Thus, SAS has worse performance than our methods. It demonstrates the usefulness of considering multiple relation types. 3) CR and CR WP both outperform Hai. We believe the reasons are as follows. First, CR and CR WP considers multiple relations in a unified process by using graph co-ranking. In contrast, Hai adopts a bootstrapping framework which performs extraction step by step and may have the problem of error propagation. It demonstrates that our graph co-ranking is more suitable for this task than bootstrapping-based strategy. Second, our method captures semantic relations using topic modeling and captures opinion relations through word alignments, which are more precise than Hai which merely uses co-occurrence information to indicate such relations among words. In addition, word preference is not handled in Hai, but processed in CR WP. The results show the usefulness of word preference for opinion targets/words extraction. 4) CR WP outperforms CR, especially on precision. The only difference between them is that CR WP considers word preference when performing graph ranking for candidate confidence estimation, but CR does not. Each candidate confidence estimation in CR WP gives more weights for this candidate's preferred words than CR. Thus, the precision can be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method vs. The State-of-the-art Methods",
"sec_num": "4.2"
},
{
"text": "In this section, we discuss which relation type is more effective for this task. For comparison, we design two baselines, called OnlySA and On-lyOA. OnlyOA only employs opinion relations among words, which equals to Eq.1. OnlySA only employs semantic relations among words, which equals to Eq.3. Moreover, Combine is our method which considers both of opinion relations and semantic relations together, referring to Eq.4 with The left graph presents opinion targets extraction results and the right graph presents opinion words extraction results. Because of space limitation, we only shown the results of four domains (MP3, Hotel, Laptop and Phone). From results, we observe that OnlyOA outperforms OnlySA in all domains. It demonstrates that employing opinion relations are more useful than semantic relations for co-extracting opinion targets/words. And it is necessary to utilize the mutual reinforcement relationship between opinion words and opinion targets. Moreover, Combine outperforms OnlySA and OnlyOA in all domains. It indicates that combining different relations among words together is effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Relation vs. Opinion Relation",
"sec_num": "4.3"
},
{
"text": "In this section, we try to prove the necessity of considering word preference in Eq.6. Besides the comparison between CR and CR WP performed in the main experiment in Section 4.2, we further incorporate word preference in aforementioned OnlyOA, named as OnlyOA WP, which only employs opinion relations among words and equals to Eq.6 with \u03bb = 0. Experimental results are shown in Figure 3 . Because of space limitation, we only show the results of the same domains in section 4.3, Form results, we observe that CR WP outperforms CR, and OnlyOA WP outperforms On-lyOA in all domains, especially on precision. These observations demonstrate that considering word preference is very important for opinion targets/words extraction. We believe the reason is that exploiting word preference can provide more fine information for opinion target/word candidates' confidence estimation. Thus the performance can be improved. ",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 387,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The Effectiveness of Considering Word Preference",
"sec_num": "4.4"
},
{
"text": "In this subsection, we discuss the variation of extraction performance when changing \u03bb and \u00b5 in Eq.6. Due to space limitation, we only show the F-measure of CR WP on four domains. Experimental results are shown in Figure 4 and Figure 5 . The left graphs in Figure 4 and 5 present the performance variation of CR WP with varying \u03bb from 0 to 0.9 and fixing \u00b5 = 0.1. The right graphs in Figure 4 and 5 present the performance variation of CR WP with varying \u00b5 from 0 to 0.6 and fixing \u03bb = 0.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 4",
"ref_id": null
},
{
"start": 227,
"end": 236,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 258,
"end": 266,
"text": "Figure 4",
"ref_id": null
},
{
"start": 385,
"end": 393,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Sensitivity",
"sec_num": "4.5"
},
{
"text": "In the left graphs in Figure 4 and 5, we observe the best performance is obtained when \u03bb = 0.4. It indicates that opinion relations and semantic relations are both useful for extracting opinion targets/words. The extraction performance is benefi-cial from their combination. In the right graphs in Figure 4 and 5, the best performance is obtained when \u00b5 = 0.1. It indicates prior knowledge is useful for extraction. When \u00b5 increases, performance, however, decreases. It demonstrates that incorporating more prior knowledge into our algorithm would restrain other useful clues on estimating candidate confidence, and hurt the performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 4",
"ref_id": null
},
{
"start": 298,
"end": 306,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Sensitivity",
"sec_num": "4.5"
},
{
"text": "This paper presents a novel method with graph coranking to co-extract opinion targets/words. We model extracting opinion targets/words as a coranking process, where multiple heterogenous relations are modeled in a unified model to make cooperative effects on the extraction. In addition, we especially consider word preference in co-ranking process to perform more precise extraction. Compared to the state-of-the-art methods, experimental results prove the effectiveness of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://ir-china.org.cn/coae2008.html 3 http://nlp.stanford.edu/software/tagger.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic construction of domain and aspect specific sentiment lexicons for customer review mining",
"authors": [
{
"first": "Juergen",
"middle": [],
"last": "Bross",
"suffix": ""
},
{
"first": "Heiko",
"middle": [],
"last": "Ehrig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, CIKM '13",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juergen Bross and Heiko Ehrig. 2013. Automatic con- struction of domain and aspect specific sentiment lexicons for customer review mining. In Proceed- ings of the 22nd ACM international conference on Conference on information & knowledge man- agement, CIKM '13, pages 1077-1086, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "One seed to find them all: mining opinion features via association",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Hai",
"suffix": ""
},
{
"first": "Kuiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Gao",
"middle": [],
"last": "Cong",
"suffix": ""
}
],
"year": 2012,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "255--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Hai, Kuiyu Chang, and Gao Cong. 2012. One seed to find them all: mining opinion features via association. In CIKM, pages 255-264.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying features in opinion mining via intrinsic and extrinsic domain relevance",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Hai",
"suffix": ""
},
{
"first": "Kuiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jung-Jae",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"C"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "99",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Hai, Kuiyu Chang, Jung-Jae Kim, and Christo- pher C. Yang. 2013. Identifying features in opinion mining via intrinsic and extrinsic domain relevance. IEEE Transactions on Knowledge and Data Engi- neering, 99(PrePrints):1.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mining opinion features in customer reviews",
"authors": [
{
"first": "Mingqin",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingqin Hu and Bing Liu. 2004a. Mining opinion fea- tures in customer reviews. In Proceedings of Con- ference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '04",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004b. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, KDD '04, pages 168-177, New York, NY, USA. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Authoritative sources in a hyperlinked environment",
"authors": [
{
"first": "Jon",
"middle": [
"M"
],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 1999,
"venue": "J. ACM",
"volume": "46",
"issue": "5",
"pages": "604--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604-632, September.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Structure-aware review mining and summarization",
"authors": [
{
"first": "Fangtao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yingju",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2010,
"venue": "COL-ING",
"volume": "",
"issue": "",
"pages": "653--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Yingju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Chu-Ren Huang and Dan Jurafsky, editors, COL- ING, pages 653-661. Tsinghua University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Opinion observer: analyzing and comparing opinions on the web",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Junsheng",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "342--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opin- ions on the web. In Allan Ellis and Tatsuya Hagino, editors, WWW, pages 342-351. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Opinion target extraction using word-based translation model",
"authors": [
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1346--1356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opin- ion target extraction using word-based translation model. In Proceedings of the 2012 Joint Confer- ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1346-1356, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Opinion target extraction using partially supervised word alignment model",
"authors": [
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013a. Opinion target extraction using partially supervised word alignment model.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syntactic patterns versus word alignment: Extracting opinion targets from online reviews",
"authors": [
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kang Liu, Liheng Xu, and Jun Zhao. 2013b. Syntactic patterns versus word alignment: Extracting opinion targets from online reviews.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Opinion target extraction in chinese news comments",
"authors": [
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING (Posters)",
"volume": "",
"issue": "",
"pages": "782--790",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tengfei Ma and Xiaojun Wan. 2010. Opinion tar- get extraction in chinese news comments. In Chu- Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 782-790. Chinese Information Pro- cessing Society of China.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ilda: Interdependent lda model for learning latent aspects and their ratings from online product reviews",
"authors": [
{
"first": "Samaneh",
"middle": [],
"last": "Moghaddam",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Ester",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11",
"volume": "",
"issue": "",
"pages": "665--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samaneh Moghaddam and Martin Ester. 2011. Ilda: Interdependent lda model for learning latent aspects and their ratings from online product reviews. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '11, pages 665-674, New York, NY, USA. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Aspect-based opinion mining from product reviews",
"authors": [],
"year": null,
"venue": "Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '12",
"volume": "",
"issue": "",
"pages": "1184--1184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aspect-based opinion mining from product reviews. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in In- formation Retrieval, SIGIR '12, pages 1184-1184, New York, NY, USA. ACM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the design of lda models for aspect-based opinion mining",
"authors": [
{
"first": "Samaneh",
"middle": [],
"last": "Moghaddam",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Ester",
"suffix": ""
}
],
"year": 2012,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "803--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samaneh Moghaddam and Martin Ester. 2012b. On the design of lda models for aspect-based opinion mining. In CIKM, pages 803-812.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Aspect extraction through semi-supervised modeling",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "339--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjun Mukherjee and Bing Liu. 2012. Aspect extrac- tion through semi-supervised modeling. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers -Vol- ume 1, ACL '12, pages 339-348, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extracting product features and opinions from reviews",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 339-346, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Expanding domain sentiment lexicon through double propagation",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Bu",
"suffix": ""
},
{
"first": "Chun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Opinion word expansion and target extraction through double propagation",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Bu",
"suffix": ""
},
{
"first": "Chun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "1",
"pages": "9--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9-27.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hidden sentiment association in chinese web opinion mining",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xinying",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Honglei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zhili",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaoxun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Swen",
"suffix": ""
},
{
"first": "Zhong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "959--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Su, Xinying Xu, Honglei Guo, Zhili Guo, Xian Wu, Xiaoxun Zhang, Bin Swen, and Zhong Su. 2008. Hidden sentiment association in chinese web opinion mining. In Jinpeng Huai, Robin Chen, Hsiao-Wuen Hon, Yunhao Liu, Wei-Ying Ma, Andrew Tomkins, and Xiaodong Zhang, editors, WWW, pages 959-968. ACM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bootstrapping both product features and opinion words from chinese customer reviews with cross-inducing",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Wang and Houfeng Wang. 2008. Bootstrapping both product features and opinion words from chinese customer reviews with cross-inducing.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Latent aspect rating analysis without aspect keyword supervision",
"authors": [
{
"first": "Hongning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "618--626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Chid Apt, Joydeep Ghosh, and Padhraic Smyth, editors, KDD, pages 618-626. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Phrase dependency parsing for opinion mining",
"authors": [
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lide",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1533--1541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP, pages 1533-1541. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Joint inference for fine-grained opinion extraction",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1640--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640-1649, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extracting and ranking product features in opinion documents",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Suk",
"middle": [
"Hwan"
],
"last": "Lim",
"suffix": ""
},
{
"first": "Eamonn",
"middle": [],
"last": "O'Brien-Strain",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING (Posters)",
"volume": "",
"issue": "",
"pages": "1462--1470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O'Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Chu-Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 1462-1470. Chinese Information Processing Society of China.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Jointly modeling aspects and opinions with a maxent-lda hybrid",
"authors": [
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10",
"volume": "",
"issue": "",
"pages": "56--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a maxent-lda hybrid. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 56-65, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multi-aspect opinion polling from textual reviews",
"authors": [
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Huizhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"K"
],
"last": "Tsou",
"suffix": ""
},
{
"first": "Muhua",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2009,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "1799--1802",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In David Wai-Lok Cheung, Il-Yeol Song, Wesley W. Chu, Xiaohua Hu, and Jimmy J. Lin, editors, CIKM, pages 1799-1802. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Semantic Relations vs. Opinion Relations (\u03bb = 0.5)",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Experimental results when considering word preference",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Opinion words extraction results",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"text": "Examples of Calculated Word Preference",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"text": "Results of Opinion Targets Extraction on Customer Review Dataset",
"content": "<table><tr><td>Methods</td><td>P</td><td>Camera R</td><td>F</td><td>P</td><td>Car R</td><td>F</td><td>P</td><td>Laptop R</td><td>F</td><td>P</td><td>Phone R</td><td>F</td><td>P</td><td>Mp3 R</td><td>F</td><td>P</td><td>Hotel R</td><td>F</td><td>Avg. F</td></tr><tr><td>Hu</td><td>0.63</td><td>0.65</td><td>0.64</td><td>0.62</td><td>0.58</td><td>0.60</td><td>0.51</td><td>0.67</td><td>0.58</td><td>0.69</td><td>0.60</td><td>0.64</td><td>0.61</td><td>0.68</td><td>0.64</td><td>0.60</td><td>0.65</td><td>0.62</td><td>0.587</td></tr><tr><td>DP</td><td>0.71</td><td>0.70</td><td>0.70</td><td>0.72</td><td>0.65</td><td>0.68</td><td>0.58</td><td>0.69</td><td>0.63</td><td>0.78</td><td>0.66</td><td>0.72</td><td>0.69</td><td>0.70</td><td>0.69</td><td>0.67</td><td>0.69</td><td>0.68</td><td>0.683</td></tr><tr><td>Zhang</td><td>0.71</td><td>0.78</td><td>0.74</td><td>0.69</td><td>0.68</td><td>0.68</td><td>0.57</td><td>0.80</td><td>0.67</td><td>0.80</td><td>0.71</td><td>0.75</td><td>0.67</td><td>0.77</td><td>0.72</td><td>0.67</td><td>0.76</td><td>0.71</td><td>0.712</td></tr><tr><td>SAS</td><td>0.72</td><td>0.72</td><td>0.72</td><td>0.71</td><td>0.64</td><td>0.67</td><td>0.59</td><td>0.72</td><td>0.65</td><td>0.78</td><td>0.69</td><td>0.73</td><td>0.69</td><td>0.75</td><td>0.72</td><td>0.69</td><td>0.74</td><td>0.71</td><td>0.700</td></tr><tr><td>Liu</td><td>0.75</td><td>0.81</td><td>0.78</td><td>0.71</td><td>0.71</td><td>0.71</td><td>0.61</td><td>0.85</td><td>0.71</td><td>0.83</td><td>0.74</td><td>0.78</td><td>0.70</td><td>0.82</td><td>0.76</td><td>0.71</td><td>0.80</td><td>0.75</td><td>0.749</td></tr><tr><td>Hai</td><td>0.68</td><td>0.84</td><td>0.76</td><td>0.69</td><td>0.75</td><td>0.72</td><td>0.58</td><td>0.86</td><td>0.72</td><td>0.75</td><td>0.76</td><td>0.76</td><td>0.65</td><td>0.83</td><td>0.74</td><td>0.62</td><td>0.82</td><td>0.75</td><td>0.742</td></tr><tr><td>CR</td><td>0.75</td><td>0.83</td><td>0.79</td><td>0.72</td><td>0.74</td><td>0.73</td><td>0.60</td><td>0.85</td><td>0.70</td><td>0.83</td><td>0.77</td><td>0.80</td><td>0.70</td><td>0.84</td><td>0.76</td><td>0.71</td><td>0.83</td><td>0.77</td><td>0.758</td></tr><tr><td>CR WP</td><td>0.78</td><td>0.84</td><td>0.81</td><td>0.74</td><td>0.75</td><td>0.74</td><td>0.64</td><td>0.85</td><td>0.73</td><td>0.84</td><td>0.76</td><td>0.80</td><td>0.74</td><td>0.84</td><td>0.79</td><td>0.74</td><td>0.82</td><td>0.78</td><td>0.773</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "Results of Opinion Targets Extraction on COAE 2008 and Large",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"text": "Results of Opinion Words Extraction on Customer Review Dataset",
"content": "<table><tr><td>Methods</td><td>P</td><td>Camera R</td><td>F</td><td>P</td><td>Car R</td><td>F</td><td>P</td><td>Laptop R</td><td>F</td><td>P</td><td>Phone R</td><td>F</td><td>P</td><td>Mp3 R</td><td>F</td><td>P</td><td>Hotel R</td><td>F</td><td>Avg. F</td></tr><tr><td>Hu</td><td>0.72</td><td>0.74</td><td>0.73</td><td>0.70</td><td>0.71</td><td>0.70</td><td>0.66</td><td>0.70</td><td>0.68</td><td>0.70</td><td>0.70</td><td>0.70</td><td>0.48</td><td>0.67</td><td>0.56</td><td>0.52</td><td>0.69</td><td>0.59</td><td>0.660</td></tr><tr><td>DP</td><td>0.80</td><td>0.73</td><td>0.76</td><td>0.79</td><td>0.71</td><td>0.75</td><td>0.75</td><td>0.69</td><td>0.72</td><td>0.78</td><td>0.68</td><td>0.73</td><td>0.60</td><td>0.65</td><td>0.62</td><td>0.61</td><td>0.66</td><td>0.63</td><td>0.702</td></tr><tr><td>SAS</td><td>0.73</td><td>0.70</td><td>0.71</td><td>0.75</td><td>0.68</td><td>0.71</td><td>0.72</td><td>0.68</td><td>0.69</td><td>0.71</td><td>0.66</td><td>0.68</td><td>0.64</td><td>0.62</td><td>0.63</td><td>0.66</td><td>0.61</td><td>0.63</td><td>0.675</td></tr><tr><td>Hai</td><td>0.76</td><td>0.74</td><td>0.75</td><td>0.72</td><td>0.74</td><td>0.73</td><td>0.69</td><td>0.72</td><td>0.70</td><td>0.72</td><td>0.70</td><td>0.71</td><td>0.61</td><td>0.69</td><td>0.64</td><td>0.59</td><td>0.68</td><td>0.64</td><td>0.690</td></tr><tr><td>CR</td><td>0.80</td><td>0.75</td><td>0.77</td><td>0.77</td><td>0.74</td><td>0.75</td><td>0.73</td><td>0.71</td><td>0.72</td><td>0.75</td><td>0.71</td><td>0.73</td><td>0.63</td><td>0.69</td><td>0.64</td><td>0.63</td><td>0.68</td><td>0.66</td><td>0.710</td></tr><tr><td>CR WP</td><td>0.80</td><td>0.75</td><td>0.77</td><td>0.80</td><td>0.74</td><td>0.77</td><td>0.77</td><td>0.71</td><td>0.74</td><td>0.78</td><td>0.72</td><td>0.75</td><td>0.66</td><td>0.68</td><td>0.67</td><td>0.67</td><td>0.69</td><td>0.68</td><td>0.730</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF7": {
"num": null,
"text": "Results of Opinion Words Extraction on COAE 2008 and Large",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}