{
"paper_id": "O17-2003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:59:49.049167Z"
},
"title": "An Approach to Extract Product Features from Chinese Consumer Reviews and Establish Product Feature Structure Tree",
"authors": [
{
"first": "Xinsheng",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jiliang University",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Jing",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jiliang University",
"location": {
"country": "China"
}
},
"email": "linjing@cjlu.edu"
},
{
"first": "Ying",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jiliang University",
"location": {
"country": "China"
}
},
"email": "xiaoying@cjlu.edu"
},
{
"first": "Jianzhe",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jiliang University",
"location": {
"country": "China"
}
},
"email": "yujianzhe@cjlu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With the progress of e-commerce and web technology, a large volume of consumer reviews for products is generated continuously, containing rich information regarding consumer requirements and preferences. Although China has the largest e-commerce market in the world, few researchers have investigated how to extract product features from Chinese consumer reviews effectively, let alone how to analyze the relations among product features, which are significant for implementing comprehensive applications. In this research, a framework is proposed to extract product features from Chinese consumer reviews and construct a product feature structure tree. Through three filtering algorithms and a two-stage word segmentation optimization process, phrases are identified from consumer reviews. An expanded rule template, which consists of the elements phrase, POS, dependency relation, governing word, and opinion, is constructed to train a conditional random field (CRF) model. The product features are then extracted based on the CRF. Besides, two indices, frequency and sentiment score, are defined to describe product features quantitatively. Based on these, the product feature structure tree is established through a potential parent node searching process. Furthermore, categories of extensive experiments are conducted on 5,806 experimental corpora from taobao.com, suning.com, and zhongguancun.com. The results of these experiments provide evidence to guide the product feature extraction process. Finally, an application analyzing the influences among product features is conducted based on the product feature structure tree. It provides valuable managerial implications for designers, manufacturers, and retailers.",
"pdf_parse": {
"paper_id": "O17-2003",
"_pdf_hash": "",
"abstract": [
{
"text": "With the progress of e-commerce and web technology, a large volume of consumer reviews for products is generated continuously, containing rich information regarding consumer requirements and preferences. Although China has the largest e-commerce market in the world, few researchers have investigated how to extract product features from Chinese consumer reviews effectively, let alone how to analyze the relations among product features, which are significant for implementing comprehensive applications. In this research, a framework is proposed to extract product features from Chinese consumer reviews and construct a product feature structure tree. Through three filtering algorithms and a two-stage word segmentation optimization process, phrases are identified from consumer reviews. An expanded rule template, which consists of the elements phrase, POS, dependency relation, governing word, and opinion, is constructed to train a conditional random field (CRF) model. The product features are then extracted based on the CRF. Besides, two indices, frequency and sentiment score, are defined to describe product features quantitatively. Based on these, the product feature structure tree is established through a potential parent node searching process. Furthermore, categories of extensive experiments are conducted on 5,806 experimental corpora from taobao.com, suning.com, and zhongguancun.com. The results of these experiments provide evidence to guide the product feature extraction process. Finally, an application analyzing the influences among product features is conducted based on the product feature structure tree. It provides valuable managerial implications for designers, manufacturers, and retailers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the rapid expansion of e-commerce business, the Web has become an excellent source for gathering consumer reviews about products (Turney, 2002; Dave, Lawrence & Pennock, 2003; Dellarocas, 2003; Godes & Mayzlin, 2004; Hu & Liu, 2004a, b; Liu, Hu & Cheng, 2005; Duan, Gu & Whinston, 2008; Forman, Ghose & Wiesenfeld, 2008) . Many product review websites (e.g., Amazon.com, Taobao.com) have been established to collect consumer opinions about products. Consumers also comment on products in their blogs, which are then aggregated by Blogstreet.com, AllConsuming.net, etc. In addition, it has become a common practice for retailers (e.g., Amazon.com, taobao.com, jd.com) or manufacturers to provide online forums that allow consumers to express their opinions about products they have purchased or in which they are interested. Consumer reviews are essential for both retailers and product manufacturers to understand the general responses of consumers to their products. Proper analysis and summarization of consumer reviews can further enable retailers or product manufacturers to gain insight into consumers' opinions about specific features of products (Liu et al., 2005) . Consumer reviews also offer retailers a better understanding of the specific preferences of individual customers. Furthermore, from a consumer perspective, consumer reviews provide valuable information for purchasing decisions.",
"cite_spans": [
{
"start": 134,
"end": 148,
"text": "(Turney, 2002;",
"ref_id": "BIBREF44"
},
{
"start": 149,
"end": 180,
"text": "Dave, Lawrence & Pennock, 2003;",
"ref_id": "BIBREF6"
},
{
"start": 181,
"end": 198,
"text": "Dellarocas, 2003;",
"ref_id": "BIBREF9"
},
{
"start": 199,
"end": 221,
"text": "Godes & Mayzlin, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 222,
"end": 241,
"text": "Hu & Liu, 2004a, b;",
"ref_id": "BIBREF16"
},
{
"start": 242,
"end": 264,
"text": "Liu, Hu & Cheng, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 265,
"end": 291,
"text": "Duan, Gu & Whinston, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 292,
"end": 325,
"text": "Forman, Ghose & Wiesenfeld, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 1150,
"end": 1168,
"text": "(Liu et al., 2005)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "As the number of consumer reviews expands, however, it becomes more difficult for users (e.g., product designers & manufacturers, consumers) to obtain a comprehensive view of consumer opinions pertaining to the products through a manual analysis. Consequently, an efficient and effective analysis technique that is capable of extracting the product features stated by consumers and summarizing the sentiments pertaining to specific product features automatically becomes desirable. This analysis essentially consists of two main tasks: product feature extraction from consumer reviews and opinion orientation identification for these product features (Hu & Liu, 2004a, b; Jindal & Liu, 2006; Wei, Chen, Yang & Yang, 2010) .",
"cite_spans": [
{
"start": 650,
"end": 670,
"text": "(Hu & Liu, 2004a, b;",
"ref_id": "BIBREF16"
},
{
"start": 671,
"end": 690,
"text": "Jindal & Liu, 2006;",
"ref_id": "BIBREF21"
},
{
"start": 691,
"end": 720,
"text": "Wei, Chen, Yang & Yang, 2010)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Product feature extraction is crucial to sentiment analysis, because its effectiveness significantly affects the performance of opinion orientation identification. Several product feature extraction techniques have been proposed in the literature (Hu & Liu, 2004a, b; Kobayashi, Inui, Matsumoto, Tateishi & Fukushima, 2004; Kobayashi, Iida, Inui & Matsumotto, 2005; Wong & Lam, 2005; Bahu & Das, 2015) . However, product feature extraction and opinion orientation identification face huge challenges for Chinese consumer reviews because of the natural complexity of the Chinese language (Zhang, Yu, Xu & Shi, 2011; Song, Yan & Liu, 2012; Li, 2013; Zhou, Wan & Xiao, 2013; Liu, Song, Wang, Li & Lu, 2014) . First, there are no delimiters between the words of Chinese sentences, which makes it difficult to distinguish Chinese phrases. Besides, some Chinese phrases have synonyms; e.g., \"\u7535\u677f\" (electroplax), which occasionally appears in Chinese consumer reviews, is a synonym of \"\u7535\u6c60\" (battery). This kind of product feature cannot be recognized and extracted by frequent-itemset methods. Moreover, the syntax and grammar of Chinese sentences, as well as their structures, are very complex; e.g., the consumer review \"\u7535\u6c60/noun \u8fd8/adverb \u53ef\u4ee5/verb\" (The battery is good) expresses the positive evaluation of consumers for the \"\u7535\u6c60\" (battery). The phrase \"\u53ef\u4ee5\" (can) is a verb, but it acts as an opinion word that modifies the phrase \"\u7535\u6c60\" (battery). That means, in the context of the Chinese language, verbs may also modify nouns or noun phrases and express opinion orientation. Thus, the existing methods that find product features based on adjectives are also not enough for Chinese consumer reviews. In addition, there are some specific correlations among product features according to our observations. Some product features extracted from consumer reviews are attributes of the product, components, or parts, such as function, performance, quality, material, and service, while other product features are the product, components, or parts themselves. For example, \"\u6444\u50cf\u5934\" (camera) and \"\u50cf\u7d20\" (pixel) are two product features. The \"\u6444\u50cf\u5934\" (camera) is a component of an intelligent mobile phone, while the \"\u50cf\u7d20\" (pixel) is an attribute of the \"\u6444\u50cf\u5934\" (camera). There is a description relation between the \"\u50cf\u7d20\" (pixel) and the \"\u6444\u50cf\u5934\" (camera). Therefore, product features are often interrelated. How to extract product features effectively from Chinese consumer reviews and how to establish the interrelations among product features are difficult and challenging tasks. This paper focuses on such a text mining issue for Chinese consumer reviews. More specifically, we will establish a structure tree of product features and infer the key factors influencing the sentiment scores of product features from consumers. The goal is to provide evidence for designers and manufacturers to improve and update their products effectively.",
"cite_spans": [
{
"start": 248,
"end": 268,
"text": "(Hu & Liu, 2004a, b;",
"ref_id": "BIBREF16"
},
{
"start": 269,
"end": 324,
"text": "Kobayashi, Inui, Matsumoto, Tateishi & Fukushima, 2004;",
"ref_id": "BIBREF22"
},
{
"start": 325,
"end": 366,
"text": "Kobayashi, Iida, Inui & Matsumotto, 2005;",
"ref_id": "BIBREF24"
},
{
"start": 367,
"end": 383,
"text": "Wong & Lam, 2005",
"ref_id": "BIBREF50"
},
{
"start": 384,
"end": 401,
"text": "Bahu & Das, 2015)",
"ref_id": "BIBREF1"
},
{
"start": 585,
"end": 612,
"text": "(Zhang, Yu, Xu & Shi, 2011;",
"ref_id": "BIBREF56"
},
{
"start": 613,
"end": 635,
"text": "Song, Yan & Liu, 2012;",
"ref_id": "BIBREF41"
},
{
"start": 636,
"end": 645,
"text": "Li, 2013;",
"ref_id": "BIBREF28"
},
{
"start": 646,
"end": 669,
"text": "Zhou, Wan & Xiao, 2013;",
"ref_id": null
},
{
"start": 788,
"end": 819,
"text": "Liu, Song, Wang, Li & Lu, 2014;",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "With these considerations, a technique framework for extracting product features from Chinese consumer reviews and its applications are proposed. In this framework, a two-stage word segmentation optimization solution is proposed to improve the correctness of word segmentation in support of product feature extraction from Chinese consumer reviews, and an expanded rule template for CRF, to which two new elements, namely governing word and opinion word, are added, is developed to deal with the complex syntax and grammar of the Chinese language and with implicit opinion words. This increases the precision of product feature extraction and is also helpful for the sentiment analysis of product features. Furthermore, a product feature structure tree is constructed considering the natural internal correlations among product features, and an application for inferring the key factors that influence the preference of consumers for a product feature is proposed based on Bayes theory, whose results can be used as evidence for designers, manufacturers, or retailers in product improvement, market management, etc. Finally, 5,806 consumer reviews from taobao.com, suning.com, and zhongguancun.com are retrieved and used as the corpus to explain the applications of the principles and methods proposed in this work. It is an innovative method of implementing comprehensive applications based on Chinese consumer reviews at the product feature level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The remainder of this article is organized as follows: In Sect. 2, we review existing product feature extraction techniques and discuss their fundamental limitations to highlight our research motivation. Subsequently, a technique framework for extracting product features from Chinese consumer reviews and its applications are proposed in Sect. 3. Sect. 4 investigates the methods of extracting product features based on CRF. The quantitative characteristics of product features, including frequency and sentiment score, are explored in Sect. 5. On the basis of these, the product feature structure tree is constructed in Sect. 6. Categories of extensive experiments are conducted in Sect. 7. Sect. 8 gives an example to illustrate the applications of the methods mentioned in this work. Sect. 9 discusses our research work. Finally, we conclude with a summary and some future research directions in Sect. 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Some researchers have devoted themselves to analyzing consumer reviews for valuable information and to implementing applications based on them. These analyses and applications essentially consist of two aspects: product feature extraction and opinion orientation identification. Product feature extraction is the foundation of opinion orientation identification, and opinion orientation identification is the application based on product features. Hu and Liu (2004a, b) assume that product features must be nouns or noun phrases and employ the association rule mining algorithm (Agrawal & Srikant, 1994; Srikant & Agrawal, 1995) to discover all frequent itemsets (i.e., frequently occurring nouns or noun phrases) within a target set of consumer reviews. In addition to association rule mining, other information-extraction-based product feature extraction techniques have also been proposed (Kobayashi, Inui, Matsumoto, Tateishi & Fukushima, 2004; Kobayashi, Iida, Inui & Matsumotto, 2005 ). Popescu and Etzioni employ KnowItAll and propose OPINE to extract product features from consumer reviews automatically. Using a set of domain-independent extraction patterns predefined in KnowItAll, OPINE instantiates specific extraction rules for each product class under examination and then uses these rules to extract possible product features from the input consumer reviews. Wong & Lam (2005) employ Hidden Markov Models and CRF, respectively, as the underlying learning methods to extract product features from auction websites. Liu, Wu & Yao (2006) adopted a supervised method to extract product features and, based on them, compare a variety of products for consumers. Choi and Cardie (2009) presented methods of recognizing product features from consumer reviews based on CRF.",
"cite_spans": [
{
"start": 432,
"end": 453,
"text": "Hu and Liu (2004a, b)",
"ref_id": "BIBREF16"
},
{
"start": 564,
"end": 589,
"text": "(Agrawal & Srikant, 1994;",
"ref_id": "BIBREF0"
},
{
"start": 590,
"end": 614,
"text": "Srikant & Agrawal, 1995)",
"ref_id": "BIBREF42"
},
{
"start": 878,
"end": 934,
"text": "(Kobayashi, Inui, Matsumoto, Tateishi & Fukushima, 2004;",
"ref_id": "BIBREF22"
},
{
"start": 935,
"end": 975,
"text": "Kobayashi, Iida, Inui & Matsumotto, 2005",
"ref_id": "BIBREF24"
},
{
"start": 1479,
"end": 1495,
"text": "Wong & Lam (2005",
"ref_id": "BIBREF50"
},
{
"start": 1632,
"end": 1652,
"text": "Liu, Wu & Yao (2006)",
"ref_id": "BIBREF32"
},
{
"start": 1768,
"end": 1790,
"text": "Choi and Cardie (2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Review",
"sec_num": "2."
},
{
"text": "Opinion orientation identification determines the sentiments of consumers toward product features. Therefore, product feature extraction and opinion orientation identification cannot be separated in practice. Li et al. (2010) researched extraction methods for the opinion words of product features by integrating two CRF variants, Skip-CRF and Tree-CRF. Htay and Lynn (2013) extracted product features and opinion words using pattern knowledge in customer reviews. Yi and Niblack (2005) worked on identifying specific product features and opinion sentences by extracting noun phrases of specific patterns. Zhuang, Feng and Zhu (2006) proposed a supervised learning method based on a dependency grammatical graph to extract product feature and opinion information. Yin and Peng (2009) studied sentiment analysis for product features in Chinese reviews based on semantic association. Ouyang, Liu, Zhang and Yang (2015) investigated feature-level sentiment analysis of movie reviews. And Chen, Qi and Wang (2012) extracted multiple types of feature-level information from consumer reviews.",
"cite_spans": [
{
"start": 211,
"end": 227,
"text": "Li et al. (2010)",
"ref_id": "BIBREF26"
},
{
"start": 364,
"end": 384,
"text": "Htay and Lynn (2013)",
"ref_id": "BIBREF15"
},
{
"start": 475,
"end": 496,
"text": "Yi and Niblack (2005)",
"ref_id": "BIBREF54"
},
{
"start": 620,
"end": 647,
"text": "Zhuang, Feng and Zhu (2006)",
"ref_id": "BIBREF60"
},
{
"start": 776,
"end": 795,
"text": "Yin and Peng (2009)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Orientation Identification",
"sec_num": "2.2"
},
{
"text": "In addition, topic/opinion summary is also an important aspect based on product feature extraction and opinion orientation identification. For example, Miao, Li and Zeng (2010) executed the topic extraction from movie reviews based on CRF. Turney (2002) investigated the unsupervised classification of reviews based on semantic orientation.",
"cite_spans": [
{
"start": 152,
"end": 176,
"text": "Miao, Li and Zeng (2010)",
"ref_id": "BIBREF36"
},
{
"start": 240,
"end": 253,
"text": "Turney (2002)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Orientation Identification",
"sec_num": "2.2"
},
{
"text": "However, the existing product feature extraction and application techniques for the English language cannot be used to deal with the Chinese language directly because of the natural complexity of Chinese mentioned above. Some experts have therefore explored product feature extraction and applications for Chinese consumer reviews. Li, Ye, Li and Law (2009) and Zu and Wang (2014) researched product feature extraction methods for Chinese customer online reviews. Liu and Wang (2013) proposed a keyword extraction method based on a semantic dictionary and lexical chains. Ma and Yan (2014) presented product feature extraction from online reviews based on the LDA model. In order to process Chinese sentences effectively, Liu and Ma (2009) investigated Chinese automatic syntactic parsing issues. Similarly, Li (2013) researched Chinese dependency parsing for product feature extraction. Jiang et al. (2012) also proposed a method to enhance the feature engineering for CRF by using unlabeled data. From the perspective of applications, Chang, Chu, Chen and Hsu (2016) investigated linguistic template extraction for reader-emotion features based on Chinese text. Wang and Meng (2011) studied opinion object extraction based on syntax analysis and dependency analysis. Lv, Zhong, Cai and Wu (2014) investigated the task of aspect-level opinion mining, including the extraction of product entities from Chinese consumer reviews. Besides, Hu, Zheng, Wu and Chen (2013) developed a method of extracting product characteristics from consumer reviews to provide users with accurate product recommendations. Dai, Tsai and Hsu (2014) presented a joint learning method for entity linking constraints from Chinese consumer reviews based on a Markov logic network. Wang and Wang (2016) investigated a comparative network for product competition at the feature level through sentiment analysis. These studies proposed effective methods of extracting product features from Chinese text and applied them to specific research tasks. These methods can be classified into two major approaches: supervised and unsupervised.",
"cite_spans": [
{
"start": 336,
"end": 353,
"text": "Li and Law (2009)",
"ref_id": "BIBREF27"
},
{
"start": 358,
"end": 376,
"text": "Zu and Wang (2014)",
"ref_id": "BIBREF61"
},
{
"start": 461,
"end": 480,
"text": "Liu and Wang (2013)",
"ref_id": "BIBREF30"
},
{
"start": 567,
"end": 584,
"text": "Ma and Yan (2014)",
"ref_id": "BIBREF35"
},
{
"start": 725,
"end": 742,
"text": "Liu and Ma (2009)",
"ref_id": "BIBREF33"
},
{
"start": 815,
"end": 824,
"text": "Li (2013)",
"ref_id": "BIBREF28"
},
{
"start": 899,
"end": 918,
"text": "Jiang et al. (2012)",
"ref_id": "BIBREF20"
},
{
"start": 1060,
"end": 1079,
"text": "Chen and Hsu (2016)",
"ref_id": "BIBREF2"
},
{
"start": 1179,
"end": 1199,
"text": "Wang and Meng (2011)",
"ref_id": "BIBREF47"
},
{
"start": 1292,
"end": 1320,
"text": "Lv, Zhong, Cai and Wu (2014)",
"ref_id": "BIBREF34"
},
{
"start": 1622,
"end": 1646,
"text": "Dai, Tsai and Hsu (2014)",
"ref_id": "BIBREF5"
},
{
"start": 1772,
"end": 1792,
"text": "Wang and Wang (2016)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Orientation Identification",
"sec_num": "2.2"
},
{
"text": "Supervised product feature extraction techniques require a set of preannotated review sentences as training examples, while unsupervised product feature extraction approaches automatically extract product features from consumer reviews without involving training examples. Generally, the supervised methods achieve better precision, recall, or F-score than the unsupervised methods because the training samples can be set according to specific research or application goals (Li et al., 2009; Zu & Wang, 2014; Ma & Yan, 2014) . This work focuses on supervised product feature extraction issues and their applications.",
"cite_spans": [
{
"start": 494,
"end": 511,
"text": "(Li et al., 2009;",
"ref_id": "BIBREF27"
},
{
"start": 512,
"end": 528,
"text": "Zu & Wang, 2014;",
"ref_id": "BIBREF61"
},
{
"start": 529,
"end": 544,
"text": "Ma & Yan, 2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Orientation Identification",
"sec_num": "2.2"
},
{
"text": "Aiming at Chinese consumer reviews, a technique framework for product feature extraction is proposed that consists of three key phases: word segmentation and optimization, product feature extraction based on CRF, and the quantitative description of product features. The proposed technique begins with the preprocessing of the input consumer reviews, where the preprocessing tasks include word segmenting & POS tagging, reconstructing noun phrases based on N-grams, filtering, and optimizing. Subsequently, the product feature extraction process employs CRF to identify product features, for which a training set and a rule template for constructing the CRF model are developed. Based on the extracted product features and the results of word segmentation, the quantitative descriptions of product features, including the frequency of a product feature and the sentiment score of a product feature, are constructed. On the basis of these, the product feature structure tree is established based on the fact that product features are interrelated. Figure 1 presents the framework of the product feature extraction technique for Chinese consumer reviews. In the following subsections, we will depict the detailed design and implementation of each phase.",
"cite_spans": [],
"ref_spans": [
{
"start": 1034,
"end": 1042,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Product Feature Extraction Technique for Chinese Consumer Reviews",
"sec_num": "3."
},
{
"text": "Preprocessing techniques consist of word segmenting and POS tagging, reconstructing noun phrases based on N-grams, filtering, and optimizing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing Techniques",
"sec_num": "3.1"
},
{
"text": "Word segmenting and POS tagging start with the input review sentence s and end with the pairs (w_i, p_i), where w_i is the i-th word contained in sentence s, and p_i is the POS tagging result of w_i. For the convenience of presentation and measurement, phrase (word), sentence, and consumer review are defined respectively as in Figure 2 . In this work, the word refers to the phrase in general unless there are specific instructions. For the review sentence \"\u624b\u673a\u7684\u5c4f\u5e55\u5f88\u6a21\u7cca\" (The screen of this phone is very indistinct), the word segmenting and POS tagging results are as follows: (\u624b\u673a(phone), n), (\u7684, ude1), (\u5c4f\u5e55(screen), n), (\u5f88(very), d), (\u6a21\u7cca(indistinct), a), as illustrated in Figure 2 . At the same time, the dependency relations among these words and their governing words are also identified through a syntactic parsing process based on the consumer review (Liu & Ma, 2009; Wang & Meng, 2011; Li, 2013; Dai et al., 2014) . The objective of this phase is to divide the review sentences into discrete phrases, annotate their POS tags, and provide the data resource for the next analysis phases.",
"cite_spans": [
{
"start": 826,
"end": 842,
"text": "(Liu & Ma, 2009;",
"ref_id": "BIBREF33"
},
{
"start": 843,
"end": 861,
"text": "Wang & Meng, 2011;",
"ref_id": "BIBREF47"
},
{
"start": 862,
"end": 871,
"text": "Li, 2013;",
"ref_id": "BIBREF28"
},
{
"start": 872,
"end": 889,
"text": "Dai et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 311,
"end": 319,
"text": "Figure 2",
"ref_id": null
},
{
"start": 649,
"end": 657,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phase A Word Segmenting and POS Tagging",
"sec_num": null
},
{
"text": "The word segmenting process may sometimes generate incorrect results. For example, the phrase \"\u5206\u8fa8\u7387\" (resolution) is often divided into three independent phrases: \"\u5206\" (divide), \"\u5206\u8fa8\" (distinguish), and \"\u7387\" (rate). However, \"\u5206\u8fa8\u7387\" (resolution) should be a complete phrase for a digital product, e.g., an intelligent mobile phone. Obviously, this is an incorrect word segmenting result. In order to deal with this problem, it is necessary to recombine these fragmental phrases into their correct form. A reconstruction method based on n-grams is introduced, which consists of two steps: (a) identifying the number n of the n-gram method reasonably; (b) constructing new phrases according to the given number n. Using w_i as an example and assuming n=3, new phrases can be generated by recombining it with adjacent words from the left and right directions, respectively, e.g., (w_{i-1}, w_i), (w_i, w_{i+1}), (w_{i-2}, w_{i-1}, w_i), (w_{i-1}, w_i, w_{i+1}), and (w_i, w_{i+1}, w_{i+2}). After this reconstructing process, the phrase \"\u5206\u8fa8\u7387\" (resolution) that was incorrectly segmented will be restored to its correct form. Likewise, all other incorrectly segmented phrases can also be restored to their correct forms through this kind of reconstructing process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "Unfortunately, this phase may also lead to other erroneous phrases due to over-combination. Thus we also need to optimize the results generated in the reconstructing phase. (\u2160) Frequency filtering. In general, some of the new combined phrases, which are incorrect, such as \"\u5c4f\u5e55\u5f88\" (screen very) or \"\u7684\u5c4f\u5e55\" ('s screen), seldom occur in consumer reviews. Therefore, we can remove them through a frequency filtering process by setting a reasonable threshold. An expression for frequency filtering is generalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "If freq(p) < \u03b8 then remove p from \u2126 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "where p is a phrase generated from Phase B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "And \u2126 is the group of the phrases p. freq(\u00b7) is the function that calculates the number of times p appears in consumer reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "\u03b8 is the threshold of the frequency filtering process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "This filtering rule means that any phrase p whose frequency of appearing in consumer reviews is less than \u03b8 will be removed from \u2126.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "(\u2161) Cohesive filtering. However, there is another kind of phrase, such as \"\u5c31\u8fd9\u6837\" (That's it), which consists of two frequent words, \"\u5c31\" and \"\u8fd9\u6837\" (this/it), and is itself a frequent phrase because of the expression habits of Chinese. But it is not a valid phrase. This kind of phrase still cannot be removed through the frequency filtering process alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "According to our observation, the constitute elements of a phrase, for example \"\u5206\u8fa8 (distinguish)\" and \"\u7387(rate)\" are two constitute elements of the phrase \"\u5206\u8fa8\u7387(resolution)\", are always strongly coupled among them. That means the cohesive among them is very strong. However, the cohesive among the constitute elements of the over-combination phrases generated from Phase B is weak because the combination form of these elements is seldom or may not exist at consumer reviews at all. Therefore, we can use cohesive to remove these phrases from the results of Phase B. The cohesive among the constitute elements of a phrase is generalized as follows (Li et al., 2009) :",
"cite_spans": [
{
"start": 646,
"end": 663,
"text": "(Li et al., 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "\" 1 and \" \u2282 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "where \" is the frequency of phrase occurring at the results of original word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "\" is one of the constitute elements of phrase .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "\" is the frequency of the constitute elements \" of phrase occurring at the results of original word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "Then, the expression of cohesive filtering is generalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "If then is not a correct phrase 3Through cohesive filtering process, the over-combined frequency phrases that consist of two frequency words can be removed from phrase set .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
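{
"text": "Expressions (2)-(3) can be sketched as follows; the min-based cohesion formula is one common formulation and may differ in detail from Li et al. (2009), and all names are illustrative:

```python
# Cohesion of a phrase: its corpus frequency relative to that of its rarest
# constituent word. Valid phrases score high; chance combinations of two
# independently frequent words score low.
def cohesion(phrase_freq, constituent_freqs):
    return phrase_freq / min(constituent_freqs)

def cohesion_filter(candidates, threshold):
    # candidates maps phrase -> (phrase_freq, [constituent frequencies])
    return {p for p, (fp, fws) in candidates.items() if cohesion(fp, fws) >= threshold}
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},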
{
"text": "(\u2162) Left entropy and right entropy filtering. In addition, a complete phrase always has various neighbors including left neighbors and right neighbors. If a phrase has a fixed neighbor either left neighbor or right neighbor, it is always not a complete phrase. For example, phrase \"\u8bfa\u57fa\u4e9a\" (Nokia: a band of mobile phone ) should be a complete phrase. But it is always divided into two separated words \"\u8bfa\u57fa 4 \" and \"\u4e9a 5 \". Although the process of reconstructing phrase can generate its complete form \"\u8bfa\u57fa\u4e9a(Nokia)\", but some incorrect word segmentation results such as \" \u8bfa \u57fa \" and \" \u4e9a \" still exist at the original word segmentation results. Therefore, it is necessary to remove these phrases from the original word segmentation results to keep the accuracy of word segmentation results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "The calculation models of the left entropy and the right entropy are defined as follows, respectively (Li et al., 2009) :",
"cite_spans": [
{
"start": 102,
"end": 119,
"text": "(Li et al., 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "Left entropy: \u2211 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "where is the number of the ith left neighbor appearing at the results of the original word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "is the number of the current phrase appearing at the results of the original word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "is the left entropy of the current phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Word segmenting and its POS tagging for a case Phase B Reconstructing Noun Phrase based on N-gram",
"sec_num": null
},
{
"text": "\u2211 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right entropy:",
"sec_num": null
},
{
"text": "where is the number of the ith right neighbor appearing at the results of the original word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right entropy:",
"sec_num": null
},
{
"text": "is the right entropy of the current phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right entropy:",
"sec_num": null
},
{
"text": "On the basis of these, an expression of the left entropy and right entropy filtering is generalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right entropy:",
"sec_num": null
},
{
"text": "or then is not a complete phrase 6where is the threshold of the left entropy and right entropy filtering process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "If",
"sec_num": null
},
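{
"text": "Equations (4)-(6) can be sketched as follows, assuming neighbor counts have already been collected from the segmented corpus; names are illustrative, and the probabilities here are normalized over the neighbor totals:

```python
import math

# Branch entropy of the left (or right) neighbor distribution of a phrase.
# A phrase with one fixed neighbor gets entropy 0 and is judged incomplete.
def branch_entropy(neighbor_counts):
    total = sum(neighbor_counts.values())
    return -sum((c / total) * math.log(c / total) for c in neighbor_counts.values())

def is_complete(left_counts, right_counts, theta_e):
    return branch_entropy(left_counts) >= theta_e and branch_entropy(right_counts) >= theta_e
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing Word Segmentation Process",
"sec_num": null
},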
{
"text": "Through reconstructing phrase and three filtering processes, some incorrect word segmentations are removed from the results of original word segmentation and some fragmented phrases are restored also. Besides, some valuable new phrases corresponding to specific research object can also be found during these processes. By adding these new 4 \u8bfa\u57fa\u4e9a is a transliteration word of brand name of mobile phone in Chinese language. There is no word corresponding to \"\u8bfa\u57fa\" in English language. 5 Likeness, there is no word corresponding to \"\u4e9a\" in English language.",
"cite_spans": [
{
"start": 340,
"end": 341,
"text": "4",
"ref_id": null
},
{
"start": 483,
"end": 484,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing Word Segmentation Process",
"sec_num": "3.2"
},
{
"text": "Chinese Consumer Reviews and Establish Product Feature Structure Tree phrases into the user dictionary which is the important evidences of word segmentation process, then the word segmentation process will restart again based on this extended user dictionary. Thus, the process of word segmentation in this work contains two stages which is presented in Figure 3 . These two stages can optimize the results of word segmentation to provide valid data resources for the next product feature extraction process. ",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 362,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "An Approach to Extract Product Features from 63",
"sec_num": null
},
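{
"text": "The two-stage idea can be sketched with a toy forward maximum-matching segmenter: stage one segments with the base dictionary, newly mined phrases are added to the user dictionary, and stage two re-segments. This is a simplification of the actual segmenter used in the paper; all names are illustrative:

```python
# Greedy forward maximum matching: at each position take the longest
# dictionary word (falling back to a single character).
def forward_max_match(text, dictionary, max_len=5):
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + length] in dictionary or length == 1:
                words.append(text[i:i + length])
                i += length
                break
    return words

# Stage 1 uses the base dictionary; Stage 2 re-segments with the dictionary
# extended by the phrases mined through the three filtering processes.
def two_stage_segment(text, base_dict, mined_phrases):
    return forward_max_match(text, base_dict | mined_phrases)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing Word Segmentation Process",
"sec_num": null
},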
{
"text": "The CRF (Lafferty, McCallum & Pereira, 2001; Jakob & Gurevych, 2010 ) is a sequence modeling framework that can solve the label bias problem in a principled way. CRF has a single exponential model for the joint probability of the entire label sequence given the observation sequence which assign a well-defined probability distribution over possible labeling, trained by maximum likelihood or MAP estimation. Therefore, the weights of features at different states can be traded off against each other. CRF perform better than HMMs and MEMMs when the true data distribution has higher-order dependencies than the model, as is often the case in practice (Zheng, Lei, Liao & Chen, 2013; Zhang & Li, 2015) . With these considerations, CRF is employed to extract product features from Chinese consumer reviews in this work. The principles of CRF can be described as follows:",
"cite_spans": [
{
"start": 8,
"end": 44,
"text": "(Lafferty, McCallum & Pereira, 2001;",
"ref_id": "BIBREF25"
},
{
"start": 45,
"end": 67,
"text": "Jakob & Gurevych, 2010",
"ref_id": "BIBREF19"
},
{
"start": 652,
"end": 683,
"text": "(Zheng, Lei, Liao & Chen, 2013;",
"ref_id": "BIBREF58"
},
{
"start": 684,
"end": 701,
"text": "Zhang & Li, 2015)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Product Feature Extraction based on CRF",
"sec_num": "4."
},
{
"text": "Let is a random variable over data sequences to be labeled. is a random variable over corresponding label sequences. And , , \u22ef , might range over natural language sentences, and denotes the th phrase in . , , \u22ef , range over POS taggings of those sentence s, and is the POS tag of the phrase . It is illustrated in Figure 4 . The random variables and are jointly distributed. CRF, with the known observation data sequence , calculate the conditional probability | . As a result, the POS tag sequence that corresponds to the maximum value of the conditional probability | will be label sequence of the .",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Product Feature Extraction based on CRF",
"sec_num": "4."
},
{
"text": "The conditional probability | can be calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Product Feature Extraction based on CRF",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "| exp \u2211 \u2211 , , , \u2211 \u2211 , ,",
"eq_num": "(7)"
}
],
"section": "Product Feature Extraction based on CRF",
"sec_num": "4."
},
{
"text": "where , , , is the transfer character function. It denotes that the label corresponding to the 1 th element in the observation sequence is , and the label corresponding to the th element in the observation sequence is . , , is the status character function. It denotes that the label corresponding to the ith element in the observation sequence is . and are the weights for the transfer character function and the status character function, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Product Feature Extraction based on CRF",
"sec_num": "4."
},
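{
"text": "Equation (7) can be illustrated with a tiny brute-force linear-chain CRF that enumerates every label sequence to compute the normalizer Z(X); the transition weights play the role of the weighted transfer feature functions and the emission weights that of the status feature functions. All names and weights below are illustrative:

```python
import itertools
import math

# P(y|x) for a toy linear-chain CRF, normalized by enumerating all sequences.
def crf_prob(x, y, labels, trans_w, emit_w):
    def raw_score(seq):
        s = sum(emit_w.get((seq[i], x[i]), 0.0) for i in range(len(x)))
        s += sum(trans_w.get((seq[i - 1], seq[i]), 0.0) for i in range(1, len(x)))
        return math.exp(s)
    z = sum(raw_score(seq) for seq in itertools.product(labels, repeat=len(x)))
    return raw_score(tuple(y)) / z
```

Real implementations (e.g. CRF++ or sklearn-crfsuite) replace the enumeration with forward-backward dynamic programming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Product Feature Extraction based on CRF",
"sec_num": null
},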
{
"text": "According to the principle of CRF, the process of extracting product feature from the results of word segmentation mainly contains two tasks: annotating train set and designing rule template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4. Undirected graph of conditional random fields",
"sec_num": null
},
{
"text": "Annotating train set, based on the results of the preprocessing phase including POS tag, dependency relations, and governing words, is to identify the opinion words, product features and their types that is presented in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 228,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Annotating Train Set",
"sec_num": "4.1"
},
{
"text": "For Chinese language, opinion words may also be other kinds of POSs, not just adjective. For example, Chinese phrase \"\u53ef\u4ee5(can) 6 \" is a verb but it may express a positive opinion of consumer sometimes. This is one of the notable differences between Chinese language and English language. However, these phrases are usually not included in traditional opinion word set. This leads to the inaccuracy of the sentiment analysis for product features inevitably, Chinese Consumer Reviews and Establish Product Feature Structure Tree especially for Chinese product features. In order to analyze Chinese product features effectively, it is necessary to identify this kind of opinion words. Table 1 presents these unusual opinion words (partial) based on the analysis for Chinese language at preprocessing phase. Using them, many nouns or noun phrases can be identified and evaluated. This is high significant for product feature extraction from Chinese consumer reviews and its sentiment analysis. Product feature identifying is a crucial step for supervised feature extraction method. It will affect the validity of product feature extraction directly. A reasonable size of train set is necessary to keep the accuracy of product feature extraction. Therefore, it is a time-consuming manual annotating process.",
"cite_spans": [],
"ref_spans": [
{
"start": 681,
"end": 688,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Opinion Word Identifying",
"sec_num": "4.1.1"
},
{
"text": "In general, the product features extracted from consumer reviews include contents and types. For example, some product features refer to the product, and some product features refer to the components/parts constituting this product while some product features refer to the attributes of the product or the components/parts. Furthermore, these attributes can be grouped into the function, performance, quality, and service and so on. Distinguishing these product features carefully can help designer, manufacturer, or retailer to insight into the correlation and influence characters among them. It provides evidences for deep comprehensive applications based on product features. Therefore, identifying feature type is very necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Type Identifying",
"sec_num": "4.1.3"
},
{
"text": "Considering the types of product features and their classifications as well as the interrelations among them, a hierarchical structure for product features can be constructed which is presented in Figure 6 . This hierarchical structure consists of two parts: basic product structure and the product features describing the attributes of the nodes in basic product structure such as function, quality, and (or) service. Basic product structure consists of root node (product), components, and parts which may also be extracted from consumer reviews and are product features. And the attributed product features including function, quality, and service are the expanded descriptions to corresponding product, component, or part. This product feature structure tree connects the attributed product features with corresponding product, component or parts. It is the foundation for implementing deep comprehensive applications based on product features. ",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 205,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Feature Type Identifying",
"sec_num": "4.1.3"
},
{
"text": "Product feature extraction based on CRF need a rule template to train its model which is the core module of CRF to guide product feature extraction process. According to the requirements of our research works, an approach of designing the rule template for Chinese product feature extraction is proposed. It mainly includes three aspects of works such as the core elements of rule template, the unit structure of rule template, and the organization form of rule template. Considering the characters of Chinese language, the core elements that consist of rule template are presented in Table 2 which contains word elements (including phrase, POS, and context), syntactic elements (including dependency relations and governing words), and sentiment element (opinion words). Each element is also explained in detail in Table 2 . These elements describe the current phrase and the concerned information around it that are very useful to identify product feature. The utilization unit of these elements can be described as a three tuple , \u03a9, \" \" which is explained in Figure 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 585,
"end": 592,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 816,
"end": 823,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1063,
"end": 1071,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rule Template Designing",
"sec_num": "4.2"
},
{
"text": "Where denotes the position information of the elements. \u03a9 denotes the content information of the element such as phrase (0), POS (1), dependency relation (2), governing word (3), and opinion word (4). And denotes the value corresponding to the element that is determined based on and \u03a9. For example, the unit [1, 1,\"n\"] means that the POS of the phrase that is next to the current phrase is a noun. Using this mode, we can design the contents at a given position to deal with the various expression forms of Chinese language. In practice, the elements in Table 2 are always combined when establishing the rule template to increase the accuracy and efficiency of extracting product features. The combination forms of elements and its implications are presented in Figure 8 . between two phrases and what is the governing word of this dependency relation. These combination utilizations of the elements, together with their sole utilizations, form a complex architecture of rule template which is illustrated in Figure 9 . Based on it, product feature extraction for specific task can be achieved well.",
"cite_spans": [],
"ref_spans": [
{
"start": 555,
"end": 562,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 763,
"end": 771,
"text": "Figure 8",
"ref_id": null
},
{
"start": 1010,
"end": 1018,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rule Template Designing",
"sec_num": "4.2"
},
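{
"text": "The three-tuple units can be sketched as follows, with each token of an annotated sentence stored as a (phrase, POS, dependency relation, governing word, opinion word) tuple; the field indices follow the element numbering above, and all names are illustrative:

```python
# Check one rule-template unit (offset, field, value) at token position cur.
# Field indices: 0 phrase, 1 POS, 2 dependency relation, 3 governing word,
# 4 opinion word. The unit (1, 1, 'n') means: the POS of the next phrase is a noun.
def unit_matches(unit, sentence, cur):
    offset, field, value = unit
    idx = cur + offset
    if not 0 <= idx < len(sentence):
        return False
    return sentence[idx][field] == value

# A template rule fires only when all of its units match.
def rule_matches(rule, sentence, cur):
    return all(unit_matches(u, sentence, cur) for u in rule)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Template Designing",
"sec_num": null
},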
{
"text": "Based on train set and rule template, the models of CRF can be established through in-depth learning process which is presented in Figure 10 . This learning process constructs a large amount of function sets which will be used in models to calculate the conditional probability of elements co-occurring with the form of rule unit description at consumer reviews. Then these results are used to calculate the probabilities at the transfer character and those of the state character in Equation (7), respectively. ",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 140,
"text": "Figure 10",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Figure 9. General organization form of rule template",
"sec_num": null
},
{
"text": "Quantitative description is the foundation of analyzing product features precisely. In this work, the quantitative characters of product features are investigated from two aspects: the frequency of product feature and the sentiment score of product feature which reflect the extent of consumer paying attention to them, and the positive or negative feeling of consumer for them, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Characters of Product Features",
"sec_num": "5."
},
{
"text": "The frequency of product feature occurring at consumer reviews reflects the extent of customer paying attention to it. For example, consumer maybe like a product feature very much or disappoint very much when the frequency of it is very high. The frequency of a product feature occurring at consumer reviews is generalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency of Product Feature Occurring at Consumer Reviews",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "_ \u2211",
"eq_num": "(8)"
}
],
"section": "Frequency of Product Feature Occurring at Consumer Reviews",
"sec_num": "5.1"
},
{
"text": "where denotes the number of all consumer reviews. denotes the number of the th product feature appearing at the th consumer review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency of Product Feature Occurring at Consumer Reviews",
"sec_num": "5.1"
},
{
"text": "_ denotes the frequency of the th product feature occurring at consumer reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency of Product Feature Occurring at Consumer Reviews",
"sec_num": "5.1"
},
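{
"text": "Equation (8) amounts to summing a feature's occurrence counts over all reviews; a minimal sketch with illustrative names:

```python
# freq_j = sum over reviews i of n_ij, the count of feature j in review i.
def feature_frequency(reviews, feature):
    return sum(review.count(feature) for review in reviews)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency of Product Feature Occurring at Consumer Reviews",
"sec_num": null
},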
{
"text": "Generally, the evaluation of consumers to a product feature is either positive or negative, and its strength is different as well. How to describe this kind of distinguishes and how to measure its strength are very important to insight into the preference of consumers precisely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "After analyzing 3,000 consumer reviews, we find that the language pattern of consumer evaluating a product feature is mainly manifested as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": ". ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": ". ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "where denotes a product feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": ". denotes the adjective that modifies product feature . And . denotes the adverb that modifies the adjective .. The adverb . and the adjective . modify the product feature together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "The adverb . and the adjective . that modify the product features are always qualitative descriptions at consumer reviews. In order to describe the strengths of these adjectives . and their polarity as well as those of adverb . for the goal of calculation and comparison, the adjective",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": ". and the adverb . should be transformed to numerical value according to their strength and polarity. In this work, the adjective . is defined as the range [-9, +9] , and the adverb . is also defined as the range [-9, +9] . From 1 to 9, strength is increasing gradually. And the minus sign denotes opposite polarity (namely negative). Then, the sentiment score of the th product feature is generalized as follows: ",
"cite_spans": [
{
"start": 156,
"end": 164,
"text": "[-9, +9]",
"ref_id": null
},
{
"start": 213,
"end": 221,
"text": "[-9, +9]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "where _ denotes the sentiment score of the th product feature . , , and denote the number of the positive consumer reviews concerned with the th product feature , the number of the negative consumer reviews concerned with the th product feature , and the number of the neutral consumer reviews (the consumer review that has multiple different polarity opinion words is defined as neutral consumer review in this work because it is difficult to identify its exact polarity) concerned with the th product feature , respectively. _ denotes the score of the adjective nearby the th product feature at the th positive consumer review. And _ denotes the strength of the adverb that modifies the nearest adjective at the th positive consumer review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "_ denotes the sentiment score of the adjective nearby the th product feature at the th negative consumer review. _ denotes the strength of the adverb that modifies the nearest adjective at the th negative consumer review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "is the number of the positive adjective that correspond to product feature at the th neutral consumer review, and is the number of the negative adjective that correspond to product feature at the th neutral consumer review. _ _ 1 denotes the sentiment score of the 1th positive adjective of the th neutral consumer review, and _ _ 1 denotes the strength of the adverb that modifies the 1th positive adjective at the th neutral consumer review. Likeness, _ _ 2 denotes the sentiment score of the 2th negative adjective of the th neutral consumer reviews, and _ _ 2 denotes the strength of the adverb that modifies the 2th negative adjective at the th neutral consumer review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
{
"text": "The sentiment score reflects the preference of consumers to a product feature and its extent comprehensively. It can provide the evidences for retailer, designer, or manufacturer to precisely implement product improvement, and market strategy et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": "5.2"
},
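{
"text": "One plausible reading of this aggregation, simplified to ignore the positive/negative/neutral split, is that each mention contributes the product of its adverb strength and adjective score, averaged over all mentions; the exact formula in the paper may weight the three review classes differently, and all names here are illustrative:

```python
# evaluations: list of (adv_strength, adj_score) pairs, each graded in [-9, +9].
def sentiment_score(evaluations):
    if not evaluations:
        return 0.0
    return sum(adv * adj for adv, adj in evaluations) / len(evaluations)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Score of Product Feature",
"sec_num": null
},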
{
"text": "Product features that correspond to the attributes of the product, components, or parts should be connected with relevant objects (namely product, components, or parts) in order to implement in-depth analysis and comprehensive applications at product features level. According to the classifications of product features, product features form a tree structure in general which is presented in Figure 6 , namely product feature structure tree. In order to construct this product feature structure tree, a basic product structure is employed which is an existing product structure and used as frame, and the nodes of it are also the potential parent nodes for attributed product features. It needs to be noted that the nodes of the basic product structure should also the product features extracted from consumer reviews. Therefore, the key effort of constructing product feature structure tree is to find corresponding parent nodes for each attributed product feature from the results of word segmentation, and compare the corresponding parent nodes with the nodes of basic product structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 401,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Product Feature Structure Tree Constructing",
"sec_num": "6."
},
{
"text": "The potential parent nodes of current product features are always the parts, components, or even product. They are noun phrases. And they always co-exist with these attributed product features. Besides, considering the expression habits of Chinese consumer reviews e.g. some consumers may mention the parts or product first when they comment an object, and then evaluate its attributes for example \"\u7167\u76f8\u673a\u7684\u50cf\u7d20\u592a\u4f4e(The pixels of the camera is too poor)\" while some other consumers may evaluate the attributes of the parts or product first, and then mention the parts or product for example \"\u7eed\u822a\u65f6\u95f4\u957f(long battery life)\uff0c\u7535\u6c60\u6760\u6760\u7684(the battery very good)\". Thus, keeping the current product feature as a central point, the process of finding potential parent node for current product feature is to search the phrase that satisfies with specific POS and type requirements, or the dependency relation with current product feature based on given step-length from left and right direction illustrated as Figure 11 . The pseudo-code description for the algorithm of finding potential parent node for current product feature is presented in Figure 12 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 983,
"end": 992,
"text": "Figure 11",
"ref_id": "FIGREF6"
},
{
"start": 1118,
"end": 1127,
"text": "Figure 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "Finding Potential Parent Node for Current Product Feature",
"sec_num": "6.1"
},
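{
"text": "The bidirectional, step-length-bounded search of Figures 11-12 can be sketched as follows, with tokens stored as (phrase, POS, feature type) triples; the names and type labels are illustrative:

```python
# Search alternately left and right of the feature at position idx, widening
# the window one step at a time, for the nearest noun typed as a product,
# component, or part.
def find_parent(tokens, idx, max_step=3,
                parent_pos=('n',), parent_types=('product', 'component', 'part')):
    for step in range(1, max_step + 1):
        for j in (idx - step, idx + step):
            if 0 <= j < len(tokens):
                phrase, pos, ftype = tokens[j]
                if pos in parent_pos and ftype in parent_types:
                    return phrase
    return None
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Potential Parent Node for Current Product Feature",
"sec_num": null
},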
{
"text": "It is necessary to confirm whether a potential parent node of the current attributed product feature exists at basic product structure or not before adding the attributed product features into basic product structure. Comparing the similarity between the potential parent nodes and the nodes of basic product structure is a valid measure. Considering the characters of Chinese language, the similarity between the potential parent nodes and the nodes of basic product structure is calculated from two aspects: literal similarity and context similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between Potential Parent Nodes and the Nodes of Basic Product Structure",
"sec_num": "6.2"
},
{
"text": "Word is the basic unit of constructing a phrase. For Chinese language, many phrases whose meanings are similar always contain the same words (Xia, 2007) . Based on these facts, the similarity between potential parent nodes and the nodes of basic product structure can be calculated through the status of words appearing at these nodes (product features) namely literal similarity which is influenced by two factors: quantitative and position (Wang, Zhou & Sun, 2012) .",
"cite_spans": [
{
"start": 141,
"end": 152,
"text": "(Xia, 2007)",
"ref_id": "BIBREF52"
},
{
"start": 442,
"end": 466,
"text": "(Wang, Zhou & Sun, 2012)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": "Let and are two product features that the similarity between them need to be calculated. The literal similarity , between and is generalized as follows (Xia, 2007; Wang et al., 2012) :",
"cite_spans": [
{
"start": 152,
"end": 163,
"text": "(Xia, 2007;",
"ref_id": "BIBREF52"
},
{
"start": 164,
"end": 182,
"text": "Wang et al., 2012)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", \u03b1 | , | | | | , | | | /2 \u03b2 \u2211 , \u2211 \u2211 , \u2211 /2",
"eq_num": "(12)"
}
],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": "and 0 , 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": "where \u03b1 and \u03b2 are the weights that describe the importance of quantitative factor at the literal similarity calculation and the importance of position factor at the literal similarity calculation respectively, and \u03b1 \u03b2 1. In addition, defines the ratio of the number of words at these two product features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": "min | | | | , | | | |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": ", denotes the weight of the th word of the product feature .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": ", , if at , 0,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
{
"text": "where | | and | | denote the number of words at product feature and product feature , respectively. denotes the th word of product feature . , denotes the set of the words that are contained in both product feature and product feature at the same time. | , | is the number of the set , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literal Similarity",
"sec_num": "6.2.1"
},
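Under our reading of Eq. (12), the literal similarity combines a quantitative term (shared words relative to average length) and a position/weight term (weight mass of shared words relative to total weight). The sketch below encodes that reading; the function name, the default \u03b1 = \u03b2 = 0.5, and the fallback uniform word weights are illustrative assumptions, not values from the paper.

```python
# Sketch of the literal similarity of Eq. (12) between two product
# features, each given as a list of already-segmented words.

def literal_similarity(f1, f2, alpha=0.5, beta=0.5, weights=None):
    """Weighted sum of a quantitative term and a position/weight term.

    alpha + beta must equal 1; `weights` maps word -> weight (uniform
    weight 1.0 is assumed for words not listed)."""
    assert abs(alpha + beta - 1.0) < 1e-9
    common = set(f1) & set(f2)              # W(F_i, F_j)
    if not common:
        return 0.0
    # Quantitative factor: shared-word count over the average length.
    quant = len(common) / ((len(f1) + len(f2)) / 2)
    # Position/weight factor: weight mass of shared words over the
    # average total weight mass of both features.
    w = weights or {}
    wt = lambda word: w.get(word, 1.0)
    shared_mass = (sum(wt(x) for x in f1 if x in common)
                   + sum(wt(x) for x in f2 if x in common))
    total_mass = sum(wt(x) for x in f1) + sum(wt(x) for x in f2)
    pos = shared_mass / total_mass if total_mass else 0.0
    return alpha * quant + beta * pos
```

Identical features score 1.0, disjoint features score 0.0, and partial overlap falls strictly between, matching the bound stated after Eq. (12).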
{
"text": "In addition, some Chinese phrases are similar at sematic but they don't contain any the same words such as \"\u5916\u89c2(appearance)\" and \"\u6837\u5b50(shape)\". In order to calculate the similarity of these kinds of product features, it should make full use of the context information around these product features because the phrases which modify the same sematic phrases are always similar (Tu, Zhang, Zhou & He, 2012) . Thus, the similarity calculation between product features based on context can be generalized as follows: , , \u22ef , , \u22ef ,",
"cite_spans": [
{
"start": 372,
"end": 400,
"text": "(Tu, Zhang, Zhou & He, 2012)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
{
"text": "where is the co-occurrence frequency between product feature and the th modified phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
{
"text": "Thereupon, the similarity calculation among product features is transformed into the similarity between two vectors. It is generalized as follows (Tu et al., 2012) :",
"cite_spans": [
{
"start": 146,
"end": 163,
"text": "(Tu et al., 2012)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", \u2211 \u2211 \u2211",
"eq_num": "(14)"
}
],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
{
"text": "where denotes the co-occurrence frequency between product feature and the th modification phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
{
"text": "denotes the occurrence frequency of product feature and the th modification phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
{
"text": "is the total number of the modification phrases in an existing group. And , is the similarity between product feature and product feature .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Similarity",
"sec_num": "6.2.2"
},
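Eq. (14) is the standard cosine similarity between the two co-occurrence vectors; a minimal sketch (the function name and the plain-list vector encoding are our own):

```python
import math

def context_similarity(v1, v2):
    """Cosine similarity between two co-occurrence vectors (c_{i,1..n})
    and (c_{j,1..n}) over the same group of n modification phrases."""
    num = sum(a * b for a, b in zip(v1, v2))
    den = (math.sqrt(sum(a * a for a in v1))
           * math.sqrt(sum(b * b for b in v2)))
    return num / den if den else 0.0
```

Two features modified by the same phrases in the same proportions score 1.0; features sharing no modification phrases score 0.0.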
{
"text": "Based on potential parent node searching and similarity calculating, the process of constructing product feature structure tree is presented in Figure 13 . First, picking out a product feature from product feature database, and locating it at the results of word segmentation e.g. the th consumer reviews. Second, searching the potential parent-child pairs (PCP) by calling Algorithm 1, and then comparing the parent nodes of potential PCP with the nodes of basic product structure based on similarity analysis. If exists, adding the product features (namely attributed product feature) into the corresponding nodes of basic product structure as its children. Repeating this process, until all the attributed product features are added into basic product structure. This process connects not only the attributed product features but also their quantitative descriptions such as frequency and sentiment score with their parent nodes.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 153,
"text": "Figure 13",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Figure 13. Process of constructing product feature structure tree",
"sec_num": null
},
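The attach-and-repeat loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the node class, the `similarity` callback, the candidate-parent list (standing in for the output of the paper's Algorithm 1), and the threshold value are all our own placeholders, not the paper's implementation.

```python
# Sketch of the tree-building loop: attach an attributed product feature to
# the most similar node of the basic product structure, carrying its
# frequency and sentiment score along (the four-tuple <F_i, frequency,
# score, F_j> used later in the case study).

class Node:
    def __init__(self, feature, frequency=0, score=0.0):
        self.feature = feature
        self.frequency = frequency   # occurrence count in reviews
        self.score = score           # sentiment score
        self.children = []

def attach(feature, freq, score, candidate_parents, basic_nodes,
           similarity, threshold=0.6):
    """candidate_parents: potential parent phrases found by Algorithm 1.
    basic_nodes: nodes of the basic product structure.
    Returns the parent Node the feature was attached to, or None."""
    best, best_sim = None, threshold
    for parent in candidate_parents:
        for node in basic_nodes:
            s = similarity(parent, node.feature)
            if s >= best_sim:
                best, best_sim = node, s
    if best is not None:
        best.children.append(Node(feature, freq, score))
    return best
```

In practice `similarity` would combine the literal and context measures of Section 6.2; here any callable returning a value in [0, 1] works.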
{
"text": "Product feature extraction from Chinese consumer reviews is a complicated task and is also a crucial task because its results influence the efficiency of similarity analysis and comprehensive applications directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Analysis",
"sec_num": "7."
},
{
"text": "Many factors influence the results of product feature extraction. In order to insight into these factors and provide evidences to control the process of product feature extraction effectively, we design extended experiments from different perspectives based on 5,806 Chinese consumer reviews retrieved from e-commerce platforms Taobao.com, Suning.com, and Zhongguancun.com.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Analysis",
"sec_num": "7."
},
{
"text": "The results of word segmentation provide the data resources for product feature extraction and product feature structure tree constructing. Therefore, a valid word segmentation should keep enough correctness. In this work, a two-stage optimizing word segmentation process is proposed which is presented in Figure 3 . In order to show the effectiveness and necessity of two-stage optimizing word segmentation process, we designed two experiments: the word segmentation based on tool ictcals only and the word segmentation based on our proposed two-stage optimizing word segmentation method. And then the correct rate, which is defined as the ratio between the number of correct word segmentation and the number of total word segmentation result, is used as index to evaluate the effectiveness of different word segmentation methods and different data sources such as taobao.com, suning.com, and zhongguancun.com, respectively. These results are presented in Figure 14 . Black rectangles describe the correct rates of product features that are extracted based on ictcals system only from taobao.com, suning.com, and zhongguancun.com respectively (taobao:90.16%, suning:90.5%, and zhongguancun:95.29%. suning:95.97%, and zhongguancun:97.65%.) . Obviously, the correct ratios of red rectangle are all higher than those of black rectangle.",
"cite_spans": [
{
"start": 1199,
"end": 1239,
"text": "suning:95.97%, and zhongguancun:97.65%.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 306,
"end": 314,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 957,
"end": 966,
"text": "Figure 14",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "7.1"
},
{
"text": "Furthermore, we also calculate the average correct rate of word segmentation based on the total data from taobao.com, suning.com, and zhongguancun.com which is illustrated in Figure 15 . The correct rate is also increased by 6.16%. Therefore, it is very necessary to implement two-stages optimizing word segmentation in order to increase the correctness of Chinese consumer reviews and provide valid data sources for product feature extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 184,
"text": "Figure 15",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "7.1"
},
{
"text": "The elements of rule template and its organization form determine the solution of extracting product features. Different rule templates will lead to different effectiveness of product feature extraction. In order to explore a valid rule template including elements and its organization form for our work, 10 rule templates that are developed based on different elements which are presented at Table 2 and Figure 7 and organization forms are designed. The efficiency of product feature extraction based on these 10 rule templates are evaluated respectively based on existing popular index such as precision, recall, and F-score which is illustrated in Figure 16 . We found that the precision, recall, and F-score corresponding to the 7 th rule template are 90.86%, 93.8%, and 92.31%, respectively. These are comprehensive optimal comparing with those of product feature extraction processes based on the other 9 rule templates. Thus, the rule template for CRF in this work will be established according to the 7 th rule template.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 400,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 405,
"end": 413,
"text": "Figure 7",
"ref_id": null
},
{
"start": 651,
"end": 662,
"text": "Figure 16",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Contents of Rule Template",
"sec_num": "7.2"
},
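The precision, recall, and F-score indexes used to compare the templates (and again in the window-width experiments below) can be sketched as a small routine. It assumes a set-based match between extracted and gold-standard features, which is our simplification; the paper does not specify its matching criterion.

```python
def prf(extracted, gold):
    """Precision, recall, and F-score for extracted vs. gold feature sets.

    extracted, gold: iterables of product feature strings.
    """
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                      # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score
```

For example, `prf(["camera", "battery"], ["camera", "pixel"])` yields precision 0.5, recall 0.5, and F-score 0.5.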
{
"text": "Consumer reviews are always irregular expression because the purpose of consumer commenting on products at network platform is to exchange and share information. Especially for Chinese language, its complex syntax, grammar and diversified expressions make it more serious. Therefore, a proper search range is very important in order to find the valid phrases which are correlated with the current object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Widths of Searching Window",
"sec_num": "7.3"
},
{
"text": "With these considerations, three widths of searching window which had been described in Figure 11 are designed such as 3, 5, and 7 respectively to extract the potential parent nodes for current product features. We also employ precision, recall and F-score to measure the effectiveness of finding potential parent node at different widths of searching window, and the results of them are presented in Figure 17 . It can be seen that the comprehensive result is the optimal when the width of searching window is 5 although the recall of it increases continuously along with the increasing of width. The precision and F-score will be decreased once expanding the width of searching window when the potential parent node cannot be found at given range. The reason is that the phrases that are found at expanding range maybe satisfy with the constraint conditions defined at our searching algorithms such as POS or rules, it may not correlate with the current product feature at all. Thus, it decreases the precision and the F-score in the end.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 97,
"text": "Figure 11",
"ref_id": "FIGREF6"
},
{
"start": 401,
"end": 410,
"text": "Figure 17",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Widths of Searching Window",
"sec_num": "7.3"
},
{
"text": "Considering the expression habits of Chinese language and the irregularity of consumer review, the potential parent nodes of the current product features are always omitted or implicated. Therefore, they cannot be found directly under these conditions. In order to deal with this issue, a workflow of identifying the potential parent nodes for this kind of product features is presented in Figure 18 . It is to infer the potential parent node for the current product feature according to the existing searching results namely the potential parent nodes for the same product feature at the front of consumer reviews. If the infer results are null, then the design manual for the target product which records the correlations between components/parts and their attributes is used as evidences to identify its parent node. It avoids to searching at wider range aimlessly and keep the effectiveness of searching process as well. Moreover, the exact coverage regions of searching window may also be different even for the same width of searching window. Using the width 5 of searching window as example, three forms of coverage regions are presented in Figure 19 . Accordingly, the efficiencies of searching potential parent nodes are evaluated through precision, recall, and F-score which are presented in Figure 20 . The form of coverage region in Figure (19-2) corresponds to a better result. Therefore, the practical searching range and its coverage region are set based on this result in the case study. These experiments and their results provide the evidences for our research works at word segmentation, product feature extraction, and product feature structure tree constructing. They are very significant for keeping the validity of our proposed methods. ",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 399,
"text": "Figure 18",
"ref_id": null
},
{
"start": 1148,
"end": 1157,
"text": "Figure 19",
"ref_id": null
},
{
"start": 1302,
"end": 1311,
"text": "Figure 20",
"ref_id": "FIGREF10"
},
{
"start": 1345,
"end": 1358,
"text": "Figure (19-2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Widths of Searching Window",
"sec_num": "7.3"
},
{
"text": "Consumer reviews contain rich information regarding consumer requirements and preferences. Mining valuable information effectively from consumer reviews can provide evidences for designers, manufacturers, or retailers to implement product improvement or make market strategy. With the rapid expansion of e-commerce businesses based on network platforms and clients, more and more companies have realized the importance of this kinds of utilities. Aiming at Chinese consumer reviews, this section, using intelligent mobile phone xx-F2 as example, is to elaborate the implementations of the principles and methods mentioned above, and the applications based on product features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "By using web crawler tools Goseeker and Train collector, we retrieved 5,806 Chinese consumer reviews from e-commerce platform taobao.com (2,591), suning.com (1,243), and zhongguancun.com (1,972) which are used as analysis corpuses. According to the technique framework presented in Figure 1 , we employ software ictclas, which is developed by Chinese Science Academic, as word segmentation tool to divide consumer reviews into discrete phrases and label their POS. At the same time, we employ software ltp, which is developed by Harbin Institute of Technology of China to achieve syntactic parsing. And 82,724 raw phrases are obtained. After preprocessing for these raw phrases such as stop words, typos, and meaningless phrases, a two-stages optimization word segmentation process is performed presented at Section 3.2 to make the results of word segmentation more suitable for our research tasks, and the key parameters of these optimization phases are set presented in Table 3 . Finally, 50,785 valid phrases are obtained. These phrases are used as the data resources (corpus) for product feature extraction. In order to extract product features from these results of word segmentation effectively based on CRF, 9,081 phrases obtained from 1,000 consumer reviews are used as train set. We invited 2 engineers from mobile phone development department and 1 linguist from the literature of our school to annotate these phrases manually including feature, type, and opinion. It took two days, 8 hours per day to implement this task. At the same time, rule template is developed according to the analysis results of experiment at Section 7.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 1",
"ref_id": null
},
{
"start": 972,
"end": 981,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "Based on train set and rule template, the model of CRF is trained through a machine learning process. And then, product features for xx-F2 product are extracted from 50,785 valid phrases of 5,806 consumer reviews based on CRF. Finally, 80 product features are obtained after merging synonym, homoionym, and alternative names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "Product feature extraction is a very crucial step for ensuring the effectiveness of the next comprehensive analysis and application based on product features. In order to verify the validity of our proposed methods, we design a five-fold intersection experiments by using 5,000 phrases from the results of word segmentation. These 5,000 phrases are divided into 5 subsets which are labeled 1, 2, 3, 4, and 5 respectively, and each subset contains 1,000 phrases. The effectiveness of product feature extraction based on five-fold intersection experiments are measured through indexes precision, recall, F-score. At the same time, we calculated the precision, recall, and F-score of product feature extraction based on the methods proposed by Jakob's work, which is the closest with our works at the aspect of product feature extraction, by using the same phrase set. Finally, we compare the results obtained based on our methods with those obtained based on the methods of Jakob's work (Jakob & Gurevych, 2010) which are presented in Table 4 . Obviously, the precision, recall, and F-score of our methods are all better than those of Jakon's work. It denotes that our methods of extracting product features from consumer reviews are valid, especially for Chinese consumer reviews. Based on the product features extracted above and the results of word segmentation, the frequency of each product feature is calculated, so does the sentiment score of it. And the potential parent nodes of the attributed product features are identified based on the Algorithm 1 presented in Figure 12 and the workflow presented in Figure 18 . As a result, the attributed product features are added into the basic product structure of product xx-F2. Thus, product feature structure tree for xx-F2 is established which is illustrated in Figure 21 . The unit of product feature structure tree is a four tuple: < F i , frequency, score, F j > where F i is parent node and F j is child node. 
Frequency denotes the times of product feature F j appearing at consumer reviews. Score denotes the sentiment evaluation of consumer to product feature F j . Based on the product feature structure tree and the data on it including frequency and sentiment score, the influence or interaction relations between the parent nodes of product feature structure tree and its child nodes can be inferred conveniently.",
"cite_spans": [
{
"start": 985,
"end": 1009,
"text": "(Jakob & Gurevych, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1033,
"end": 1040,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 1571,
"end": 1580,
"text": "Figure 12",
"ref_id": null
},
{
"start": 1611,
"end": 1620,
"text": "Figure 18",
"ref_id": null
},
{
"start": 1815,
"end": 1824,
"text": "Figure 21",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "In this work, a Bayes theory based application is investigated based on product feature structure tree that is to infer the factors (namely child nodes) of leading to the negative valuations or low sentiment scores of their parent nodes. The mathematic description of this inferring process is as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "For a Bayes network which is concerned on a set of variables , , \u22ef , , it contains two aspects: \u2460 network structure in which variables are conditional independency, and \u2461 local probability distribution which connects with each variable. Let variable corresponds to a node of Bayes network, and is the parent node of variable , then the probability of child node leading to the low sentiment score of its parent node can be generalized as following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "| | (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "Where is the ratio of unsatisfied consumer reviews (L) for product feature relative to all consumer reviews. It is calculated as following. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "Where , denotes the times of product feature appearing on the sth consumer review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": ", , denotes the times of product feature appearing on the sth consumer review that has negative evaluating on the product feature .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "8."
},
{
"text": "And, | denotes the probability that a child node (product feature) being evaluated as negative (described as L) leads to its parent node (product feature) being evaluated as a poor feature (described as N) by consumers. Thus, | can be generalized as following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 21. Product feature structure tree for product xx-F2",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "| \u2211 , \u22ef , , , , \u22ef , , ,",
"eq_num": "(17)"
}
],
"section": "Figure 21. Product feature structure tree for product xx-F2",
"sec_num": null
},
{
"text": "Where , \u22ef , denotes the probability that the child nodes (product features)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 21. Product feature structure tree for product xx-F2",
"sec_num": null
},
{
"text": ", \u22ef , being evaluated as positive (described as H), negative (described as L), and neutral (described as M) lead to its parent node (product feature) being evaluated as a poor feature (described as N) by consumers. The probability sum of these child nodes being evaluated as positive (H), negative (L), and neutral (M) respectively denotes the probability of parent node (product feature) being evaluated as a poor feature (described as N) when the child node (product feature)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 21. Product feature structure tree for product xx-F2",
"sec_num": null
},
{
"text": "is evaluated as negative (described as L) namely .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 21. Product feature structure tree for product xx-F2",
"sec_num": null
},
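The inference of Eqs. (15)-(17) can be sketched as a short Bayes computation. This is a simplified illustration, not the paper's implementation: it assumes the conditional term P(N | L_j) has already been marginalized over the other children's states (Eq. 17), and it normalizes over the candidate child causes to obtain P(N); the example numbers are invented, not the values of Table 5.

```python
# Hedged sketch of the blame-inference step: given each child feature's
# prior of being evaluated negative, Eq. (16), and the probability that the
# parent is evaluated poor given that child is negative, Eq. (17), compute
# the posterior P(child negative | parent poor), Eq. (15), for each child.

def posterior_blame(priors, cond_neg):
    """priors[j]   = P(L_j): child j evaluated negative (Eq. 16).
    cond_neg[j]    = P(N | L_j): parent evaluated poor given child j
                     negative, already marginalized (Eq. 17).
    Returns {j: P(L_j | N)} normalized over the candidate causes."""
    joint = {j: cond_neg[j] * priors[j] for j in priors}
    evidence = sum(joint.values())        # stands in for P(N)
    return {j: v / evidence for j, v in joint.items()}
```

With illustrative inputs, the child with the largest product of prior and conditional term receives the highest posterior blame, mirroring how the microphone is singled out in the case study.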
{
"text": "Using substructure \u9001 \u8bdd \u5668 (transmitter), which has a relative low sentiment score according to our statistical results and consists of three child nodes such as \u9ea6 \u514b \u98ce (microphone), \u62fe\u97f3\u5668(pickup) and \u8bdd\u7b52(mike), as example, the correlation matrix between the child node evaluations and the parent node evaluations from consumers is established by experts based on their observations on 3,000 consumer reviews which is presented Table 5 . On the basis of this, the influences between parent nodes and their child nodes can be calculated based on formulas (15)-(17). The results shown that the probabilities of a relative low sentiment scores of \u9001\u8bdd\u5668(transmitter) causing by its child nodes such as \u9ea6\u514b\u98ce (microphone), \u62fe\u97f3\u5668(pickup) and \u8bdd\u7b52(mike) are 0.415, 0.327, and 0.258, respectively. It can be seen that this relative low sentiment score of \u9001\u8bdd\u5668(transmitter) is the most likely caused by \u9ea6\u514b\u98ce(microphone). Thus, designers or manufacturers should improve the \u9ea6\u514b\u98ce (microphone) for the future in order to increase the satisfactions of consumers for their products, and gain profit margins under fierce market competition in the end.",
"cite_spans": [],
"ref_spans": [
{
"start": 422,
"end": 429,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 21. Product feature structure tree for product xx-F2",
"sec_num": null
},
{
"text": "Similarly, the influences among other nodes on the product feature structure tree can also be analyzed in this way. It will provide valuable evidences for the designers, manufacturers, or retailers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 5. Observation results of influence among product features",
"sec_num": null
},
{
"text": "The main goal of Online reviews from consumers is to exchange or share information among them. The languages from consumers are characterized as oral, haphazard, and irregular syntax. And some new words or terms are also created or introduced continuously, specifically for young people. Therefore, it is necessary to adopt two-stages optimization method for word segmentation. This process can deal with the error results of direct word segmentation first, and find some new words or terms. For example, \"\u5206\u8fa8\u7387(pixel)\", in fact, is a kind of attribute descriptions of intelligent electronic products. So \"\u5206\", \"\u5206\u8fa8\", and \"\u8fa8\u7387\" are all error results of word segmentation but these results exactly exist in practice. Obviously, it is necessary to delete these error phrases from the results of original word segmentation process in order to keep the accuracy of our research and analysis works. Based on the results of original word segmentation, the correct form namely \"\u5206\u8fa8\u7387(pixel)\" can only be generated through word reorganization. However, new error forms can also be generated such as \"*\u5206\", \"\u5206\u8fa8\", and \"\u7387*\", etc. Through three filter algorithms such as frequency filtering, cohesive filtering, and left & right entropy filtering, most of these error results can be deleted from the results of original word segmentation. In addition, some new terms or phrases can also be found such as \"\u4e91\u5b58\u50a8(cloud storage)\" and \"\u8bed\u97f3\u8bc6\u522b(speech recognition)\", etc. All these new words and terms, along with the correct results of word segmentation, are input into user dictionary again which is used to guide word segmentation at practice. And then, the process of word segmentation will be restarted based on this extended user dictionary. As a result, the correct rate of word segmentation is increased remarkably. 
For example, we used 1,000 consumer reviews as experiment corpus, and invited two development engineers of intelligent mobile phone and one linguist to divide reviews into phrases and annotate their POSs manually. The results are used as reference to evaluate the efficiency of word segmentation methods. And then, two kinds of word segmentation processes such as word segmentation based on ictclas tool directly and our proposed two-stages optimizing methods. Comparing with the reference results obtained from experts, the results generated from our two-stages optimizing method are more accuracy than those of ictclas tool directly which had been explained in Figure 14 and Figure 15 . Therefore, two-stages optimizing word segmentation method for Chinese consumer reviews is valid and necessary. It ensures to provide high quality data for the next product feature extraction analysis and application.",
"cite_spans": [],
"ref_spans": [
{
"start": 2457,
"end": 2466,
"text": "Figure 14",
"ref_id": null
},
{
"start": 2471,
"end": 2480,
"text": "Figure 15",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "9."
},
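Two of the three filters described above can be sketched compactly: cohesiveness as a pointwise-mutual-information score of a candidate phrase against its parts, and left & right entropy as the diversity of neighboring words. The function names and threshold values are illustrative assumptions; the paper does not give its exact formulas or thresholds.

```python
import math
from collections import Counter

def cohesion(p_phrase, p_left, p_right):
    """PMI-style cohesiveness of a two-part candidate phrase:
    log( P(phrase) / (P(left part) * P(right part)) )."""
    return math.log(p_phrase / (p_left * p_right))

def boundary_entropy(neighbors):
    """Entropy of the words observed on one side of a candidate phrase;
    high entropy means many different neighbors, i.e. a real boundary."""
    counts = Counter(neighbors)
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def keep_candidate(p_phrase, p_left, p_right, left_nb, right_nb,
                   min_cohesion=2.0, min_entropy=0.5):
    """Accept a candidate new word only if it is cohesive and has
    diverse neighbors on both sides (thresholds are assumptions)."""
    return (cohesion(p_phrase, p_left, p_right) >= min_cohesion
            and boundary_entropy(left_nb) >= min_entropy
            and boundary_entropy(right_nb) >= min_entropy)
```

A fragment like "分辨" inside "分辨率" fails the entropy test, since its right neighbor is almost always "率", while a genuine new term sees many different neighbors on both sides.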
{
"text": "Product feature extraction is a complex task especially for Chinese consumer reviews, and also a crucial stage that will influence the effectiveness of applications based on product features directly. Therefore, product feature extraction in this work adopted supervised product feature extraction strategy due to its high precision. Thus, the core work is to design a reasonable rule template. Besides the elements of existing traditional rule templates, the rule template developed in this work added two kinds of elements such as governing word and opinion word to support product feature extraction and sentiment identification. By doing these, some implicit product features or sentiment expresses can be detected by combining these new adding elements with the existing elements of existing rule templates which were presented in Figure 8 . For example, \"\u6760\u6760\u7684(ganggangde means very good)\" is a recent popular express which describes a kind of positive evaluation. It is an opinion word but it isn't contained at user dictionary exactly. Thereupon, we added it into extended user dictionary, and annotated it as opinion word manually at train set. And then, the implicit product features concerned with it can be extracted conveniently, and their sentiment score can be calculated accurately. In addition, \"\u6218\u6597\u673a(fighter)\" is another popular express recently. In essence, it is a noun phrase. But it is always used as an adjective phrase to modify a product feature around it and express a positive sentiment. Likeness, this phrase is also not contained at user dictionary. Therefore, it is high significant for product feature extraction from Chinese consumer reviews to find new words especially for opinion words to extend existing user dictionary through two-stages optimizing word segmentation process, and annotate the opinion attributes of phrases at train set and rule template. 
After doing this, the implicit product features and their sentiment evaluation can be processed accurately. These were verified in Figure 16 which presents the efficiency of product feature extraction based on 10 different rule templates, and the 7 th rule template which was proposed in this work has better results than those of rule templates.",
"cite_spans": [],
"ref_spans": [
{
"start": 836,
"end": 844,
"text": "Figure 8",
"ref_id": null
},
{
"start": 2020,
"end": 2029,
"text": "Figure 16",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "9."
},
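The expanded rule template can be illustrated as a per-token feature map combining the five element types: phrase, POS, dependency relation, governing word, and opinion flag, plus neighboring context. The dict encoding and field names below are our own assumptions about a plausible CRF feature representation, not the paper's exact template.

```python
# Sketch of expanded-rule-template features for one token of a segmented
# review, as they might be fed to a linear-chain CRF trainer. Each token is
# a dict with keys phrase/pos/dep/gov/opinion (all names are illustrative).

def token_features(tokens, i, window=1):
    """Build the feature dict for tokens[i], including POS context of
    neighbors within the given window."""
    t = tokens[i]
    feats = {
        "phrase": t["phrase"],
        "pos": t["pos"],          # part-of-speech tag
        "dep": t["dep"],          # dependency relation label
        "gov": t["gov"],          # governing word from the parse
        "opinion": t["opinion"],  # manually annotated opinion-word flag
    }
    for d in range(1, window + 1):
        if i - d >= 0:
            feats[f"pos-{d}"] = tokens[i - d]["pos"]
        if i + d < len(tokens):
            feats[f"pos+{d}"] = tokens[i + d]["pos"]
    return feats
```

The opinion flag is what lets the model link an infrequent opinion word such as "杠杠的" to the implicit product feature it evaluates.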
{
"text": "In addition, product features are always internal correlated with each other. For example, \"\u6444\u50cf\u5934(camera)\" and \"\u50cf\u7d20(pixel)\" are two product features, and may appear at different consumer reviews discretely. However, product feature \"\u50cf\u7d20(pixel)\" is one of the attributes of product feature \"\u6444\u50cf\u5934(camera)\" in essence. Therefore, the internal correlation among them is an inevitable existence. Unfortunately, the existing researches don't explore this fact. This paper discussed this issue. Product feature structure tree is the representation form of the internal correlations among product features. It integrates product features which distributes at consumer reviews concretely into a whole object, and makes the comprehensive applications based on product features feasible. However, the numbers between parent nodes (product features) and its child nodes (product features), according to our observations, don't satisfy with cumulative calculation law both frequency and sentiment score e.g. between \"\u9001\u8bdd\u5668 (transmitter)\" and {\"\u9ea6\u514b\u98ce(microphone)\", \"\u62fe\u97f3\u5668(pickup)\", and \"\u8bdd\u7b52(mike)\"}. The reason is that many consumers provide a snippet text description for products only for the goal of completing evaluation task required by platform or system. As a result, many product features at consumer reviews are not evaluated by consumers at all. Therefore, the influences among product features cannot be reflected by the numbers on product feature structure tree directly. For this reason, a method of inferring the influences among product features based on product feature structure tree is proposed by using Bayes theory. This method uses the sentiment scores of product features as evidences to identify the product features that need to be analyzed in depth because of its low or negative evaluations from consumers. 
At the same time, it makes full use of the practical evaluation results of each review from consumers. Therefore, the inferring results are more convince. For example, product feature \"\u9001\u8bdd\u5668 (transmitter)\" is determined as the object that need to be inferred the elements leading to its An Approach to Extract Product Features from 89 Chinese Consumer Reviews and Establish Product Feature Structure Tree low or negative sentiment score. According to the data on product feature structure tree, child node (product feature) \"\u62fe\u97f3\u5668(pickup)\" may be the potential element because of its lowest sentiment score. However, the inferring result from our proposed method based on Bayes theory is that child node \"\u9ea6\u514b\u98ce(microphone)\" has the maximal possible of leading to low sentiment score of its parent node or negative evaluation. It is in accordance with fact. Even if the sentiment scores of \"\u9ea6\u514b\u98ce(microphone)\" are not the lowest while the frequency of product feature received negative evaluation are very high which means a large amount of consumers pay attention on this product feature and give negative evaluation on this product feature. Therefore, this leads to a lower sentiment score of its parent nodes. From the perspective of probability theory and mathematical statistics, a minority events always have no statistic means in general. Therefore, product feature structure tree makes the research and analysis on the internal relations among product features feasible, and the inferring method based on Bayes theory is a valid method to keep the applications more reasonable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "9."
},
{
"text": "A large volume of product reviews provides valuable consumer feedback. In the past decade, many researchers in computer science and information management have paid much attention to extracting product features from consumer reviews and analyzing consumers' opinion orientation toward those features. This paper, targeting Chinese consumer reviews, investigates product feature extraction and applications at the product feature level. This is highly significant given the huge e-commerce market emerging in China.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10."
},
{
"text": "In this work, a technical overview of extracting product features from Chinese consumer reviews is proposed, in which two-stage optimizing word segmentation, product feature extraction based on CRF, and product feature structure tree establishment are investigated. The two-stage optimizing word segmentation process mainly consists of phrase reconstruction, frequency filtering, cohesiveness filtering, and left & right entropy filtering. It increases the accuracy of word segmentation by finding new phrases to expand the user dictionary and performing a second word segmentation pass. Likewise, an expanded rule template is proposed in which governing word and opinion word annotation are added to detect implicit product features and infrequent opinion words. It increases both the efficiency of product feature extraction from Chinese consumer reviews and the accuracy of sentiment evaluation for product features. At the same time, two quantitative characteristics are defined to describe the extent of consumer preference for a product feature. Furthermore, the product feature structure tree is established based on the inevitable internal correlations among product features. An algorithm is proposed to find the potential parent nodes of the current product features from the word segmentation results, and different similarity functions are employed to evaluate the similarity between the potential parent nodes and the nodes of the basic product structure, in order to add attribute product features to the basic product structure. On this basis, an inference application based on the product feature structure tree is explored to identify the potential factors that lead to the low sentiment score of a parent node by using Bayes' theorem. This is highly significant for designers, manufacturers, and retailers implementing product updates, quality improvement, market strategies, etc. Moreover, comparative experiments and in-depth analyses are conducted on 5,806 real consumer reviews, and their results provide evidence for our research. Finally, the case study verifies the effectiveness of our proposed methods and applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10."
},
{
"text": "This research can be extended in many directions, such as product quality and risk management, and the dynamic evolution of the influences among product features. These are our future research directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10."
},
{
"text": "The phrase \"\u53ef\u4ee5\" expresses a positive opinion meaning \"good\" in Chinese. However, its literal meaning corresponds to the English word \"can\". Its POS is defined as a verb in the word segmentation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is an auxiliary word in Chinese; there is no corresponding word in English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is an auxiliary word in Chinese; there is no corresponding word in English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The phrase \"\u53ef\u4ee5\" expresses a positive opinion meaning \"good\" in Chinese. However, its literal meaning corresponds to the English word \"can\". Therefore, its POS is defined as a verb in the word segmentation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project was supported by the Natural Science Foundation of China (NSFC) under contracts 51405462 and 51175486, and by the Zhejiang Provincial Natural Science Foundation of China under contracts LY16G010006 and LQ15G010005.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fast algorithms for mining association rules in large databases",
"authors": [
{
"first": "R",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Srikant",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 20th international conference on very large data bases (VLDB '94)",
"volume": "",
"issue": "",
"pages": "487--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agrawal, R. & Srikant, R. (1994). Fast algorithms for mining association rules in large databases. In Proceedings of 20th international conference on very large data bases(VLDB '94), 487-499.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Unsupervised Approach for Feature Based Sentiment Analysis of Product Reviews",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Bahu",
"suffix": ""
},
{
"first": "S",
"middle": [
"N"
],
"last": "Das",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Scientific Research Engineering & Technology",
"volume": "4",
"issue": "5",
"pages": "484--489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahu, S. M. & Das, S. N. (2015). An Unsupervised Approach for Feature Based Sentiment Analysis of Product Reviews. International Journal of Scientific Research Engineering & Technology, 4(5), 484-489.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Linguistic Template Extraction for Recognizing Reader-Emotion",
"authors": [
{
"first": "Y",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Chu",
"suffix": ""
},
{
"first": "C",
"middle": [
"C"
],
"last": "Chen",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Computational Linguistics and Chinese Language Processing",
"volume": "21",
"issue": "1",
"pages": "29--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, Y. C., Chu, C. H., Chen, C. C. & Hsu, W. L. (2016). Linguistic Template Extraction for Recognizing Reader-Emotion. International Journal of Computational Linguistics and Chinese Language Processing, 21(1), 29-50.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Comparison of feature-level learning methods for mining online consumer reviews",
"authors": [
{
"first": "L",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "L",
"middle": [
"L"
],
"last": "Qi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2012,
"venue": "Expert System with Applications",
"volume": "39",
"issue": "10",
"pages": "9588--9601",
"other_ids": {
"DOI": [
"10.1016/j.eswa.2012.02.158"
]
},
"num": null,
"urls": [],
"raw_text": "Chen, L., Qi, L. L. & Wang, F. (2012). Comparison of feature-level learning methods for mining online consumer reviews. Expert System with Applications, 39(10), 9588-9601. doi: 10.1016/j.eswa.2012.02.158",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing(EMNLP '09)",
"volume": "2",
"issue": "",
"pages": "590--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choi, Y. & Cardie, C. (2009). Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing(EMNLP '09), 2, 590-598.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Joint Learning of Entity Linking Constraints Using a Markov-Logic Network",
"authors": [
{
"first": "H",
"middle": [
"J"
],
"last": "Dai",
"suffix": ""
},
{
"first": "R",
"middle": [
"T H"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Computational Linguistics and Chinese Language Processing",
"volume": "19",
"issue": "1",
"pages": "11--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai, H. J., Tsai, R. T. H. & Hsu, W. L. (2014). Joint Learning of Entity Linking Constraints Using a Markov-Logic Network. International Journal of Computational Linguistics and Chinese Language Processing, 19(1), 11-32.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mining the peanut gallery: opinion extraction and semantic classification of product review",
"authors": [
{
"first": "K",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Pennock",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 12th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dave, K., Lawrence, S. & Pennock, D. M. (2003). Mining the peanut gallery: opinion extraction and semantic classification of product review. In Proceedings of the 12th",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "international conference on World Wide Web (WWW 2003)",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "519--528",
"other_ids": {
"DOI": [
"10.1145/775152.775226"
]
},
"num": null,
"urls": [],
"raw_text": "international conference on World Wide Web (WWW 2003), 519-528. doi: 10.1145/775152.775226",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The digitization of word of mouth: promise and challenges of online feedback mechanisms",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dellarocas",
"suffix": ""
}
],
"year": 2003,
"venue": "Management Science",
"volume": "49",
"issue": "10",
"pages": "1407--1424",
"other_ids": {
"DOI": [
"10.1287/mnsc.49.10.1407.17308"
]
},
"num": null,
"urls": [],
"raw_text": "Dellarocas, C. (2003). The digitization of word of mouth: promise and challenges of online feedback mechanisms. Management Science, 49(10), 1407-1424. doi: 10.1287/mnsc.49.10.1407.17308",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Do online reviews matter? An empirical investigation of panel data",
"authors": [
{
"first": "W",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "A",
"middle": [
"B"
],
"last": "Whinston",
"suffix": ""
}
],
"year": 2008,
"venue": "Decision Support System",
"volume": "45",
"issue": "4",
"pages": "1007--1016",
"other_ids": {
"DOI": [
"10.1016/j.dss.2008.04.001"
]
},
"num": null,
"urls": [],
"raw_text": "Duan, W., Gu, B. & Whinston, A. B. (2008). Do online reviews matter? An empirical investigation of panel data. Decision Support System, 45(4), 1007-1016. doi: 10.1016/j.dss.2008.04.001",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Examining the relationship between reviews and sales: the role of reviewer identity discloser in electronic markets",
"authors": [
{
"first": "C",
"middle": [],
"last": "Forman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghose",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wiesenfeld",
"suffix": ""
}
],
"year": 2008,
"venue": "Information System Research",
"volume": "19",
"issue": "3",
"pages": "291--313",
"other_ids": {
"DOI": [
"10.1287/isre.1080.0193"
]
},
"num": null,
"urls": [],
"raw_text": "Forman, C., Ghose, A. & Wiesenfeld, B. (2008). Examining the relationship between reviews and sales: the role of reviewer identity discloser in electronic markets. Information System Research, 19(3), 291-313. doi: 10.1287/isre.1080.0193",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using online conversations to study word of mouth communication",
"authors": [
{
"first": "D",
"middle": [],
"last": "Godes",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mayzlin",
"suffix": ""
}
],
"year": 2004,
"venue": "Marketing Science",
"volume": "23",
"issue": "4",
"pages": "545--560",
"other_ids": {
"DOI": [
"10.1287/mksc.1040.0071"
]
},
"num": null,
"urls": [],
"raw_text": "Godes, D. & Mayzlin, D. (2004). Using online conversations to study word of mouth communication. Marketing Science, 23(4), 545-560. doi:10.1287/mksc.1040.0071",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Htay",
"suffix": ""
},
{
"first": "K",
"middle": [
"T"
],
"last": "Lynn",
"suffix": ""
}
],
"year": 2013,
"venue": "The Scientific World Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1155/2013/394758"
]
},
"num": null,
"urls": [],
"raw_text": "Htay, S. S. & Lynn, K. T. (2013). Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews. The Scientific World Journal, Article ID 394758. doi: 10.1155/2013/394758",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 10th ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {
"DOI": [
"10.1145/1014052.1014073"
]
},
"num": null,
"urls": [],
"raw_text": "Hu, M. & Liu, B. (2004a). Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIGKDD international conference on knowledge discovery and data mining, 168-177. doi: 10.1145/1014052.1014073",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining opinion features in customer reviews",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 19th national conference on Artificial intelligence",
"volume": "",
"issue": "",
"pages": "755--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu, M. & Liu, B. (2004b). Mining opinion features in customer reviews. In Proceedings of the 19th national conference on Artificial intelligence, 755-760.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Product recommendation algorithm based on users' reviews mining",
"authors": [
{
"first": "Z",
"middle": [
"K"
],
"last": "Hu",
"suffix": ""
},
{
"first": "X",
"middle": [
"L"
],
"last": "Zheng",
"suffix": ""
},
{
"first": "Y",
"middle": [
"F"
],
"last": "Wu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Zhejiang University (Engineering Science)",
"volume": "47",
"issue": "8",
"pages": "1475--1485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu, Z. K., Zheng, X. L., Wu, Y. F. & Chen, D.-r. (2013). Product recommendation algorithm based on users' reviews mining. Journal of Zhejiang University (Engineering Science), 47(8), 1475-1485. [In Chinese]",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Extracting opinion targets in a single-and cross-domain setting with conditional random fields",
"authors": [
{
"first": "N",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1035--1045",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakob, N. & Gurevych, I. (2010). Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP '10), 1035-1045.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Enhancement of Feature Engineering for Conditional Random Field Learning in Chinese Word Segmentation Using Unlabeled Data",
"authors": [
{
"first": "M",
"middle": [
"T.-J."
],
"last": "Jiang",
"suffix": ""
},
{
"first": "C.-W",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "T.-H",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "R",
"middle": [
"T.-H."
],
"last": "Tsai",
"suffix": ""
},
{
"first": "W.-L",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal of Computational Linguistics & Chinese Language Processing",
"volume": "17",
"issue": "3",
"pages": "45--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, M. T.-J., Shih, C.-W., Yang, T.-H., Kuo, C.-H., Tsai, R. T.-H. & Hsu, W.-L. (2012). Enhancement of Feature Engineering for Conditional Random Field Learning in Chinese Word Segmentation Using Unlabeled Data. International Journal of Computational Linguistics & Chinese Language Processing, 17(3), 45-86.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Identifying comparative sentences in text documents",
"authors": [
{
"first": "N",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 29th annual international ACM SIGIR conference on research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "244--251",
"other_ids": {
"DOI": [
"10.1145/1148170.1148215"
]
},
"num": null,
"urls": [],
"raw_text": "Jindal, N. & Liu, B. (2006). Identifying comparative sentences in text documents. In Proceedings of the 29th annual international ACM SIGIR conference on research and development in information retrieval, 244-251. doi: 10.1145/1148170.1148215",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Collecting evaluative expressions for opinion extraction",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tateishi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Fukushima",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the first international joint conference on natural language processing",
"volume": "",
"issue": "",
"pages": "596--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kobayashi, N., Inui, K., Matsumoto, Y., Tateishi, K. & Fukushima, T. (2004). Collecting evaluative expressions for opinion extraction. In Proceedings of the first international joint conference on natural language processing (IJCNLP-04), 596-605.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Opinion extraction using a learning-based anaphora resolution technique",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the second international joint conference on natural language processing",
"volume": "",
"issue": "",
"pages": "173--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kobayashi, N., Iida, R., Inui, K. & Matsumoto, Y. (2005). Opinion extraction using a learning-based anaphora resolution technique. In Proceedings of the second international joint conference on natural language processing (IJCNLP-05), 173-178.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 18th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, J. D., McCallum, A. & Pereira, F. C. N. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the 18th International Conference on Machine Learning(ICML 01), 282-289.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Structure aware review mining and summarization",
"authors": [
{
"first": "F",
"middle": [
"T"
],
"last": "Li",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Huang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Y.-J",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on computational Linguistics",
"volume": "",
"issue": "",
"pages": "653--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, F. T., Han, C., Huang, M. L., Zhu, X., Xia, Y.-J., Zhang, S. & Yu, H. (2010). Structure aware review mining and summarization. In Proceedings of the 23rd International Conference on computational Linguistics, 653-661.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Mining features of products from Chinese customer online reviews",
"authors": [
{
"first": "S",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Y",
"middle": [
"J"
],
"last": "Li",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Law",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Management Sciences in China",
"volume": "12",
"issue": "2",
"pages": "142--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, S., Ye, Q., Li, Y. J. & Law, R. (2009). Mining features of products from Chinese customer online reviews. Journal of Management Sciences in China, 12(2), 142-152. [In Chinese]",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Research on Key Technologies of Chinese Dependency Parsing (Doctoral dissertation",
"authors": [
{
"first": "Z",
"middle": [
"H"
],
"last": "Li",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Z. H. (2013). Research on Key Technologies of Chinese Dependency Parsing (Doctoral dissertation, Harbin Institute of Technology). Retrieved from http://hlt.suda.edu.cn/~zhli/papers/zhenghua-2013-phd-thesis.pdf. [In Chinese]",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Opinion observer: Analyzing and comparing opinions on the Web",
"authors": [
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 World Wide Web conference (WWW 05)",
"volume": "",
"issue": "",
"pages": "342--351",
"other_ids": {
"DOI": [
"10.1145/1060745.1060797"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, B., Hu, M. & Cheng, J. (2005). Opinion observer: Analyzing and comparing opinions on the Web. In Proceedings of 2005 World Wide Web conference(WWW 05), 342-351. doi: 10.1145/1060745.1060797",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Keywords extraction algorithm based on semantic dictionary and lexical chain",
"authors": [
{
"first": "D",
"middle": [
"Y"
],
"last": "Liu",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Zhejiang University of Technology",
"volume": "41",
"issue": "5",
"pages": "545--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, D. Y. & Wang, L. F. (2013). Keywords extraction algorithm based on semantic dictionary and lexical chain. Journal of Zhejiang University of Technology, 41(5), 545-551. [In Chinese]",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A Novel Feature-based Method for Sentiment Analysis of Chinese Product Reviews",
"authors": [
{
"first": "L",
"middle": [
"Z"
],
"last": "Liu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [
"C"
],
"last": "Li",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "China Communications",
"volume": "11",
"issue": "3",
"pages": "154--164",
"other_ids": {
"DOI": [
"10.1109/CC.2014.6825268"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, L. Z., Song, W., Wang, H. S., Li, C. C. & Lu, J. L. (2014). A Novel Feature-based Method for Sentiment Analysis of Chinese Product Reviews. China Communications, 11(3), 154-164. doi: 10.1109/CC.2014.6825268",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Opinion Searching in Multi-Product Reviews",
"authors": [
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Sixth IEEE International Conference on Computer and Information Technology (CIT'06)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/CIT.2006.132"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, T., Wu, G. & Yao, T. (2006). Opinion Searching in Multi-Product Reviews. In Proceedings of the Sixth IEEE International Conference on Computer and Information Technology (CIT'06). doi: 10.1109/CIT.2006.132",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Theories and Methods of Chinese Automatic Syntactic Parsing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Ma",
"suffix": ""
}
],
"year": 2009,
"venue": "Contemporary linguistics",
"volume": "11",
"issue": "",
"pages": "100--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, T. & Ma, J. H. (2009). Theories and Methods of Chinese Automatic Syntactic Parsing. Contemporary linguistics, 11(2), 100-112. [In Chinese]",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Effective mining product features from Chinese review based on CRF",
"authors": [
{
"first": "P",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Cai",
"suffix": ""
},
{
"first": "Y",
"middle": [
"T"
],
"last": "Wu",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Engineering & Science",
"volume": "36",
"issue": "2",
"pages": "359--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lv, P., Zhong, L., Cai, D. B. & Wu, Y. T. (2014). Effective mining product features from Chinese review based on CRF. Computer Engineering & Science, 36(2), 359-366. [In Chinese]",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Product features extraction of online reviews based on LDA model",
"authors": [
{
"first": "B",
"middle": [
"Z"
],
"last": "Ma",
"suffix": ""
},
{
"first": "Z",
"middle": [
"J"
],
"last": "Yan",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Integrated Manufacturing Systems",
"volume": "20",
"issue": "1",
"pages": "96--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma, B. Z. & Yan, Z. J. (2014). Product features extraction of online reviews based on LDA model. Computer Integrated Manufacturing Systems, 20(1), 96-103. [In Chinese]",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Mining fine grained opinions by using probabilistic models and domain knowledge",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zeng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology",
"volume": "",
"issue": "",
"pages": "358--363",
"other_ids": {
"DOI": [
"10.1109/WI-IAT.2010.193"
]
},
"num": null,
"urls": [],
"raw_text": "Miao, Q., Li, Q. & Zeng, D. (2010). Mining fine grained opinions by using probabilistic models and domain knowledge. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 358-363. doi: 10.1109/WI-IAT.2010.193",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Features-level Sentiment Analysis of Movie Reviews",
"authors": [
{
"first": "C",
"middle": [
"P"
],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Y",
"middle": [
"B"
],
"last": "Liu",
"suffix": ""
},
{
"first": "S",
"middle": [
"Q"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "X",
"middle": [
"H"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2015,
"venue": "Advanced Science and Technology Letters",
"volume": "81",
"issue": "",
"pages": "110--113",
"other_ids": {
"DOI": [
"10.14257/astl.2015.81.23"
]
},
"num": null,
"urls": [],
"raw_text": "Ouyang, C. P., Liu, Y. B., Zhang, S. Q. & Yang, X. H. (2015). Features-level Sentiment Analysis of Movie Reviews. Advanced Science and Technology Letters, 81, 110-113. doi: 10.14257/astl.2015.81.23",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Extracting product features and opinions from reviews",
"authors": [
{
"first": "A",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing(HLT '05)",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {
"DOI": [
"10.3115/1220575.1220618"
]
},
"num": null,
"urls": [],
"raw_text": "Popescu, A. & Etzioni, O. (2005). Extracting product features and opinions from reviews. In Proceedings of the conference on human language technology and empirical methods in natural language processing(HLT '05), 339-346. doi: 10.3115/1220575.1220618",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A grammatical dependency improved CRF learning approach for integrated product extraction",
"authors": [
{
"first": "H",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "X",
"middle": [
"Q"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 2nd International Conference on Computer Science and Network Technology(ICCSNT)",
"volume": "",
"issue": "",
"pages": "1787--1794",
"other_ids": {
"DOI": [
"10.1109/ICCSNT.2012.6526267"
]
},
"num": null,
"urls": [],
"raw_text": "Song, H., Yan, Y. & Liu, X. Q. (2012). A grammatical dependency improved CRF learning approach for integrated product extraction. In Proceedings of 2nd International Conference on Computer Science and Network Technology(ICCSNT), 1787-1794. doi: 10.1109/ICCSNT.2012.6526267",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Mining generalized association rules",
"authors": [
{
"first": "R",
"middle": [],
"last": "Srikant",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 21st international conference on very large data bases(VLDB '95)",
"volume": "",
"issue": "",
"pages": "407--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srikant, R. & Agrawal, R. (1995). Mining generalized association rules. In Proceedings of the 21st international conference on very large data bases(VLDB '95), 407-419.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Extracting Structured Information from Chinese Wikipedia and Measuring Relatedness between Words",
"authors": [
{
"first": "X",
"middle": [
"H"
],
"last": "Tu",
"suffix": ""
},
{
"first": "H",
"middle": [
"C"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "K",
"middle": [
"F"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "T",
"middle": [
"T"
],
"last": "He",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Chinese Information Processing",
"volume": "26",
"issue": "3",
"pages": "109--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tu, X. H., Zhang, H. C., Zhou, K. F. & He, T. T. (2012). Extracting Structured Information from Chinese Wikipedia and Measuring Relatedness between Words. Journal of Chinese Information Processing, 26(3), 109-114. [In Chinese]",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "417--424",
"other_ids": {
"DOI": [
"10.3115/1073083.1073153"
]
},
"num": null,
"urls": [],
"raw_text": "Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL'02), 417-424. doi: 10.3115/1073083.1073153",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Feature-based Sentiment Analysis Approach for Product Reviews",
"authors": [
{
"first": "H",
"middle": [
"S"
],
"last": "Wang",
"suffix": ""
},
{
"first": "L",
"middle": [
"Z"
],
"last": "Liu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Software",
"volume": "9",
"issue": "2",
"pages": "274--279",
"other_ids": {
"DOI": [
"10.4304/jsw.9.2.274-279"
]
},
"num": null,
"urls": [],
"raw_text": "Wang, H. S., Liu, L. Z., Song, W. & Lu, J. (2014). Feature-based Sentiment Analysis Approach for Product Reviews. Journal of Software, 9(2), 274-279. doi:10.4304/jsw.9.2.274-279",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Comparative network for product competition in feature-levels through sentiment analysis",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Management Sciences in China",
"volume": "19",
"issue": "9",
"pages": "109--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, W. & Wang, H. W. (2016). Comparative network for product competition in feature-levels through sentiment analysis. Journal of Management Sciences in China, 19(9), 109-126. [In Chinese]",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Opinion Object Extraction Based on the Syntax Analysis and Dependency Analysis",
"authors": [
{
"first": "W",
"middle": [
"P"
],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [
"C"
],
"last": "Meng",
"suffix": ""
}
],
"year": 2011,
"venue": "Computer System Applications",
"volume": "20",
"issue": "8",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, W. P. & Meng, C. C. (2011). Opinion Object Extraction Based on the Syntax Analysis and Dependency Analysis. Computer System Applications, 20(8), 52-57. [In Chinese]",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Research on Automatic Building of Word Correlation Net Based on Statistic",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "X",
"middle": [
"G"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2012,
"venue": "Computer & Digital Engineering",
"volume": "40",
"issue": "2",
"pages": "15--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Y., Zhou, X. G. & Sun, Y. (2012). Research on Automatic Building of Word Correlation Net Based on Statistic. Computer & Digital Engineering, 40(2), 15-18. [In Chinese]",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Understanding what concerns consumers: a semantic approach to product feature extraction from consumer reviews",
"authors": [
{
"first": "C",
"middle": [
"P"
],
"last": "Wei",
"suffix": ""
},
{
"first": "Y",
"middle": [
"M"
],
"last": "Chen",
"suffix": ""
},
{
"first": "C",
"middle": [
"S"
],
"last": "Yang",
"suffix": ""
},
{
"first": "C",
"middle": [
"C"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2010,
"venue": "Information Systems and e-Business Management",
"volume": "8",
"issue": "",
"pages": "149--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei, C. P., Chen, Y. M., Yang, C. S. & Yang, C. C. (2010). Understanding what concerns consumers: a semantic approach to product feature extraction from consumer reviews. Information Systems and e-Business Management, 8(2),149-167.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Hot item mining and summarization from multiple auction Web sites",
"authors": [
{
"first": "T.-L",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the fifth IEEE international conference on data mining (ICDM'05)",
"volume": "",
"issue": "",
"pages": "797--800",
"other_ids": {
"DOI": [
"10.1109/ICDM.2005.78"
]
},
"num": null,
"urls": [],
"raw_text": "Wong, T.-L. & Lam, W. (2005). Hot item mining and summarization from multiple auction Web sites. In Proceedings of the fifth IEEE international conference on data mining (ICDM'05), 797-800. doi: 10.1109/ICDM.2005.78",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Learning to extract and summarize hot item features from multiple auction Web sites",
"authors": [
{
"first": "T.-L",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2008,
"venue": "Knowledge and Information Systems",
"volume": "14",
"issue": "2",
"pages": "143--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wong, T.-L. & Lam, W. (2008). Learning to extract and summarize hot item features from multiple auction Web sites. Knowledge and Information Systems, 14(2), 143-160.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Study on Chinese Words Semantic Similarity Computation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2007,
"venue": "Computer Engineering",
"volume": "33",
"issue": "6",
"pages": "191--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xia, T. (2007). Study on Chinese Words Semantic Similarity Computation. Computer Engineering, 33(6), 191-194. [In Chinese]",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Sentiment Mining in WebFountain",
"authors": [
{
"first": "J",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Niblack",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 21st International Conference on Data Engineering (ICDE 2005)",
"volume": "",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.1109/ICDE.2005.132"
]
},
"num": null,
"urls": [],
"raw_text": "Yi, J. & Niblack, W. (2005). Sentiment Mining in WebFountain. In Proceedings of the 21st International Conference on Data Engineering (ICDE 2005), 1073-1083. doi: 10.1109/ICDE.2005.132",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Sentiment Analysis for Product Features in Chinese Reviews Based on Semantic Association",
"authors": [
{
"first": "C",
"middle": [
"X"
],
"last": "Yin",
"suffix": ""
},
{
"first": "Q",
"middle": [
"K"
],
"last": "Peng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of International Conference on Artificial Intelligence and Computational Intelligence",
"volume": "",
"issue": "",
"pages": "81--85",
"other_ids": {
"DOI": [
"10.1109/AICI.2009.326"
]
},
"num": null,
"urls": [],
"raw_text": "Yin, C. X. & Peng, Q. K. (2009). Sentiment Analysis for Product Features in Chinese Reviews Based on Semantic Association. In Proceedings of International Conference on Artificial Intelligence and Computational Intelligence 2009(AICI 09), 81-85. doi: 10.1109/AICI.2009.326",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Feature-level sentiment analysis for Chinese product reviews",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [
"G"
],
"last": "Yu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Y",
"middle": [
"L"
],
"last": "Shi",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 3rd International Conference on Computer Research and Development",
"volume": "",
"issue": "",
"pages": "135--140",
"other_ids": {
"DOI": [
"10.1109/ICCRD.2011.5764099"
]
},
"num": null,
"urls": [],
"raw_text": "Zhang, H. P., Yu, Z. G., Xu, M. & Shi, Y. L. (2011). Feature-level sentiment analysis for Chinese product reviews. In Proceedings of 3rd International Conference on Computer Research and Development(ICCRD 2011), 135-140. doi: 10.1109/ICCRD.2011.5764099",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Opinion Target and Polarity Extraction Based on Iterative Two-Stage CRF Model",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Chinese Information Processing",
"volume": "29",
"issue": "1",
"pages": "163--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, S. & Li, F. (2015). Opinion Target and Polarity Extraction Based on Iterative Two-Stage CRF Model. Journal of Chinese Information Processing, 29(1), 163-169. [In Chinese]",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Identify Sentiment-Objects from Chinese Sentences based on Cascaded Conditional Random Fields",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Zheng",
"suffix": ""
},
{
"first": "Z",
"middle": [
"C"
],
"last": "Lei",
"suffix": ""
},
{
"first": "X",
"middle": [
"W"
],
"last": "Liao",
"suffix": ""
},
{
"first": "G",
"middle": [
"L"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Chinese Information Processing",
"volume": "27",
"issue": "3",
"pages": "69--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng, M. J., Lei, Z. C., Liao, X. W. & Chen, G. L. (2013). Identify Sentiment-Objects from Chinese Sentences based on Cascaded Conditional Random Fields. Journal of Chinese Information Processing, 27(3), 69-77. [In Chinese]",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Collective opinion target extraction in Chinese microblogs",
"authors": [
{
"first": "X",
"middle": [
"J"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "X",
"middle": [
"J"
],
"last": "Wan",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Xiao",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1840--1850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, X. J., Wan, X. J. & Xiao, J. G. (2013). Collective opinion target extraction in Chinese microblogs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1840-1850.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Movie review mining and summarization",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "X",
"middle": [
"Y"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 15th ACM International Conference on Information and Knowledge Management(CIKM '06)",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {
"DOI": [
"10.1145/1183614.1183625"
]
},
"num": null,
"urls": [],
"raw_text": "Zhuang, L., Feng, J. & Zhu, X. Y. (2006). Movie review mining and summarization. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management(CIKM '06), 43-50. doi: 10.1145/1183614.1183625",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Research of Extracting Product Features from Chinese Online Reviews",
"authors": [
{
"first": "L",
"middle": [
"J"
],
"last": "Zu",
"suffix": ""
},
{
"first": "W",
"middle": [
"P"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer System Applications",
"volume": "23",
"issue": "5",
"pages": "196--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zu, L. J. & Wang, W. P. (2014). Research of Extracting Product Features from Chinese Online Reviews. Computer System Applications, 23(5), 196-201. [In Chinese]",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 1. Overview of product feature extraction techniques for Chinese consumer reviews"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Two-stages word segmentation process"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Annotating train set process"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Product feature classifications and its hierarchical structure"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Unit structure and its explanation. Combination forms of the elements and their implications. There are mainly three kinds of combination forms in this work, namely phrase + opinion word, POS + opinion word, and dependency relation + governing word. The combination \"phrase + opinion word\" describes whether the current phrase is an opinion word or not. The combination \"POS + opinion word\" describes the POS of the opinion word. And the combination \"dependency relation + governing word\" describes the dependency relation"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Training the models of CRF"
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Principle of finding the parent nodes for current product features. Figure 12. Pseudo-code of finding potential parent node"
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Correct rates of two word segmentation methods for total data. Figure 16. Precision, recall, and F-score of product feature extraction based on different rule templates"
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Precision, recall, and F-score under different widths of searching window. Figure 18. Workflow of identifying the implicit parent nodes for some product features"
},
"FIGREF9": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 19. Different coverage forms of searching window"
},
"FIGREF10": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Efficiency of searching potential parent nodes under different forms of coverage regions"
},
"TABREF1": {
"text": "",
"num": null,
"content": "<table><tr><td>No</td><td>POS</td><td>Phrases</td><td>Sentences</td></tr><tr><td>1</td><td>v</td><td>\u53ef\u4ee5(can), et al.</td><td>\"\u624b\u673a\u7684\u5206\u8fa8\u7387\u8fd8\u53ef\u4ee5\" (The resolution of this phone is good)</td></tr><tr><td>2</td><td>n</td><td>\u6218\u6597\u673a(fighter), et al.</td><td>\"\u624b\u673a\u4e2d\u7684\u6218\u6597\u673a\" (It is a fighter among phones)</td></tr><tr><td>\u2026</td><td>\u2026</td><td>\u2026</td><td>\u2026</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table><tr><td>Elements</td><td>Contents</td><td>Explains</td></tr><tr><td rowspan=\"3\">Word form elements</td><td>Phrase</td><td>Element denotes a phrase</td></tr><tr><td>POS</td><td>Element denotes the POS of the current phrase</td></tr><tr><td>Context (front or back)</td><td>Element denotes the phrases that are located in front of the current phrase or behind it</td></tr><tr><td rowspan=\"2\">Syntax elements</td><td>Dependency relation</td><td>Element denotes the dependency relation between the current phrase and its governing word</td></tr><tr><td>Governing word</td><td>Element denotes the governing word that belongs to the dependency relation</td></tr><tr><td>Opinion elements</td><td>Opinion words</td><td>Governing word is an opinion word or not</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "). Red rectangles describe the correct rates of product features that are extracted based on our two-stage optimizing word segmentation from respectively (taobao:98.39%,",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF7": {
"text": "",
"num": null,
"content": "<table><tr><td>No</td><td>Parameters Explains</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF8": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"2\">Precision(%)</td><td colspan=\"2\">Recall(%)</td><td colspan=\"2\">F-score(%)</td></tr><tr><td>Our methods</td><td>Jakob's work</td><td>Our methods</td><td>Jakob's work</td><td>Our methods</td><td>Jakob's work</td></tr><tr><td>93.80</td><td>86.47</td><td>90.86</td><td>78.70</td><td>92.31</td><td>79.63</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF10": {
"text": "",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}