{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:06:56.270632Z"
},
"title": "Evaluating Content Features and Classification Methods for Helpfulness Prediction of Online Reviews: Establishing a Benchmark for Portuguese",
"authors": [
{
"first": "Rog\u00e9rio",
"middle": [],
"last": "Figueredo De Sousa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of S\u00e3o Paulo Av. Trabalhador S\u00e3o Carlense",
"location": {
"addrLine": "400 -13.566-590 -S\u00e3o Carlos -SP",
"country": "Brazil"
}
},
"email": ""
},
{
"first": "Thiago",
"middle": [
"Alexandre",
"Salgueiro"
],
"last": "Pardo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of S\u00e3o Paulo Av. Trabalhador S\u00e3o Carlense",
"location": {
"addrLine": "400 -13.566-590 -S\u00e3o Carlos -SP",
"country": "Brazil"
}
},
"email": "taspardo@icmc.usp.br"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Over the years, the review helpfulness prediction task has been the subject of several works, but it remains a challenging issue in Natural Language Processing, as results vary widely depending on the domain, the adopted features, and the chosen classification strategy. This paper evaluates the impact of content features and classification methods for two different domains. In particular, we run our experiments for a low resource language, Portuguese, trying to establish a benchmark for this language. We show that simple features and classical classification methods are powerful for the task of helpfulness prediction, but are largely outperformed by a convolutional neural network-based solution.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Over the years, the review helpfulness prediction task has been the subject of several works, but it remains a challenging issue in Natural Language Processing, as results vary widely depending on the domain, the adopted features, and the chosen classification strategy. This paper evaluates the impact of content features and classification methods for two different domains. In particular, we run our experiments for a low resource language, Portuguese, trying to establish a benchmark for this language. We show that simple features and classical classification methods are powerful for the task of helpfulness prediction, but are largely outperformed by a convolutional neural network-based solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The concern to facilitate users' decision-making is common in most e-commerce platforms. The possibility for customers to publicly provide product reviews is one of the consequences of this concern. This functionality allows future customers to read reviews from other customers and make their buying decision. Despite being useful, the amount of generated data is very large, making it impossible for a human to read them all. Moreover, a large part of this data can be considered unwanted, containing poorly written texts, vague opinions and texts of dubious quality (Kim et al., 2006) , making it difficult to find relevant content.",
"cite_spans": [
{
"start": 569,
"end": 587,
"text": "(Kim et al., 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The helpfulness voting functionality that some e-commerce platforms adopt tries to address the above problem, ranking the reviews and showing the most helpful ones to the customers. However, manual voting has some drawbacks, as new helpful reviews take time to get enough votes and gain a visible position. The solution is to automatically predict the helpfulness of reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the usefulness of the task of helpfulness prediction and its practical implications, the literature has shown that it is a challenging open issue in Natural Language Processing (NLP). Performance results vary drastically across domains and there are several different features and classification methods in the area, as discussed in (Sousa and Pardo, 2021).",
"cite_spans": [
{
"start": 153,
"end": 186,
"text": "Natural Language Processing (NLP)",
"ref_id": null
},
{
"start": 338,
"end": 361,
"text": "(Sousa and Pardo, 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper aims to investigate such issues and to identify relevant features and methods for helpfulness prediction. We provide a qualitative and quantitative study of the impact of key content features in two different domains (apps and movies). By content features, we mean those that are related to the information that can be extracted directly from the review, such as the text and the \"stars\" given by the author. We also perform a comparative study of various classical and deep machine learning classifiers. We show that simple features and classical classification methods may be powerful for the task, but they are largely outperformed by a convolutional neural network-based approach, which reaches an f1-score of 0.90 for apps and 0.74 for movies. It is also relevant to mention that we run our experiments for a low resource language, Brazilian Portuguese, bringing relevant contributions to NLP for Portuguese and establishing a benchmark for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 reviews the main related work. In Section 3, we describe the experimental setting adopted in this work. Section 4 reports the achieved results and Section 5 presents some final remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main research line in review helpfulness prediction aims to predict the helpfulness score for a set of reviews. The helpfulness score is defined as shown in Equation 1 and can be used as the target for regression, binary classification, or ranking. Score regression aims to predict the helpfulness score h \u2208 [0, 1]. For binary classification, a threshold is applied to the helpfulness score (e.g., h > 0.5) and all reviews with a helpfulness score above the threshold are classified as helpful; otherwise, they are classified as not helpful. Review ranking seeks to order the reviews by their helpfulness according to a reference ranking. h = helpful votes / (helpful votes + unhelpful votes)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In order to understand the helpfulness of online customer reviews, researchers have performed several analyses. It is worth mentioning classical works like the ones of Kim et al. (2006) and Zhang and Varadarajan (2006) that introduce many types of features for helpfulness prediction. Kim et al. (2006) split the features into 5 categories, all considered to be content features: Structural, Lexical, Syntactic, Semantic and Meta-Data Features. They build a model for a regression task and a model for a ranking task using the SVM algorithm. Using a dataset of reviews on two products (MP3 players and Digital Cameras) extracted from Amazon.com, the best results are achieved with the combination of length, unigram and number of stars features. In a similar way, Zhang and Varadarajan (2006) propose three categories of features, also for a dataset extracted from Amazon.com. Their features include Lexical Similarity (Cosine similarity over TF-IDF vectors), Shallow Syntactic Features (Proper nouns, Modal verbs, Interjection, etc.) and Lexical Subjectivity Clues (Subjective adjectives, Subjective nouns, etc.). The authors model two regressors using SVR (Support Vector Regression) and SLR (Simple Linear Regression) techniques, obtaining the best results by combining all the features. Zeng et al. (2014), in addition to the features already used by Kim et al. (2006), propose the use of Trigrams, Comparison Expressions (\"Compare to\" or \"ADJ + er than\"), Degree of detail and Pros and Cons. Using an SVM classifier, the authors address the helpfulness prediction task as a three-class classification: Helpful positive reviews, Helpful negative reviews, and Unhelpful reviews. Furthermore, by running a series of experiments with one fewer feature each time, they found that the \"detail\" feature is the most important one, followed by length, number of stars and unigram.",
"cite_spans": [
{
"start": 167,
"end": 184,
"text": "Kim et al. (2006)",
"ref_id": "BIBREF12"
},
{
"start": 189,
"end": 217,
"text": "Zhang and Varadarajan (2006)",
"ref_id": "BIBREF32"
},
{
"start": 284,
"end": 301,
"text": "Kim et al. (2006)",
"ref_id": "BIBREF12"
},
{
"start": 761,
"end": 789,
"text": "Zhang and Varadarajan (2006)",
"ref_id": "BIBREF32"
},
{
"start": 1288,
"end": 1306,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF31"
},
{
"start": 1353,
"end": 1370,
"text": "Kim et al. (2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recently, researchers are using more robust methods for helpfulness prediction. It is the case of Xu et al. (2020), who use BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) along with the features of Star Rating and Product Type. With this combination, the authors model a Neural Network to predict the helpfulness score for reviews extracted from Amazon.com. Wang et al. (2020) also use BERT, but the authors add more features (Number of Words, Number of Sentences, Rating, etc.) than Xu et al. (2020) and compare the BERT-based approach to SVM and CNN models. The neural network-based classifiers achieved similar results to SVM using all features. Wu and Wang (2019) propose the use of syntactic features along with BERT sentence embeddings for helpfulness classification. The work compares some CNN models with BERT and performs an ablation study with all syntactic features. Their results showed high recall but very low precision values. In terms of f1-score, BERT achieved the best results and the main feature was Star Rating.",
"cite_spans": [
{
"start": 103,
"end": 119,
"text": "Xu et al. (2020)",
"ref_id": "BIBREF30"
},
{
"start": 194,
"end": 215,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 403,
"end": 421,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF28"
},
{
"start": 529,
"end": 545,
"text": "Xu et al. (2020)",
"ref_id": "BIBREF30"
},
{
"start": 701,
"end": 712,
"text": "Wang (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "All these works have in common the use of content features. The results of methods using handcrafted features were better than or very close to those of state-of-the-art classifiers (using BERT and CNN, for instance). In this setting, this paper aims at further exploring these issues, especially in the context of Portuguese, a low resource language. We present our experiment setting in what follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Experiment Setting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We adopt the dataset of Sousa et al. (2019), which includes reviews written in Portuguese for two very different domains: Movies and Apps. While movie reviews are usually largely subjective and passionate, app reviews tend to be more objective and focused on technical aspects. The dataset (named UTLCorpus) contains a total of 2,732,538 reviews (1,833,691 for movies and 898,847 for apps). Figure 1 presents two examples of reviews extracted from the corpus (from the apps domain). The first is considered not helpful, while the second is helpful. According to the creators of the corpus, the helpfulness status is based on the number of votes the reviews received (0 and 335 helpful votes, respectively) and the posting time (more than 5 days).",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 401,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3.1"
},
{
"text": "As the authors report, each review includes the review text, number of stars given by its author, the number of helpfulness votes, and publication time, among some other information. As shown in Table 1, the UTLCorpus is highly unbalanced. We address the imbalance problem using an undersampling approach, randomly removing samples of the majority class. Due to the amount of data, we decided not to carry out an oversampling strategy. Besides the class balancing information, the details of tokens and types in the table show us that the average size of movie reviews is much larger than that of app reviews. This difference can make the movies' reviews more challenging than the apps' reviews. Section 4 will further elucidate this assumption. For our experiments, which we report in the next section, we have randomly split our dataset into three parts: 70% for training, 20% for testing, and 10% for development.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3.1"
},
{
"text": "The literature on online review helpfulness explores several features. Researchers often split the features into two main groups: Content and Context features. The content features are related to the information that can be extracted directly from the review, such as the text and the \"stars\" given by the author. Context features are those extracted from outside the review, such as reviewer information (Ocampo Diaz and Ng, 2018; Almutairi et al., 2019; Arif et al., 2018). Most of these features are used in domains such as products, books, hotels and so on. We want to experiment with them in the apps and movies domains, the two domains available in the dataset adopted in this work, which are remarkably different (a difference that interests us in this paper).",
"cite_spans": [
{
"start": 415,
"end": 433,
"text": "Diaz and Ng, 2018;",
"ref_id": "BIBREF19"
},
{
"start": 434,
"end": 457,
"text": "Almutairi et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 458,
"end": 476,
"text": "Arif et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "We selected and adapted several content features to the Portuguese language. This process involved finding resources and tools that could support the use of the features in the target language. Table 2 summarizes the implemented features.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "We explored the features in machine learning classification solutions. We performed a selection of the best features employing three different strategies. The first method of feature selection is the classical Information Gain (Kozachenko and Leonenko, 1987), which produces values from 0 (no information) to 1 (maximum information) for each feature. The features that contribute more information are selected for the experiments. The second well-known method for feature selection uses the Random Forest classifier (Breiman, 2001), which is a meta estimator that uses several tree-based classifiers on various subsamples of the dataset to classify the target. Due to its use of decision trees, it can indicate the importance of the features used in the classification process. The third method for feature selection consists of using the correlation values of the features with the helpfulness classes. The previous work of Sousa and Pardo (2021) presents studies of correlation between the feature values and the helpfulness status using the Pearson and Spearman correlation coefficients. Using these correlations, we order the absolute values and select the features with the highest values.",
"cite_spans": [
{
"start": 243,
"end": 258,
"text": "Leonenko, 1987)",
"ref_id": "BIBREF13"
},
{
"start": 525,
"end": 540,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "In addition to the previous features, we also test Term-Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) techniques to generate specific text features and compare the results of the handcrafted features with these two well-known baseline features. It is important to mention that all feature values were normalized for the experimentation process. Table 3 shows an overview of all the features used in this paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 375,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "We comment on the machine learning classifiers and report the achieved results in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "We explored the following classical classification strategies in this work: Naive Bayes (NB), Support Vector Machines (SVM), Decision Tree (DT), Random Forest (RF), Neural Network Multilayer Perceptron (NN) and a Dummy Classifier. More sophisticated (deep) strategies that we tested are a BERT-based classifier and a Convolutional Neural Network (CNN). [Table 2 fragment] Average sentence size in terms of words (Liu et al., 2007; Lu et al., 2010), using spaCy with the Portuguese language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "[Table 2 fragment] Number of Sentences (Num-S): total of sentences in the review (Liu et al., 2007; Lu et al., 2010). Number of Words (Num-W): total of words in the review (Kim et al., 2006). Number of words not found in the Wiktionary and Unitex-PB lexicons (Muniz, 2004).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "[Table 2 fragment] Dominant Terms (Dom-Terms): presence of important terms in reviews, considering their specificity for the domain (Tsur and Rappoport, 2009); we use the NILC Corpus (Nunes et al., 1996) to calculate the frequencies of words that do not belong to the domains. Product Aspects (Prod-Feat): presence of product aspects in the reviews (Kim et al., 2006; Hong et al., 2012; Liu et al., 2007); we manually extract the aspects from texts in the corpus development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "[Table 2 fragment] Number of words that express sentiments (Kim et al., 2006), according to the categories of the LIWC dictionary (Pennebaker et al.). Difference between the number of stars in a review and the average star rating for the movie/app (Hong et al., 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Words (SENT)",
"sec_num": null
},
{
"text": "As explained before, we performed feature selection using the techniques of Information Gain and Random Forest. Figures 2a and 2b show the results of feature ranking for the apps domain, while Figure 2c and 2d show the results for movies domain. We performed the classification for the top 8 features of each method of feature selection. As an alternative, we also selected the most correlated features to helpfulness status using the Pearson and Spearman values.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 129,
"text": "Figures 2a and 2b",
"ref_id": "FIGREF3"
},
{
"start": 193,
"end": 202,
"text": "Figure 2c",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "4.1"
},
{
"text": "We divided the process of training classifiers into distinct phases. In the first phase, we trained the classifiers considering the feature selection methods against the TF and TF-IDF techniques. This phase shows us the best sets of features and the best classifiers for both types of features: handcrafted and TF/TF-IDF features. In the second phase, we merged the handcrafted features with the TF/TF-IDF ones. This feature combination process",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Results",
"sec_num": "4.2"
},
{
"text": "Handcrafted Content Features (29): the content features adapted from previous works in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "Information Gain (8): the handcrafted content features selected by the Information Gain technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "Random Forest (8): the handcrafted content features selected by the Random Forest classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "Correlation Coefficients (8): the handcrafted content features selected by the intersection of correlation coefficients. Baseline TF (500): the features selected by the TF method. Baseline TF-IDF (500): the features selected by the TF-IDF method. The feature combination consists of concatenating the vectors of each text (i.e., TF or TF-IDF vectors) with the vectors of each group of features, both with the same weight. Finally, in the third phase, we decided to use the results of the second phase to model voting-based ensemble classifiers. The classifiers with good results and fewer errors in common were selected to compose the ensembles. The chosen classifiers for the ensembles were Decision Trees and Neural Networks for apps, and Decision Trees and Random Forest for movies. Ensembles with three classifiers obtained similar results (never higher) to those with two classifiers, so we only report the results for ensembles of two classifiers 2 . Finally, in a fourth phase, we used a BERT-based classifier over a pre-trained Portuguese model (Souza et al., 2020) for both domains and a CNN using the GloVe 3 (Hartmann et al., 2017; Pennington et al., 2014) embeddings as input features.",
"cite_spans": [
{
"start": 1009,
"end": 1029,
"text": "(Souza et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 1075,
"end": 1098,
"text": "(Hartmann et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 1099,
"end": 1123,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "The results referring to the first phase are shown in Figures 3a and 3b, where we report F1 scores (the best ones are written in the chart). Notice that the charts show the F1-measure, i.e., the average F1 score over the two classes. One may see that, for apps, the best results were 72%, achieved with simple TF features with SVM and Random Forest; for movies, the best results were 63% for TF-IDF, with the same classifiers. Overall, for both domains, there were no significant performance differences between the two classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 71,
"text": "Figures 3a and 3b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "When we merge the two big groups of features (handcrafted and TF/IDF features), the results are better, as one may see in Figures 3c and 3d. Considering the best situation, apps classification achieved 78% with correlation-based feature selection and TF for SVM (results 8.3% better than before); movies achieved 66% with all the features and TF-IDF, also for SVM (4.7% better). Again, SVM proved to be a distinctive technique, with stable classification performance for the two classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 139,
"text": "Figures 3c and 3d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "The results for our ensemble, the BERT-based 4 and the CNN classifiers are shown in Figure 3e . For better understanding, the X-axis in Figure 3e mentions the use of the handcrafted features along with BERT (BERT-PT+Hand). For this strategy, we appended all handcrafted features to the CLS vector (768 + 29 dimensions), and then the method proceeds normally, using the resulting vector in the next layer to perform the classification. Similarly, the BERT-PT+CNN strategy merges the BERT architecture with the CNN presented before. We used the four last layers of BERT as features for the CNN. The fine-tuning of the BERT model was performed at the same time as the CNN training. Figure 4 shows the architecture of the CNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 93,
"text": "Figure 3e",
"ref_id": null
},
{
"start": 136,
"end": 145,
"text": "Figure 3e",
"ref_id": null
},
{
"start": 696,
"end": 704,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "Despite BERT being a new standard technique in the NLP area, it achieved results very similar to those presented by the ensemble. In the apps domain, BERT shows a slight drop in performance. Further investigation is needed to find out why the results are so low for this case. Possible explanations include the more \"passionate\" and subjective nature of the movie reviews (while app reviews tend to discuss more \"technical\" aspects). Overall, the ensemble classification could not outperform the previous experiments, while the CNN model outperformed all classifiers. Considering all the experiments, we have some valuable lessons learned. We may see that simple textual features such as TF and TF-IDF may be powerful features for helpfulness prediction. However, merging handcrafted content features with TF-IDF features allows us to achieve better results. Another interesting result is that traditional machine learning techniques may rival more sophisticated strategies such as ensemble or BERT-based classifiers. SVM, in particular, proved to be an important technique among the classical methods. Nevertheless, all of them were outperformed by the CNN approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "Finally, regarding the feature selection processes, the correlation-based one was slightly better than the Information Gain and Random Forest-based ones, but the differences appear to be insignificant. Among the best selected features, although there is some variation depending on the correlation measure used, it is possible to highlight some of them: for the apps domain, we highlight average sentence length, star rating and part of speech tags; for the movies domain, average sentence length, SMOG readability score, sentiment words and dominant terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature category (number of features) Description",
"sec_num": null
},
{
"text": "This paper synthesized a series of experiments on predicting review helpfulness, presenting some relevant lessons learned and contributions (in particular, for Brazilian Portuguese, which is considered a low resource language). However, a lot remains to be investigated. We highlight the two issues that concern us the most at this time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "Firstly, the different performances for different domains (across different classification methods) continue to intrigue us. This is a known behavior in the sentiment analysis area, and we corroborate it by testing new domains in this paper. We wonder whether new methods or features should be tested, maybe focusing on those that are more domain independent, or whether we should \"transform\" our data, \"eliminating\" domain-specific traits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "[Figure 3 axis labels: ALL+TFIDF, IG+TFIDF, RF+TFIDF, Corr+TFIDF, ALL+TF, IG+TF, RF+TF, Corr+TF (features)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "The other issue refers to the helpfulness prediction task itself. Although the literature (including us) has exhaustively worked on this task, it is a highly subjective task that (indirectly) incorporates several other tasks, such as subjectivity classification (more \"personal\" reviews look to be more interesting), polarity classification (more \"radical\" opinions call more attention), aspect identification (as reviews that directly cite some aspects look to be more useful), and detection of user information need (ultimately, a review is helpful only if it attends the information need of the user). Future efforts might explore such supporting tasks for helpfulness prediction. Figure 4: CNN's architecture. We use 300-dimensional GloVe embeddings as input features. We employ three parallel convolutional layers and set the output channel size of each to 100. The other parameters are: epochs = 5, optimizer = Adam, batch size = 32. Fully connected layers: input 1 = 300, output 1 = 32, and dropout = 0.7.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 262,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "The complete code for our features and models is available online at https://github.com/RogerFig/deep-helpfulness. The interested reader may also find more information at the POeTiSA project web portal (https://sites.google.com/icmc.usp.br/poetisa).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "We adopted a soft classification, in which the classes are weighted by their probabilities given by the classifiers; if it happens that the two classes end up with the same score, we opt for the not helpful class. 3 http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
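The soft-classification rule in the footnote above can be sketched in a few lines: each classifier contributes its class probabilities as weights, the per-class scores are summed, and a tie goes to the "not helpful" class. The list-of-dicts interface is an illustrative assumption.

```python
def soft_classify(class_probs):
    """Soft classification sketch: sum each classifier's predicted
    probabilities per class; ties are resolved in favour of the
    'not helpful' class, as in the paper's footnote."""
    scores = {"helpful": 0.0, "not helpful": 0.0}
    for dist in class_probs:  # one {label: probability} dict per classifier
        for label, p in dist.items():
            scores[label] += p
    # Equal scores fall through to the "not helpful" class
    if scores["helpful"] > scores["not helpful"]:
        return "helpful"
    return "not helpful"
```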
{
"text": "This model was fine-tuned, and the pre-trained parameters were not frozen during fine-tuning. The reviews were tokenized using the default tokenizer of the BERTimbau model. We applied a single-layer feed-forward network to the [CLS] output vector (768 dimensions) to classify the instances. The main hyperparameters are as follows: epochs = 2, learning rate = 4e-5, optimizer = AdamW, train batch size = 8, max sequence length = 128. These hyperparameters were empirically chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
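The BERTimbau fine-tuning setup above can be captured as a small configuration sketch. Only the values come from the text; the field names and the Hugging Face model identifier are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BertimbauFineTuneConfig:
    """Hyperparameters reported for the BERTimbau fine-tuning run.
    The model identifier is an assumption for illustration."""
    model_name: str = "neuralmind/bert-base-portuguese-cased"
    epochs: int = 2
    learning_rate: float = 4e-5
    optimizer: str = "AdamW"
    train_batch_size: int = 8
    max_seq_length: int = 128
    cls_dim: int = 768  # [CLS] vector size fed to the single FF layer

cfg = BertimbauFineTuneConfig()
```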
],
"back_matter": [
{
"text": "The authors are grateful to the Center for Artificial Intelligence (C4AI) of the University of S\u00e3o Paulo, sponsored by IBM and FAPESP (grant #2019/07665-4), and to Instituto Federal do Piau\u00ed (IFPI).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Review helpfulness prediction: Survey",
"authors": [
{
"first": "Yasamyian",
"middle": [],
"last": "Almutairi",
"suffix": ""
},
{
"first": "Manal",
"middle": [],
"last": "Abdullah",
"suffix": ""
},
{
"first": "Dimah",
"middle": [],
"last": "Alahmadi",
"suffix": ""
}
],
"year": 2019,
"venue": "Periodicals of Engineering and Natural Sciences",
"volume": "7",
"issue": "1",
"pages": "420--432",
"other_ids": {
"DOI": [
"10.21533/pen.v7i1.420"
]
},
"num": null,
"urls": [],
"raw_text": "Yasamyian Almutairi, Manal Abdullah, and Dimah Alahmadi. 2019. Review helpfulness prediction: Sur- vey. Periodicals of Engineering and Natural Sci- ences, 7(1):420-432.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analyzing the adequacy of readability indicators to a non-english language",
"authors": [
{
"first": "H\u00e9lder",
"middle": [],
"last": "Antunes",
"suffix": ""
},
{
"first": "Carla",
"middle": [
"Teixeira"
],
"last": "Lopes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "149--155",
"other_ids": {
"DOI": [
"10.1007/978-3-030-28577-7_10"
]
},
"num": null,
"urls": [],
"raw_text": "H\u00e9lder Antunes and Carla Teixeira Lopes. 2019. An- alyzing the adequacy of readability indicators to a non-english language. In Proceedings of the Inter- national Conference of the Cross-Language Evalua- tion Forum for European Languages, pages 149-155. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey of customer review helpfulness prediction techniques",
"authors": [
{
"first": "Madeha",
"middle": [],
"last": "Arif",
"suffix": ""
},
{
"first": "Usman",
"middle": [],
"last": "Qamar",
"suffix": ""
},
{
"first": "Farhan Hassan",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "Saba",
"middle": [],
"last": "Bashir",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of SAI Intelligent Systems Conference",
"volume": "",
"issue": "",
"pages": "215--226",
"other_ids": {
"DOI": [
"10.1007/978-3-030-01054-6_15"
]
},
"num": null,
"urls": [],
"raw_text": "Madeha Arif, Usman Qamar, Farhan Hassan Khan, and Saba Bashir. 2018. A survey of customer review helpfulness prediction techniques. In Proceedings of SAI Intelligent Systems Conference, pages 215-226. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An evaluation of the brazilian portuguese liwc dictionary for sentiment analysis",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Balage Filho",
"suffix": ""
},
{
"first": "Thiago Alexandre Salgueiro",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Alu\u00edsio",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology",
"volume": "",
"issue": "",
"pages": "215--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Balage Filho, Thiago Alexandre Salgueiro Pardo, and Sandra Alu\u00edsio. 2013. An evaluation of the brazil- ian portuguese liwc dictionary for sentiment analysis. In Proceedings of the 9th Brazilian Symposium in In- formation and Human Language Technology, pages 215-219.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Random forests. Machine learning",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Machine learning, 45(1):5-32.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The principles of readability",
"authors": [
{
"first": "William",
"middle": [],
"last": "Dubay",
"suffix": ""
}
],
"year": 2004,
"venue": "Impact Information",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Dubay. 2004. The principles of readability. Impact Information, Costa Mesa, CA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Macmorpho revisited: Towards robust part-of-speech tagging",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Rocha Fonseca",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [
"Lu\u00eds",
"G"
],
"last": "Rosa",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 9th Brazilian symposium in information and human language technology",
"volume": "",
"issue": "",
"pages": "98--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erick Rocha Fonseca and Jo\u00e3o Lu\u00eds G Rosa. 2013. Mac- morpho revisited: Towards robust part-of-speech tag- ging. In Proceedings of the 9th Brazilian sympo- sium in information and human language technology, pages 98-107.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics",
"authors": [
{
"first": "Anindya",
"middle": [],
"last": "Ghose",
"suffix": ""
},
{
"first": "Panagiotis",
"middle": [
"G"
],
"last": "Ipeirotis",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "23",
"issue": "10",
"pages": "1498--1512",
"other_ids": {
"DOI": [
"10.1109/TKDE.2010.188"
]
},
"num": null,
"urls": [],
"raw_text": "Anindya Ghose and Panagiotis G Ipeirotis. 2011. Esti- mating the helpfulness and economic impact of prod- uct reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and Data Engi- neering, 23(10):1498-1512.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Portuguese word embeddings: Evaluating on word analogies and natural language tasks",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Shulby",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "J\u00e9ssica",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Alu\u00edsio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th Brazilian Symposium in Information and Human Language Technology",
"volume": "",
"issue": "",
"pages": "122--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Hartmann, Erick Fonseca, Christopher Shulby, Marcos Treviso, J\u00e9ssica Silva, and Sandra Alu\u00edsio. 2017. Portuguese word embeddings: Evaluating on word analogies and natural language tasks. In Proceedings of the 11th Brazilian Symposium in In- formation and Human Language Technology, pages 122-131, Uberl\u00e2ndia, Brazil. Sociedade Brasileira de Computa\u00e7\u00e3o.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What reviews are satisfactory: Novel features for automatic helpfulness voting",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "495--504",
"other_ids": {
"DOI": [
"10.1145/2348283.2348351"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Hong, Jun Lu, Jianmin Yao, Qiaoming Zhu, and Guodong Zhou. 2012. What reviews are satisfactory: Novel features for automatic helpfulness voting. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, pages 495-504, New York, NY, USA. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A study of factors that contribute to online review helpfulness",
"authors": [
{
"first": "Albert",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Kuanchin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "David",
"middle": [
"C"
],
"last": "Yen",
"suffix": ""
},
{
"first": "Trang",
"middle": [
"P"
],
"last": "Tran",
"suffix": ""
}
],
"year": 2015,
"venue": "Computers in Human Behavior",
"volume": "48",
"issue": "",
"pages": "17--27",
"other_ids": {
"DOI": [
"10.1016/j.chb.2015.01.010"
]
},
"num": null,
"urls": [],
"raw_text": "Albert H Huang, Kuanchin Chen, David C Yen, and Trang P Tran. 2015. A study of factors that contribute to online review helpfulness. Computers in Human Behavior, 48:17-27.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatically assessing review helpfulness",
"authors": [
{
"first": "Soo-Min",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim, Patrick Pantel, Tim Chklovski, and Marco Pennacchiotti. 2006. Automatically assess- ing review helpfulness. In Proceedings of the 2006 Conference on Empirical Methods in Natural Lan- guage Processing, pages 423-430, Sydney, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sample estimate of the entropy of a random vector",
"authors": [
{
"first": "L",
"middle": [
"F"
],
"last": "Kozachenko",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [
"N"
],
"last": "Leonenko",
"suffix": ""
}
],
"year": 1987,
"venue": "Problemy Peredachi Informatsii",
"volume": "23",
"issue": "2",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LF Kozachenko and Nikolai N Leonenko. 1987. Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9-16.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Low-quality product review detection in opinion summarization",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yunbo",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yalou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "334--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingjing Liu, Yunbo Cao, Chin-Yew Lin, Yalou Huang, and Ming Zhou. 2007. Low-quality product review detection in opinion summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 334-342, Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploiting social context for review quality prediction",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Tsaparas",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Ntoulas",
"suffix": ""
},
{
"first": "Livia",
"middle": [],
"last": "Polanyi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "691--700",
"other_ids": {
"DOI": [
"10.1145/1772690.1772761"
]
},
"num": null,
"urls": [],
"raw_text": "Yue Lu, Panayiotis Tsaparas, Alexandros Ntoulas, and Livia Polanyi. 2010. Exploiting social context for review quality prediction. In Proceedings of the 19th international conference on World wide web, pages 691-700.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Research note: What makes a helpful online review? a study of customer reviews on amazon",
"authors": [
{
"first": "Susan",
"middle": [
"M"
],
"last": "Mudambi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Schuff",
"suffix": ""
}
],
"year": 2010,
"venue": "MIS quarterly",
"volume": "",
"issue": "",
"pages": "185--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan M Mudambi and David Schuff. 2010. Research note: What makes a helpful online review? a study of customer reviews on amazon. com. MIS quarterly, pages 185-200.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A constru\u00e7\u00e3o de recursos ling\u00fc\u00edstico-computacionais para o portugu\u00eas do Brasil: o projeto Unitex-PB",
"authors": [
{
"first": "Marcelo Caetano Martins",
"middle": [],
"last": "Muniz",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.11606/D.55.2020.tde-19022020-151305"
]
},
"num": null,
"urls": [],
"raw_text": "Marcelo Caetano Martins Muniz. 2004. A constru\u00e7\u00e3o de recursos ling\u00fc\u00edstico-computacionais para o por- tugu\u00eas do Brasil: o projeto Unitex-PB. Ph.D. thesis, Universidade de S\u00e3o Paulo.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A constru\u00e7\u00e3o de um l\u00e9xico para o portugu\u00eas do brasil: li\u00e7\u00f5es aprendidas e perspectivas",
"authors": [
{
"first": "Maria",
"middle": [
"das Gra\u00e7as",
"Volpe"
],
"last": "Nunes",
"suffix": ""
},
{
"first": "Fabiano",
"middle": [
"M",
"Costa"
],
"last": "Vieira",
"suffix": ""
},
{
"first": "Cl\u00e1udia",
"middle": [],
"last": "Zavaglia",
"suffix": ""
},
{
"first": "C\u00e1ssia",
"middle": [
"R",
"C"
],
"last": "Sossolote",
"suffix": ""
},
{
"first": "Jos\u00e9lia",
"middle": [],
"last": "Hernandez",
"suffix": ""
}
],
"year": 1996,
"venue": "Anais do II Encontro para o Processamento de Portugu\u00eas Escrito e Falado",
"volume": "",
"issue": "",
"pages": "61--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria das Gra\u00e7as Volpe Nunes, Fabiano M Costa Vieira, Cl\u00e1udia Zavaglia, C\u00e1ssia RC Sossolote, and Jos\u00e9lia Hernandez. 1996. A constru\u00e7\u00e3o de um l\u00e9xico para o portugu\u00eas do brasil: li\u00e7\u00f5es aprendidas e perspectivas. In Anais do II Encontro para o Processamento de Portugu\u00eas Escrito e Falado, pages 61-70.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Modeling and prediction of online product review helpfulness: A survey",
"authors": [
{
"first": "Gerardo",
"middle": [
"Ocampo"
],
"last": "Diaz",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "698--708",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1065"
]
},
"num": null,
"urls": [],
"raw_text": "Gerardo Ocampo Diaz and Vincent Ng. 2018. Modeling and prediction of online product review helpfulness: A survey. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 698-708, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Linguistic inquiry and word count: Liwc",
"authors": [
{
"first": "James",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "Martha",
"middle": [
"E"
],
"last": "Francis",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"J"
],
"last": "Booth",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Building a sentiment lexicon for social judgement mining",
"authors": [
{
"first": "M\u00e1rio",
"middle": [
"J"
],
"last": "Silva",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Sarmento",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the International Conference on Computational Processing of the Portuguese Language",
"volume": "",
"issue": "",
"pages": "218--228",
"other_ids": {
"DOI": [
"10.1007/978-3-642-28885-2_25"
]
},
"num": null,
"urls": [],
"raw_text": "M\u00e1rio J Silva, Paula Carvalho, and Lu\u00eds Sarmento. 2012. Building a sentiment lexicon for social judgement mining. In Proceedings of the International Confer- ence on Computational Processing of the Portuguese Language, pages 218-228. Springer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A bunch of helpfulness and sentiment corpora in brazilian portuguese",
"authors": [
{
"first": "Rog\u00e9rio",
"middle": [
"Figueredo"
],
"last": "Sousa",
"suffix": ""
},
{
"first": "Henrico",
"middle": [
"Bertini"
],
"last": "Brum",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"das Gra\u00e7as",
"Volpe"
],
"last": "Nunes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th Brazilian Symposium in Information and Human Language Technology",
"volume": "",
"issue": "",
"pages": "209--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rog\u00e9rio Figueredo Sousa, Henrico Bertini Brum, and Maria das Gra\u00e7as Volpe Nunes. 2019. A bunch of helpfulness and sentiment corpora in brazilian por- tuguese. In Proceedings of the 12th Brazilian Sympo- sium in Information and Human Language Technol- ogy, pages 209-218. Sociedade Brasileira de Com- puta\u00e7\u00e3o.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The challenges of modeling and predicting online review helpfulness",
"authors": [
{
"first": "Rog\u00e9rio",
"middle": [
"Figueredo"
],
"last": "Sousa",
"suffix": ""
},
{
"first": "Thiago",
"middle": [
"Alexandre",
"Salgueiro"
],
"last": "Pardo",
"suffix": ""
}
],
"year": 2021,
"venue": "Anais do XVIII Encontro Nacional de Intelig\u00eancia Artificial e Computacional",
"volume": "",
"issue": "",
"pages": "727--738",
"other_ids": {
"DOI": [
"10.5753/eniac.2021.18298"
]
},
"num": null,
"urls": [],
"raw_text": "Rog\u00e9rio Figueredo Sousa and Thiago Alexan- dre Salgueiro Pardo. 2021. The challenges of modeling and predicting online review helpfulness. In Anais do XVIII Encontro Nacional de Intelig\u00ean- cia Artificial e Computacional, pages 727-738. Sociedade Brasileira de Computa\u00e7\u00e3o.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "BERTimbau: pretrained BERT models for Brazilian Portuguese",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Lotufo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 9th Brazilian Conference on Intelligent Systems (BRACIS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-030-61377-8_28"
]
},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2020. BERTimbau: pretrained BERT models for Brazilian Portuguese. In Proceedings of the 9th Brazilian Conference on Intelligent Systems (BRACIS). Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Revrank: A fully unsupervised algorithm for selecting the most helpful book reviews",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "3",
"issue": "",
"pages": "154--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Tsur and Ari Rappoport. 2009. Revrank: A fully unsupervised algorithm for selecting the most help- ful book reviews. In Proceedings of the Interna- tional AAAI Conference on Web and Social Media, volume 3, pages 154-161.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Negative confidence-aware weakly supervised binary classification for effective review helpfulness classification",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Iadh",
"middle": [],
"last": "Ounis",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Macdonald",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 29th ACM International Conference on Information & Knowledge Management",
"volume": "",
"issue": "",
"pages": "1565--1574",
"other_ids": {
"DOI": [
"10.1145/3340531.3411978"
]
},
"num": null,
"urls": [],
"raw_text": "Xi Wang, Iadh Ounis, and Craig Macdonald. 2020. Negative confidence-aware weakly supervised binary classification for effective review helpfulness classifi- cation. In Proceedings of the 29th ACM International Conference on Information & Knowledge Manage- ment, pages 1565-1574.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Integrating neural and syntactic features on the helpfulness analysis of the online customer reviews",
"authors": [
{
"first": "Shih-Hung",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jun-Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining",
"volume": "",
"issue": "",
"pages": "1013--1017",
"other_ids": {
"DOI": [
"10.1145/3341161.3344825"
]
},
"num": null,
"urls": [],
"raw_text": "Shih-Hung Wu and Jun-Wei Wang. 2019. Integrating neural and syntactic features on the helpfulness anal- ysis of the online customer reviews. In IEEE/ACM International Conference on Advances in Social Net- works Analysis and Mining, pages 1013-1017. IEEE.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bert feature based model for predicting the helpfulness scores of online customers reviews",
"authors": [
{
"first": "Shuzhe",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Salvador",
"middle": [
"E"
],
"last": "Barbosa",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2020,
"venue": "Future of Information and Communication Conference",
"volume": "",
"issue": "",
"pages": "270--281",
"other_ids": {
"DOI": [
"10.1007/978-3-030-39442-4_21"
]
},
"num": null,
"urls": [],
"raw_text": "Shuzhe Xu, Salvador E Barbosa, and Don Hong. 2020. Bert feature based model for predicting the helpful- ness scores of online customers reviews. In Future of Information and Communication Conference, pages 270-281. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Modeling the helpful opinion mining of online consumer reviews as a classification problem",
"authors": [
{
"first": "Yi-Ching",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Tsun",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Shih-Hung",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Liang-Pu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gwo-Dong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Computational Linguistics & Chinese Language Processing",
"volume": "19",
"issue": "2",
"pages": "17--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi-Ching Zeng, Tsun Ku, Shih-Hung Wu, Liang-Pu Chen, and Gwo-Dong Chen. 2014. Modeling the helpful opinion mining of online consumer reviews as a classification problem. International Journal of Computational Linguistics & Chinese Language Processing, 19(2):17-32.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Utility scoring of product reviews",
"authors": [
{
"first": "Zhu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Varadarajan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 15th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "51--57",
"other_ids": {
"DOI": [
"10.1145/1183614.1183626"
]
},
"num": null,
"urls": [],
"raw_text": "Zhu Zhang and Balaji Varadarajan. 2006. Utility scor- ing of product reviews. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, pages 51-57, New York, NY, USA. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Examples of reviews",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "General sentiment about the movie/app and the sentiment expressed by the author of a review (Hong et al., 2012), using the Sentilex sentiment lexicon (Silva et al., 2012); Subjectivity (SUB): the probability of a review being subjective (Ghose and Ipeirotis, 2011); Morpho-Syntactic Tokens (SYN): number of tokens with the following Part-of-Speech tags: Noun (N), Verb (V), Adverb (ADV) and Adjective (ADJ), also including counts for open-class words (Open) (Kim et al., 2006), using the NLPNet POS-Tagger (Fonseca and Rosa, 2013); Star Deviation (Star-Dev)",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Results of feature importance",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "of ensemble classification and deep models with their combinations",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "Figure 3: Classification Results",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"content": "<table/>",
"text": "UTLCorpus numbers. The helpfulness label refers to the percentage of reviews labeled as helpful.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table/>",
"text": "List of content features Network (CNN).",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table/>",
"text": "Overview of the features",
"num": null,
"type_str": "table",
"html": null
}
}
}
}