{
"paper_id": "N19-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:56:08.248149Z"
},
"title": "Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Luyao",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "lyhuang18@fudan.edu.cn"
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "xpqiu@fudan.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-ofthe-art results on SentiHood and SemEval-2014 Task 4 datasets. The source codes are available at https://github.com/ HSLCY/ABSA-BERT-pair.",
"pdf_parse": {
"paper_id": "N19-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-ofthe-art results on SentiHood and SemEval-2014 Task 4 datasets. The source codes are available at https://github.com/ HSLCY/ABSA-BERT-pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment analysis (SA) is an important task in natural language processing. It solves the computational processing of opinions, emotions, and subjectivity -sentiment is collected, analyzed and summarized. It has received much attention not only in academia but also in industry, providing real-time feedback through online reviews on websites such as Amazon, which can take advantage of customers' opinions on specific products or services. The underlying assumption of this task is that the entire text has an overall polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the users' comments may contain different aspects, such as: \"This book is a hardcover version, but the price is a bit high.\" The polarity in 'appearance' is positive, and the polarity regarding 'price' is negative. Aspect-based sentiment analysis (ABSA) (Jo and Oh, 2011; Pontiki et al., 2014 Pontiki et al., , 2015 Pontiki et al., , 2016 aims to identify fine-grained polarity towards a specific aspect. This task allows users to evaluate aggregated sentiments for each aspect of a given product or service and gain a more granular understanding of their quality.",
"cite_spans": [
{
"start": 263,
"end": 280,
"text": "(Jo and Oh, 2011;",
"ref_id": "BIBREF4"
},
{
"start": 281,
"end": 301,
"text": "Pontiki et al., 2014",
"ref_id": "BIBREF12"
},
{
"start": 302,
"end": 324,
"text": "Pontiki et al., , 2015",
"ref_id": "BIBREF11"
},
{
"start": 325,
"end": 347,
"text": "Pontiki et al., , 2016",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both SA and ABSA are sentence-level or document-level tasks, but one comment may refer to more than one object, and sentence-level tasks cannot handle sentences with multiple targets. Therefore, Saeidi et al. (2016) introduce the task of targeted aspect-based sentiment analysis (TABSA), which aims to identify fine-grained opinion polarity towards a specific aspect associated with a given target. The task can be divided into two steps: (1) the first step is to determine the aspects associated with each target; (2) the second step is to resolve the polarity of aspects to a given target.",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "Saeidi et al. (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The earliest work on (T)ABSA relied heavily on feature engineering (Wagner et al., 2014; Kiritchenko et al., 2014) , and subsequent neural network-based methods (Nguyen and Shirai, 2015; Wang et al., 2016; Tang et al., 2015 Tang et al., , 2016 Wang et al., 2017) achieved higher accuracy. Recently, Ma et al. (2018) incorporate useful commonsense knowledge into a deep neural network to further enhance the result of the model. Liu et al. (2018) optimize the memory network and apply it to their model to better capture linguistic structure.",
"cite_spans": [
{
"start": 67,
"end": 88,
"text": "(Wagner et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 89,
"end": 114,
"text": "Kiritchenko et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 161,
"end": 186,
"text": "(Nguyen and Shirai, 2015;",
"ref_id": null
},
{
"start": 187,
"end": 205,
"text": "Wang et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 206,
"end": 223,
"text": "Tang et al., 2015",
"ref_id": "BIBREF15"
},
{
"start": 224,
"end": 243,
"text": "Tang et al., , 2016",
"ref_id": "BIBREF16"
},
{
"start": 244,
"end": 262,
"text": "Wang et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 299,
"end": 315,
"text": "Ma et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 428,
"end": 445,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More recently, the pre-trained language models, such as ELMo (Peters et al., 2018) , OpenAI GPT (Radford et al., 2018) , and BERT (Devlin et al., 2018) , have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in QA and NLI. However, there is not much improvement in (T)ABSA task with the direct use of the pretrained BERT model (see Table 3 ). We think this is due to the inappropriate use of the pre-trained BERT model.",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 96,
"end": 118,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 130,
"end": 151,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 405,
"end": 412,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the input representation of BERT can represent both a single text sentence and a pair of text sentences, we can convert (T)ABSA into a sentence-pair classification task and fine-tune the pre-trained BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate several methods of constructing an auxiliary sentence and transform (T)ABSA into a sentence-pair classification task. We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on (T)ABSA task. We also conduct a comparative experiment to verify that the classification based on a sentence-pair is better than the single-sentence classification with fine-tuned BERT, which means that the improvement is not only from BERT but also from our method. In particular, our contribution is two-fold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a new solution of (T)ABSA by converting it to a sentence-pair classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on Senti-Hood and SemEval-2014 Task 4 datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we describe our method in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "TABSA In TABSA, a sentence s usually consists of a series of words: Saeidi et al. (2016) , we set the task as a 3class classification problem: given the sentence s, a set of target entities T and a fixed aspect set A = {general, price, transitlocation, saf ety}, predict the sentiment polarity y \u2208 {positive, negative, none} over the full set of the target-aspect pairs {(t, a) : t \u2208 T, a \u2208 A}. As we can see in Table 1 , the gold standard polarity of (LOCATION2, price) is negative, while the polarity of (LOCATION1, price) is none.",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "Saeidi et al. (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Task description",
"sec_num": "2.1"
},
{
"text": "{w 1 , \u2022 \u2022 \u2022 , w m }, and some of the words {w i 1 , \u2022 \u2022 \u2022 , w i k } are pre- identified targets {t 1 , \u2022 \u2022 \u2022 , t k }, following",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "2.1"
},
{
"text": "ABSA In ABSA, the target-aspect pairs {(t, a)} become only aspects a. This setting is equivalent to learning subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity) of SemEval-2014 Task 4 1 at the same time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "2.1"
},
{
"text": "For simplicity, we mainly describe our method with TABSA as an example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of the auxiliary sentence",
"sec_num": "2.2"
},
{
"text": "We consider the following four methods to convert the TABSA task into a sentence pair classification task:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of the auxiliary sentence",
"sec_num": "2.2"
},
{
"text": "1 http://alt.qcri.org/semeval2014/task4/ Example: LOCATION2 is central London so extremely expensive, LOCATION1 is often considered the coolest area of London. Sentences for QA-M The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same. For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is \"what do you think of the safety of location -1 ?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of the auxiliary sentence",
"sec_num": "2.2"
},
{
"text": "Sentences for NLI-M For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler. The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCA-TION1, safety) pair as an example: the auxiliary sentence is: \"location -1 -safety\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of the auxiliary sentence",
"sec_num": "2.2"
},
{
"text": "Sentences for QA-B For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem (label \u2208 {yes, no}) to obtain the probability distribution. At this time, each target-aspect pair will generate three sequences such as \"the polarity of the aspect safety of location -1 is positive\", \"the polarity of the aspect safety of location -1 is negative\", \"the polarity of the aspect safety of location -1 is none\". We use the probabil-ity value of yes as the matching score. For a target-aspect pair which generates three sequences (positive, negative, none), we take the class of the sequence with the highest matching score for the predicted category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of the auxiliary sentence",
"sec_num": "2.2"
},
{
"text": "Sentences for NLI-B The difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence. The auxiliary sentences are: \"location -1 -safety -positive\", \"location -1 -safety -negative\", and \"location -1 -safety -none\". After we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task. As shown in Table 3 , this is a necessary operation that can significantly improve the experimental results of the TABSA task.",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Construction of the auxiliary sentence",
"sec_num": "2.2"
},
{
"text": "BERT (Devlin et al., 2018 ) is a new language representation model, which uses bidirectional transformers to pre-train a large corpus, and fine-tunes the pre-trained model on other tasks. We finetune the pre-trained BERT model on TABSA task. Let's take a brief look at the input representation and the fine-tuning procedure.",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning pre-trained BERT",
"sec_num": "2.3"
},
{
"text": "The input representation of the BERT can explicitly represent a pair of text sentences in a sequence of tokens. For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. For classification tasks, the first word of each sequence is a unique classification embedding ([CLS]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input representation",
"sec_num": "2.3.1"
},
{
"text": "BERT fine-tuning is straightforward. To obtain a fixed-dimensional pooled representation of the input sequence, we use the final hidden state (i.e., the output of the transformer) of the first token as the input. We denote the vector as C \u2208 R H . Then we add a classification layer whose parameter matrix is W \u2208 R K\u00d7H , where K is the number of categories. Finally, the probability of each category P is calculated by the softmax function P = softmax(CW T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning procedure",
"sec_num": "2.3.2"
},
{
"text": "BERT-single for (T)ABSA BERT for single sentence classification tasks. Suppose the number of target categories are n t and aspect categories are n a . We consider TABSA as a combination of n t \u2022 n a target-aspect-related sentiment classification problems, first classifying each sentiment classification problem, and then summarizing the results obtained. For ABSA, We fine-tune pretrained BERT model to train n a classifiers for all aspects and then summarize the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-single and BERT-pair",
"sec_num": "2.3.3"
},
{
"text": "BERT-pair for (T)ABSA BERT for sentence pair classification tasks. Based on the auxiliary sentence constructed in Section 2.2, we use the sentence-pair classification approach to solve (T)ABSA. Corresponding to the four ways of constructing sentences, we name the models: BERTpair-QA-M, BERT-pair-NLI-M, BERT-pair-QA-B, and BERT-pair-NLI-B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-single and BERT-pair",
"sec_num": "2.3.3"
},
{
"text": "We evaluate our method on the SentiHood (Saeidi et al., 2016 ) dataset 2 , which consists of 5,215 sentences, 3,862 of which contain a single target, and the remainder multiple targets. Each sentence contains a list of target-aspect pairs {t, a} with the sentiment polarity y. Ultimately, given a sentence s and the target t in the sentence, we need to:",
"cite_spans": [
{
"start": 40,
"end": 60,
"text": "(Saeidi et al., 2016",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "(1) detect the mention of an aspect a for the target t;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "(2) determine the positive or negative sentiment polarity y for detected target-aspect pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We also evaluate our method on SemEval-2014 Task 4 (Pontiki et al., 2014) dataset 3 for aspectbased sentiment analysis. The only difference from the SentiHood is that the target-aspect pairs {t, a} become only aspects a. This setting allows us to jointly evaluate subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We use the pre-trained uncased BERT-base model 4 for fine-tuning. The number of Transformer blocks is 12, the hidden layer size is 768, the number of self-attention heads is 12, and the total number of parameters for the pretrained model is 110M. When fine-tuning, we keep the dropout probability at 0.1, set the number of Model Aspect Sentiment Acc. F 1 AUC Acc. AUC LR (Saeidi et al., 2016 ) -39.3 92.4 87.5 90.5 LSTM-Final (Saeidi et al., 2016) -68.9 89.8 82.0 85.4 LSTM-Loc (Saeidi et al., 2016) -69.3 89.7 81.9 83.9 LSTM+TA+SA (Ma et al., 2018) 66.4 76.7 -86.8 -SenticLSTM (Ma et al., 2018) 67.4 78.2 -89.3 -Dmu-Entnet (Liu et al., 2018) 73 Table 3 : Performance on SentiHood dataset. We boldface the score with the best performance across all models. We use the results reported in Saeidi et al. (2016) , Ma et al. (2018) and Liu et al. (2018) . \"-\" means not reported.",
"cite_spans": [
{
"start": 371,
"end": 391,
"text": "(Saeidi et al., 2016",
"ref_id": "BIBREF14"
},
{
"start": 426,
"end": 447,
"text": "(Saeidi et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 478,
"end": 499,
"text": "(Saeidi et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 532,
"end": 549,
"text": "(Ma et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 578,
"end": 595,
"text": "(Ma et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 624,
"end": 642,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 788,
"end": 808,
"text": "Saeidi et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 811,
"end": 827,
"text": "Ma et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 832,
"end": 849,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 646,
"end": 653,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "3.2"
},
{
"text": "epochs to 4. The initial learning rate is 2e-5, and the batch size is 24.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "3.2"
},
{
"text": "We compare our model with the following models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "\u2022 LR (Saeidi et al., 2016) : a logistic regression classifier with n-gram and pos-tag features.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Saeidi et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "\u2022 LSTM-Final (Saeidi et al., 2016 ): a biLSTM model with the final state as a representation.",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(Saeidi et al., 2016",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "\u2022 LSTM-Loc (Saeidi et al., 2016 ): a biLSTM model with the state associated with the target position as a representation.",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Saeidi et al., 2016",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "\u2022 LSTM+TA+SA (Ma et al., 2018 ): a biLSTM model which introduces complex target-level and sentence-level attention mechanisms.",
"cite_spans": [
{
"start": 13,
"end": 29,
"text": "(Ma et al., 2018",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "\u2022 SenticLSTM (Ma et al., 2018) : an upgraded version of the LSTM+TA+SA model which introduces external information from Sentic-Net (Cambria et al., 2016) .",
"cite_spans": [
{
"start": 13,
"end": 30,
"text": "(Ma et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 131,
"end": 153,
"text": "(Cambria et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "\u2022 Dmu-Entnet (Liu et al., 2018) : a bidirectional EntNet (Henaff et al., 2016) with external \"memory chains\" with a delayed memory update mechanism to track entities.",
"cite_spans": [
{
"start": 13,
"end": 31,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 57,
"end": 78,
"text": "(Henaff et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "During the evaluation of SentiHood, following Saeidi et al. (2016) , we only consider the four most frequently seen aspects (general, price, transitlocation, safety). When evaluating the aspect detection, following Ma et al. (2018) , we use strict accuracy and Macro-F1, and we also report AUC.",
"cite_spans": [
{
"start": 46,
"end": 66,
"text": "Saeidi et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 215,
"end": 231,
"text": "Ma et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "In sentiment classification, we use accuracy and macro-average AUC as the evaluation indices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-I: TABSA",
"sec_num": "3.3"
},
{
"text": "Results on SentiHood are presented in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.1"
},
{
"text": "The results of the BERT-single model on aspect detection are better than Dmu-Entnet, but the accuracy of sentiment classification is much lower than that of both SenticLstm and Dmu-Entnet, with a difference of 3.8 and 5.5 respectively. However, BERT-pair outperforms other models on aspect detection and sentiment analysis by a substantial margin, obtaining 9.4 macro-average F1 and 2.6 accuracies improvement over Dmu-Entnet. Overall, the performance of the four BERT-pair models is close. It is worth noting that BERT-pair-NLI models perform relatively better on aspect detection, while BERT-pair-QA models perform better on sentiment classification. Also, the BERT-pair-QA-B and BERT-pair-NLI-B models can achieve better AUC values on sentiment classification than the other models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.1"
},
{
"text": "The benchmarks for SemEval-2014 Task 4 are the two best performing systems in Pontiki et al. (2014) and ATAE-LSTM (Wang et al., 2016) . When evaluating SemEval-2014 Task 4 subtask 3 and subtask 4, following Pontiki et al. (2014) , we use Micro-F1 and accuracy respectively.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "Pontiki et al. (2014)",
"ref_id": "BIBREF12"
},
{
"start": 114,
"end": 133,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 207,
"end": 228,
"text": "Pontiki et al. (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp-II: ABSA",
"sec_num": "3.4"
},
{
"text": "Results on SemEval-2014 are presented in Table 4 and Table 5 . We find that BERT-single Table 4 : Test set results for Semeval-2014 task 4 Subtask 3: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014) and NRC-Canada (Kiritchenko et al., 2014) . Table 5 : Test set accuracy (%) for Semeval-2014 task 4 Subtask 4: Aspect Category Polarity. We use the results reported in XRCE (Brun et al., 2014) , NRC-Canada (Kiritchenko et al., 2014) and ATAE-LSTM (Wang et al., 2016) . \"-\" means not reported.",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "(Brun et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 248,
"end": 274,
"text": "(Kiritchenko et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 406,
"end": 425,
"text": "(Brun et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 439,
"end": 465,
"text": "(Kiritchenko et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 480,
"end": 499,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 41,
"end": 60,
"text": "Table 4 and Table 5",
"ref_id": null
},
{
"start": 88,
"end": 95,
"text": "Table 4",
"ref_id": null
},
{
"start": 277,
"end": 284,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4.1"
},
{
"text": "has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERTpair-QA-B performs best on all 4-way, 3-way, and binary settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4.1"
},
{
"text": "Why is the experimental result of the BERT-pair model so much better? On the one hand, we convert the target and aspect information into an auxiliary sentence, which is equivalent to exponentially expanding the corpus. A sentence s i in the original data set will be expanded into",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "(s i , t 1 , a 1 ), \u2022 \u2022 \u2022 , (s i , t 1 , a na ), \u2022 \u2022 \u2022 , (s i , t nt , a na )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "in the sentence pair classification task. On the other hand, it can be seen from the amazing improvement of the BERT model on the QA and NLI tasks (Devlin et al., 2018 ) that the BERT model has an advantage in dealing with sentence pair classification tasks. This advantage comes from both unsupervised masked language model and next sentence prediction tasks. TABSA is more complicated than SA due to additional target and aspect information. Directly fine-tuning the pre-trained BERT on TABSA does not achieve performance growth. However, when we separate the target and the aspect to form an auxiliary sentence and transform the TABSA into a sentence pair classification task, the scenario is similar to QA and NLI, and then the advantage of the pre-trained BERT model can be fully utilized. Our approach is not limited to TABSA, and this construction method can be used for other similar tasks. For ABSA, we can use the same approach to construct the auxiliary sentence with only aspects.",
"cite_spans": [
{
"start": 147,
"end": 167,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In BERT-pair models, BERT-pair-QA-B and BERT-pair-NLI-B achieve better AUC values on sentiment classification, probably because of the modeling of label information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In this paper, we constructed an auxiliary sentence to transform (T)ABSA from a single sentence classification task to a sentence pair classification task. We fine-tuned the pre-trained BERT model on the sentence pair classification task and obtained the new state-of-the-art results. We compared the experimental results of single sentence classification and sentence pair classification based on BERT fine-tuning, analyzed the advantages of sentence pair classification, and verified the validity of our conversion method. In the future, we will apply this conversion method to other similar tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Dataset mirror: https://github.com/uclmr/jack/tree/master /data/sentihood 3 http://alt.qcri.org/semeval2014/task4/ 4 https://storage.googleapis.com/bert models/2018 10 18/ uncased L-12 H-768 A-12.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by Shanghai Municipal Science and Technology Commission (No. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Xrce: Hybrid classification for aspect-based sentiment analysis",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Brun",
"suffix": ""
},
{
"first": "Diana",
"middle": [
"Nicoleta"
],
"last": "Popa",
"suffix": ""
},
{
"first": "Claude",
"middle": [],
"last": "Roux",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "838--842",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Brun, Diana Nicoleta Popa, and Claude Roux. 2014. Xrce: Hybrid classification for aspect-based sentiment analysis. In Proceedings of the 8th In- ternational Workshop on Semantic Evaluation (Se- mEval 2014), pages 838-842.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Senticnet 4: A semantic resource for sentiment analysis based on conceptual primitives",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Bajpai",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2666--2677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria, Soujanya Poria, Rajiv Bajpai, and Bj\u00f6rn Schuller. 2016. Senticnet 4: A semantic resource for sentiment analysis based on conceptual primitives. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 2666-2677.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tracking the world state with recurrent entity networks",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Henaff",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03969"
]
},
"num": null,
"urls": [],
"raw_text": "Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Aspect and sentiment unification model for online review analysis",
"authors": [
{
"first": "Yohan",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Alice",
"middle": [
"H"
],
"last": "Oh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the fourth ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "815--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yohan Jo and Alice H Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 815-824. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Nrc-canada-2014: Detecting aspects and sentiment in customer reviews",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "437--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif Mohammad. 2014. Nrc-canada-2014: Detecting aspects and sentiment in customer reviews. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 437-442.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Recurrent entity networks with delayed memory update for targeted aspect-based sentiment analysis",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.11019"
]
},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018. Recurrent entity networks with delayed memory update for targeted aspect-based sentiment analysis. arXiv preprint arXiv:1804.11019.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive lstm",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Haiyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Ma, Haiyun Peng, and Erik Cambria. 2018. Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive lstm. In Proceedings of AAAI.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis",
"authors": [
{
"first": "Thien",
"middle": [
"Hai"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Kiyoaki",
"middle": [],
"last": "Shirai",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2509--2514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Hai Nguyen and Kiyoaki Shirai. 2015. Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2509-2514.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semeval-2016 task 5: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Al-Smadi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Orph\u00e9e",
"middle": [],
"last": "De Clercq",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "19--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, AL-Smadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pages 19-30.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semeval-2015 task 12: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "486--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semeval-2014 task 4: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Harris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sentihood: targeted aspect based sentiment analysis dataset for urban neighbourhoods",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Saeidi",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Bouchard",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Marzieh Saeidi, Guillaume Bouchard, Maria Liakata, and Sebastian Riedel. 2016. Sentihood: targeted aspect based sentiment analysis dataset for urban neighbourhoods. arXiv preprint arXiv:1610.03771.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Effective lstms for targetdependent sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.01100"
]
},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2015. Effective lstms for target-dependent sentiment classification. arXiv preprint arXiv:1512.01100.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Aspect level sentiment classification with deep memory network",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.08900"
]
},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. arXiv preprint arXiv:1605.08900.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dcu: Aspect-based polarity classification for semeval task 4",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Utsab",
"middle": [],
"last": "Barman",
"suffix": ""
},
{
"first": "Dasha",
"middle": [],
"last": "Bogdanova",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Lamia",
"middle": [],
"last": "Tounsi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "223--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Wagner, Piyush Arora, Santiago Cortes, Utsab Barman, Dasha Bogdanova, Jennifer Foster, and Lamia Tounsi. 2014. Dcu: Aspect-based polarity classification for semeval task 4. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 223-229.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Tdparse: Multi-target-specific sentiment recognition on twitter",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "483--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Wang, Maria Liakata, Arkaitz Zubiaga, and Rob Procter. 2017. Tdparse: Multi-target-specific sentiment recognition on twitter. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 483-493.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention-based lstm for aspect-level sentiment classification",
"authors": [
{
"first": "Yequan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yequan Wang, Minlie Huang, Li Zhao, et al. 2016. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606-615.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Methods Output Auxiliary Sentence</td></tr><tr><td>QA-M</td><td>S.P.</td><td>Question w/o S.P.</td></tr><tr><td>NLI-M</td><td>S.P.</td><td>Pseudo w/o S.P.</td></tr><tr><td>QA-B</td><td>{yes,no}</td><td>Question w/ S.P.</td></tr><tr><td>NLI-B</td><td>{yes,no}</td><td>Pseudo w/ S.P.</td></tr></table>",
"html": null,
"num": null,
"text": "An example of SentiHood dataset."
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
}
}
}
}