{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:24.516321Z"
},
"title": "ZYJ@LT-EDI-EACL2021:XLM-RoBERTa-Based Model with Attention for Hope Speech Detection",
"authors": [
{
"first": "Yingjia",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University/ Yunnan",
"location": {
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University / Yunnan",
"location": {
"country": "P.R. China"
}
},
"email": "taoxinwy@126.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Due to the development of modern computer technology and the increase in the number of online media users, we can see all kinds of posts and comments everywhere on the internet. Hope speech can not only inspire the creators but also make other viewers pleasant. It is necessary to effectively and automatically detect hope speech. This paper describes the approach of our team in the task of hope speech detection. We use the attention mechanism to adjust the weight of all the output layers of XLM-RoBERTa to make full use of the information extracted from each layer, and use the weighted sum of all the output layers to complete the classification task. And we use the Stratified-K-Fold method to enhance the training data set. We achieve a weighted average F1-score of 0.59, 0.84, and 0.92 for Tamil, Malayalam, and English language, ranked 3rd, 2nd, and 2nd.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Due to the development of modern computer technology and the increase in the number of online media users, we can see all kinds of posts and comments everywhere on the internet. Hope speech can not only inspire the creators but also make other viewers pleasant. It is necessary to effectively and automatically detect hope speech. This paper describes the approach of our team in the task of hope speech detection. We use the attention mechanism to adjust the weight of all the output layers of XLM-RoBERTa to make full use of the information extracted from each layer, and use the weighted sum of all the output layers to complete the classification task. And we use the Stratified-K-Fold method to enhance the training data set. We achieve a weighted average F1-score of 0.59, 0.84, and 0.92 for Tamil, Malayalam, and English language, ranked 3rd, 2nd, and 2nd.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the development of network media, people can express their opinions on the Internet at any time, among which there will be some hope speech. Hope speech will encourage people to stand firm in their beliefs and follow the path of their goals. At the same time, comments or posts containing hope speech can often contain factors such as Equality, Diversity, and Inclusion. They can also provide support, reassurance, suggestions, inspiration, and insight, etc., which can change people's emotions from negative to positive. The proportion of comments and posts on media platforms that contain hopeful statements can reflect the current emotional tendencies of online communities. As the proportion of words containing hope increases, the corresponding posts conveying negative emotions such as pessimism and despair will decrease, thus promoting social harmony. Because the net-work media platform needs to support multiple languages and faces users with different backgrounds, the multi-lingual, multi-modal, and non-standard writing style of online media posts makes it a very challenging task to effectively detect whether the comment or post belongs to the hope speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our team participated the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI 2021 -EACL 2021 (Chakravarthi, 2020 Chakravarthi and Muralidaran, 2021) . The goal of this task is to identify whether a given comment contains hope speech or not. This is a comment/post level classification task. The task is divided into three sub-tasks based on three languages, namely English, Tamil, and Malayalam. Given a Youtube comment, the systems submitted by the participants should classify it into 'Hope speech', 'Not hope speech' and 'Not in intended language'.",
"cite_spans": [
{
"start": 105,
"end": 116,
"text": "LT-EDI 2021",
"ref_id": null
},
{
"start": 117,
"end": 127,
"text": "-EACL 2021",
"ref_id": null
},
{
"start": 128,
"end": 147,
"text": "(Chakravarthi, 2020",
"ref_id": "BIBREF5"
},
{
"start": 148,
"end": 183,
"text": "Chakravarthi and Muralidaran, 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this task, we use XLM-RoBERTa as the base model to extract features and combine 12 layers of output with an embedding layer to do the classification task. In order to train the weight of each layer, we add an attention mechanism for the model. During the training, we process the training data through the Stratified-K-Fold method, each data set is trained to get the different weights, and finally, the best-predicted value is obtained through a voting mechanism. The rest of our paper is structured as follows. Section 2 describes the related work. Methods are described in Section 3. Experiments and results are described in Section 4. The conclusion is drawn in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While hope speech detection is a relatively new task, it can be categorized as a problem of sentiment analysis (Bakshi et al., 2016) and text classification. A great deal of work has been done by many researchers and practitioners from in-dustry and academia. Turney (2002)proposed a simple unsupervised learning (Barlow, 1989) algorithm for classifying reviews as recommended or not recommended. Basiri et al. 2020proposed an Attention-based Bidirectional CNN-RNN Deep Model (ABCDM) for polarity classification of long comments and short tweets, two independent bidirectional LSTM and GRU layers are used to extract past and future context by considering temporal information flow in both directions. Chen and Ke proposed a convolutional neural network-regional long short-term memory (CNN-RLSTM) that combines CNN and regional LSTM. At the sentence level (Kim and Hovy, 2004), the emotional information of the whole sentence is retained through CNN network, and through regional LSTM effectively distinguish between different target emotional polarity. Chen et al. proposed a multi-head attentionmemory network (MNHMA) based on hierarchy for aspect-based sentiment analysis, which makes full use of the long-term semantic relationship of a given aspect term (Manandhar, 2014) in a sentence to reduce the loss of aspect information. Wang et al. (2014) demonstrated the effectiveness of ensemble learning in sentiment analysis on ten publicly available datasets. Akhtar et al. 2017proposed a cascade framework based on feature selection and classifier ensemble (Avidan, 2007) . Particle swarm optimization algorithm is used for feature-based sentiment analysis. Because of the powerful performance of BERT (Devlin et al., 2018) , it is natural to use it for sentiment analysis. Malhotra et al. proposed an efficacious transfer learning based ensemble model for sentiment analysis.",
"cite_spans": [
{
"start": 1260,
"end": 1277,
"text": "(Manandhar, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 1334,
"end": 1352,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF16"
},
{
"start": 1561,
"end": 1575,
"text": "(Avidan, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 1706,
"end": 1727,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We make a statistic based on the data set provided by the organizer. We count the proportions of 'hope speech', 'not hope speech', and 'not in integrated language' in the training set and verification set. To make an intuitive observation, the statistical situation of three language datasets, English, Tamil, and Malayalam, are presented in one single table. As shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 372,
"end": 379,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data description",
"sec_num": "3.1"
},
{
"text": "In addition to the English dataset, this shared task also involves the training and classification of lowresource languages such as Tamil, and Malayalam. Therefore, XLM-RoBERTa, a pretraining enhancement model based on multiple languages, is undoubtedly the best choice. To further improve the performance of the model, we add a specific network structure based on XLM-RoBERTa. Since the output of each layer of XLM-RoBERTa contains different information, we combine the 12 layers of its output with the embedding layer to do the classification task. To adjust the weight of each layer, the attention mechanism is used to train the weight of each layer. Finally, the weighted sum of all layers is input into the classifier for the classification task, as shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 764,
"end": 772,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "XLM-RoBERTa with attention",
"sec_num": "3.2"
},
{
"text": "Since many words and symbols on social media are useless and sentence expression is not standardized enough, in order to extract useful features more effectively and reduce interference during training the model, we need to clean the text in the training data set with NLTK (Loper and Bird, 2002) tool. We perform the following preprocessing steps.",
"cite_spans": [
{
"start": 274,
"end": 296,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1"
},
{
"text": "\u2022 Remove Emoticons. Text data contains a lot of emoticons. Emoticons increase the number of unknown words and reduce training effects. Emoticons are replaced by emotional words with their own meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1"
},
{
"text": "\u2022 Convert Abbreviations. All abbreviations are converted to complete parts. For English datasets, this helps the machine understand the meaning of words (e.g. \"there's\" becomes 'there' and 'are').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1"
},
{
"text": "\u2022 Remove Stop Word. For English datasets, the stop words are meaningless words, such as' is' /' our '/' the' /' in '/'at' etc. They don't add much meaning to the sentence. Single stop words are very frequent words. To reduce the number of vocabularies we have to deal with, and thus reduce the complexity of subsequent programs, we need to clear out stop words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1"
},
{
"text": "\u2022 Stemming and Lemmatization. For English datasets, to further simplify the textual data, we can standardize the different variations and inflections of words. Stemming extraction is the process of reducing a word to its stem or root. For example 'branching'/' branched '/' branches', can all be reduced to 'branch'. This helps reduce complexity while retaining the basic meaning of the word. For example, the suffixes 'ing' and 'ed'can be discarded;'ies' can be replaced by 'y' and so on. This may result in an incomplete word stem, but only as long as all forms of the word are reduced to the same stem. So they all have the same fundamental meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "4.1"
},
{
"text": "Before training, we use the Stratified-K-fold method to process the dataset. The core of Stratified-K-Fold is to ensure stratified sampling. It needs to ensure that the ratio of all kinds of samples in test sets and training sets is the same as in the original data set, namely k-fold crossover segmentation. In the initial training set, each type of data is divided into k sub samples, one single subsample is reserved as the data of validation model, and the maximum sequence length learning rate 256 2e-5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stratified-K-Fold method",
"sec_num": "4.2"
},
{
"text": "Stratified-K-Fold number batch size 5 4 Table 2 : Details of the parameters other k-1 samples are used for training. Finally, we get the final output from k results based on the voting (Onan et al., 2016) method.",
"cite_spans": [
{
"start": 185,
"end": 204,
"text": "(Onan et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stratified-K-Fold method",
"sec_num": "4.2"
},
{
"text": "In this experiment, XLM-RoBERTa-base pretrained model is used. On the basis of the pretraining model, we add the attention mechanism to adjust the structure of the whole model. Then we began to train the model. The batch size is set to 4, the max sequence length is set to 256, the learning rate is set to 2e-5, and the Stratified-K-Fold number is set to 5. As shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 370,
"end": 377,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment setting",
"sec_num": "4.3"
},
{
"text": "The results submitted by teams are evaluated with a weighted average F1-score. According to the leaderboard provided by the organizer, our team's F1-score is 0.59, ranked 3rd place for the Tamil language. For the Malayalam language, our team's F1-score is 0.84 ranked 2nd place, and for the English language, our team's F1-score is 0.92 ranked 2nd place. As shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Best F1-score Our Precision Our Recall Our F1-score Rank Table 3 5 Conclusion",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "In this paper, we present our work in the task of hope speech detection for Tamil, Malayalam, and English language. Our team uses a model based on XLM-RoBERTa. This model uses an attention mechanism to adjust the weight of each output layer, makes full use of the information extracted from each layer to complete the classification task, and the Stratified-K-Fold method is used to enhance the training data. Finally, the model we submit achieves significant performance for all three languages. For future work, we will further optimize our model to achieve better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Feature selection and ensemble construction: A two-step method for aspect based sentiment analysis. Knowledge-Based Systems",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Md Shad Akhtar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "125",
"issue": "",
"pages": "116--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Shad Akhtar, Deepak Gupta, Asif Ekbal, and Push- pak Bhattacharyya. 2017. Feature selection and en- semble construction: A two-step method for aspect based sentiment analysis. Knowledge-Based Sys- tems, 125:116-135.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ensemble tracking. IEEE transactions on pattern analysis and machine intelligence",
"authors": [
{
"first": "",
"middle": [],
"last": "Shai Avidan",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "29",
"issue": "",
"pages": "261--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shai Avidan. 2007. Ensemble tracking. IEEE transac- tions on pattern analysis and machine intelligence, 29(2):261-271.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "Navneet",
"middle": [],
"last": "Rushlene Kaur Bakshi",
"suffix": ""
},
{
"first": "Ravneet",
"middle": [],
"last": "Kaur",
"suffix": ""
},
{
"first": "Gurpreet",
"middle": [],
"last": "Kaur",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kaur",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 3rd international conference on computing for sustainable global development (INDIACom)",
"volume": "",
"issue": "",
"pages": "452--455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rushlene Kaur Bakshi, Navneet Kaur, Ravneet Kaur, and Gurpreet Kaur. 2016. Opinion mining and sen- timent analysis. In 2016 3rd international confer- ence on computing for sustainable global develop- ment (INDIACom), pages 452-455. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised learning",
"authors": [
{
"first": "B",
"middle": [],
"last": "Horace",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barlow",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural computation",
"volume": "1",
"issue": "3",
"pages": "295--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horace B Barlow. 1989. Unsupervised learning. Neu- ral computation, 1(3):295-311.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Abcdm: An attention-based bidirectional cnn-rnn deep model for sentiment analysis",
"authors": [
{
"first": "Shahla",
"middle": [],
"last": "Mohammad Ehsan Basiri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nemati",
"suffix": ""
}
],
"year": 2020,
"venue": "Future Generation Computer Systems",
"volume": "115",
"issue": "",
"pages": "279--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Ehsan Basiri, Shahla Nemati, Moloud Ab- dar, Erik Cambria, and U Rajendra Acharya. 2020. Abcdm: An attention-based bidirectional cnn-rnn deep model for sentiment analysis. Future Gener- ation Computer Systems, 115:279-294.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion",
"authors": [
{
"first": "Chakravarthi",
"middle": [],
"last": "Bharathi Raja",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
"volume": "",
"issue": "",
"pages": "41--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi. 2020. HopeEDI: A mul- tilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Me- dia, pages 41-53, Barcelona, Spain (Online). Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion",
"authors": [
{
"first": "Vigneshwaran",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Muralidaran",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Mural- idaran. 2021. Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclu- sion. In Proceedings of the First Workshop on Lan- guage Technology for Equality, Diversity and Inclu- sion. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A hierarchical neural model for target-based sentiment analysis. Concurrency and Computation: Practice and Experience",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wende",
"middle": [],
"last": "Ke",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Chen and Wende Ke. A hierarchical neural model for target-based sentiment analysis. Concurrency and Computation: Practice and Experience, page e6184.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Memory network with hierarchical multi-head attention for aspect-based sentiment analysis",
"authors": [
{
"first": "Yuzhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tianhao",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": null,
"venue": "Applied Intelligence",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuzhong Chen, Tianhao Zhuang, and Kun Guo. Mem- ory network with hierarchical multi-head attention for aspect-based sentiment analysis. Applied Intelli- gence, pages 1-18.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Determining the sentiment of opinions",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Soo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1367--1373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. In COLING 2004: Pro- ceedings of the 20th International Conference on Computational Linguistics, pages 1367-1373.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nltk: The natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. arXiv preprint cs/0205028.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bidirectional transfer learning model for sentiment analysis of natural language",
"authors": [
{
"first": "Shivani",
"middle": [],
"last": "Malhotra",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Alpana",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": null,
"venue": "Journal of Ambient Intelligence and Humanized Computing",
"volume": "",
"issue": "",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivani Malhotra, Vinay Kumar, and Alpana Agarwal. Bidirectional transfer learning model for sentiment analysis of natural language. Journal of Ambient In- telligence and Humanized Computing, pages 1-21.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semeval-2014 task 4: aspect based sentiment analysis",
"authors": [
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suresh Manandhar. 2014. Semeval-2014 task 4: aspect based sentiment analysis. In Proceedings of the 8th international workshop on semantic evaluation (Se- mEval 2014).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A multiobjective weighted voting ensemble classifier based on differential evolution algorithm for text sentiment classification",
"authors": [
{
"first": "Aytug",
"middle": [],
"last": "Onan",
"suffix": ""
},
{
"first": "Serdar",
"middle": [],
"last": "Korukoglu",
"suffix": ""
},
{
"first": "Hasan",
"middle": [],
"last": "Bulut",
"suffix": ""
}
],
"year": 2016,
"venue": "Expert Systems with Applications",
"volume": "62",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aytug Onan, Serdar Korukoglu, and Hasan Bulut. 2016. A multiobjective weighted voting ensemble classi- fier based on differential evolution algorithm for text sentiment classification. Expert Systems with Appli- cations, 62:1-16.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classi- fication of reviews. arXiv preprint cs/0212032.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentiment classification: The contribution of ensemble learning",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianshan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kaiquan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jibao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2014,
"venue": "Decision support systems",
"volume": "57",
"issue": "",
"pages": "77--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gang Wang, Jianshan Sun, Jian Ma, Kaiquan Xu, and Jibao Gu. 2014. Sentiment classification: The con- tribution of ensemble learning. Decision support systems, 57:77-93.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Schematic overview of the architecture of our model"
},
"TABREF1": {
"type_str": "table",
"text": "Train and Validation datasets description.",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}