| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:10:42.439812Z" |
| }, |
| "title": "Human Behavior Assessment using Ensemble Models", |
| "authors": [ |
| { |
| "first": "Abdullah", |
| "middle": [ |
| "Faiz", |
| "Ur" |
| ], |
| "last": "Rahman", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "abdullahug@cse" |
| }, |
| { |
| "first": "Rituparna", |
| "middle": [], |
| "last": "Khaund", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Institute of Technology Silchar Assam", |
| "location": { |
| "country": "India" |
| } |
| }, |
| "email": "rituparnaug@ee" |
| }, |
| { |
| "first": "Utkarsh", |
| "middle": [], |
| "last": "Sinha", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "utkarshug@cse.nits" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Behavioral analysis is a pertinent step in today's automated age. It is important to judge a statement on a variety of parameters before reaching a valid conclusion. In today's world of technology and automation, Natural language processing tools have benefited from growing access to data in order to analyze the context and scenario. A better understanding of human behaviors would empower a range of automated tools to provide users a customized experience. For precise analysis, behavior understanding is important. We have experimented with various machine learning techniques, and have obtained a maximum private score of 0.1033 with a public score of 0.1733. The methods are described as part of the ALTA 2020 shared task. In this work, we have enlisted our results and the challenges faced to solve the problem of the human behavior assessment.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Behavioral analysis is a pertinent step in today's automated age. It is important to judge a statement on a variety of parameters before reaching a valid conclusion. In today's world of technology and automation, Natural language processing tools have benefited from growing access to data in order to analyze the context and scenario. A better understanding of human behaviors would empower a range of automated tools to provide users a customized experience. For precise analysis, behavior understanding is important. We have experimented with various machine learning techniques, and have obtained a maximum private score of 0.1033 with a public score of 0.1733. The methods are described as part of the ALTA 2020 shared task. In this work, we have enlisted our results and the challenges faced to solve the problem of the human behavior assessment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Human behavior assessment is an important computation task that automates the task of detecting human behavior from textual data. The behavior in the text depends on many parameters. Some of these include words of different types including attitude and appraisal (Martin and White, 2003) . The use of evaluative language allows for a greater deal of solidarity in the text (Martin and White, 2005) . Various rule-based algorithms can be used to evaluate the essence of the sentence. The sentence judgment can be divided into two sections viz. social esteem and social sanction. The former comprises normality, capacity, and tenacity. Whereas the latter includes veracity and propriety. Sentence classes, their meaning, and sample explanation are included in Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 287, |
| "text": "(Martin and White, 2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 373, |
| "end": 397, |
| "text": "(Martin and White, 2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 758, |
| "end": 765, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Various approaches include natural language processing tools to extract the sentiment or detect human behavior from the text. The work by (Liu, 2012) , describes various aspects of the sentiment analysis and opinion mining problem. Since the above task belongs to the natural processing domain, it brings along various difficulties, including coreference resolution, negation handling among many (Bakshi et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 149, |
| "text": "(Liu, 2012)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 396, |
| "end": 417, |
| "text": "(Bakshi et al., 2016)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To classify the sentences into the above 5 classes, we have formulated the same into a machine learning multi-classification task. This paper investigates different approaches for the human behavior assessment, as part of the Australasian Language Technology Association (ALTA) 2020 shared task (Moll\u00e1, 2020) .", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 308, |
| "text": "(Moll\u00e1, 2020)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is divided as follows. The related works are enlisted in Section 2. The dataset description is given in Section 3. The experimental setup is given in Section 4. The experimentation details are described in Section 5. Results and analysis are tabulated in Section 6. Finally, we conclude with discussion and conclusion in Section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "On experimental investigation of the problem, we have found that the given problem closely resembles the multi-class human sentiment analysis such as the multi-class sentiment analysis using clustering and scoring (Farhadloo and Rolland, 2013) . The work by (Farhadloo and Rolland, 2013) , uses the semantic analysis and clustering on a bag of nouns to identify the class of the sentiments based on the textual description. Other works show the use of multi-class class SVM 1 (Lavanya and Deisy, 2017) which employs topic adaptive learning method to produce more generic and abstract based systems. There also exists machine learning systems that perform discourse analysis (Ote\u00edza, Normality \"How unusual one is.\" \"He is unfashionable.\" 2 Capacity \"How capable one is.\" \"The student is a child prodigy.\" 3 Tenacity \"How resolute one is.\" \"They are truthful and hardworking.\" 4 Social Sanction Veracity \"How honest/truthful one is.\"", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 243, |
| "text": "(Farhadloo and Rolland, 2013)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 258, |
| "end": 287, |
| "text": "(Farhadloo and Rolland, 2013)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 476, |
| "end": 501, |
| "text": "(Lavanya and Deisy, 2017)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 674, |
| "end": 682, |
| "text": "(Ote\u00edza,", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\"She is hard-working and truthful.\" 5 Propriety \"How ethical one is.\" \"He is too arrogant to learn form his mistakes.\" 2017) on the description to map out the sentiment. Some works also show that systems are performing better if there is a fusion of more than one architecture like that of the GME-LSTM(A) 2 (Chen et al., 2017) and (Prabowo and Thelwall, 2009) , which uses multi-phased architecture and thereby takes the advantage of those methods as well as the concept of word-level and fine level fusion techniques to surpass other state-of-the-art techniques.", |
| "cite_spans": [ |
| { |
| "start": 308, |
| "end": 327, |
| "text": "(Chen et al., 2017)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 332, |
| "end": 360, |
| "text": "(Prabowo and Thelwall, 2009)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As a part of this experimentation, we have used ensemble models to tackle different aspects of the problem. Starting from the XLNet Pretraining as given in Section 4.1 to decision tree classifier is discussed in Section 4.4 and up to XGBoost (in Section 4.5). These were used in different phases of feature generation, multi-class classification, analysis, and validation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The labeled dataset 3 for the ALTA 2020 shared task was provided by the organizers. The dataset included single, multiple, or no labels for a single sentence as the output label. The train data contains a total of 200 instances of labeled data, whereas the test set contains 100 instances. The dataset provided was based on the Semeval 2018 AIT DISC dataset 4 (Mohammad et al., 2018). For the purpose of experimentation, we have worked with both sets of data, with and without preprocessing. Preprocessing steps include removal of punctuation and stop words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Since the data provided to us by the organizers is quite small as discussed in Section 3, we employed the use of machine learning techniques instead of data craving deep learning methods. For the word embeddings, we have experimented with the XL-Net (Yang et al., 2019) pre-trained embeddings and the freely available spaCy 5 word embeddings.", |
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 269, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "XLNet (Yang et al., 2019) is an efficient pretraining method in comparison to the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), due to the various improvements in the model. XLNet is a pretraining method based on generalized autoregressors, that learns bidirectional context information. The autoregressive nature overcomes the deficit of the BERT model. We have used the pretrained XLNet model as provided by spaCy and used the generated vectors for the downward classification tasks.", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 25, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "XLNet Pretraining", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Here, we have used the en core web lg model as provided by spaCy. The sentence vectors generated by the model is used directly for the multiclassification step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SpaCy Pretraining", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Polynomial features are obtained by raising exponential powers to the existing set of features (James et al., 2013) . It can also be termed as a feature engineering task, wherein new inputs are generated based on the current set of inputs. For our experimentation, we have experimented with polynomial features of various degrees. ", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 115, |
| "text": "(James et al., 2013)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polynomial Features", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Decision Tree (Swain and Hauska, 1977 ) is a machine learning technique based on the supervised approach. This algorithm is commonly used for both classification and regression tasks. It formulates the task as a graphical structure, wherein the features are represented as the internal nodes. The rules are represented by the tree branches. Finally, the outcome of the tree is given by the leaf.", |
| "cite_spans": [ |
| { |
| "start": 14, |
| "end": 37, |
| "text": "(Swain and Hauska, 1977", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Tree Classifier", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Extreme Gradient Boosting (XGBoost) (Chen and Guestrin, 2016) , is a scalable algorithm frequently obtaining state-of-the-art results in many machine learning tasks with limited dataset size. The given algorithm is a combined model of decision trees, which uses copies of itself to improve the model performance and minimizes error. It is an efficient version of the well known stochastic gradient boosting algorithm.", |
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 61, |
| "text": "(Chen and Guestrin, 2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "XGBoost", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "As discussed in Section 4, we have used various machine learning techniques for the given multiclassification problem and have used feature vectors generated from different deep learning approaches as discussed in Sections 4.1 and 4.2. The generated sentence vectors of each sentence are fixed to a length of 300. For reasons attributed to computational cost and efficiency, we have used polynomial features of degree 2 in our experiments. The results obtained using different approaches are tabulated in Table 3 . Table 3 is sorted based on the private score as provided by the organizers. We have experimented with various approaches, an overview of which is given in Section 4. We have also, experimented with our ensemble model having polynomial features with degree 2 trained on a decision tree classifier. This ensemble model has experimented been on both XLNet and spaCy word embeddings. The model incorporating the use of XGBoost has also been used. Various other approaches are employed and the obtained score is tabulated in Table 3 . ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 505, |
| "end": 512, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 515, |
| "end": 522, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1035, |
| "end": 1042, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimentation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The result from the experimentation, as discussed in Section 5 are tabulated in Section 3. As we can see from Table 3 , the highest score of 0.1033 on the private dataset is using the XGBoost approach with pretrained spaCy embeddings. The highest score of 0.2200 on the public leaderboard is using a decision tree classifier with polynomial features of degree 2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 110, |
| "end": 117, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In our work, we have worked with various deep learning algorithms and fusion techniques to study and investigate human behavior. We have also set up the analogy between the human sentiment analysis and behavior in Section 2. We have also trained our system based on various architectures and the best results can be referred to in Section 2. As the dataset size was not so significant, the system is not trained on complex deep learning-based architectures. From Table 2 we can see that the first three predictions go with the original analysis and the last three contradicts the original interpretation, we can also see that the actual output contains more than one class (as shown in Table 2 ), our analysis engine can replicate the same, as can be seen from Table 2 , but since the textual description was so short, the system was not able to properly analyze and map it with the output. Thus, from the above observations, we can infer that a less complex framework can sometimes perform better than complex architecture, moreover, if the dataset size would be significantly more, then a more complex architecture could have been devised and incorporated. The semantic analysis could have been carried out using those datasets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 463, |
| "end": 470, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 686, |
| "end": 693, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 761, |
| "end": 768, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Future works can involve a rule-based approach for the same problem statement. Such an approach would be able to provide much better results even on a smaller dataset. Various techniques could be used to improve on the dataset size, and a deep learning architecture can be developed to cater to the same.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "https://scikit-learn.org/stable/ modules/svm.html/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Gated Multimodal Embedding LSTM with Temporal Attention 3 https://www.kaggle.com/c/ alta-2020-challenge/data 4 https://competitions.codalab. org/competitions/17751#learn_the_ details-datasets", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://spacy.io/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors are thankful to the organizers of the ALTA 2020 shared task for organizing the event and Kaggle InClass for hosting the competition. Special thanks to Dr. Diego Moll\u00e1-Aliod for his support.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgment", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Opinion mining and sentiment analysis", |
| "authors": [ |
| { |
| "first": "Rushlene", |
| "middle": [ |
| "Kaur" |
| ], |
| "last": "Bakshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Navneet", |
| "middle": [], |
| "last": "Kaur", |
| "suffix": "" |
| }, |
| { |
| "first": "Ravneet", |
| "middle": [], |
| "last": "Kaur", |
| "suffix": "" |
| }, |
| { |
| "first": "Gurpreet", |
| "middle": [], |
| "last": "Kaur", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom)", |
| "volume": "", |
| "issue": "", |
| "pages": "452--455", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rushlene Kaur Bakshi, Navneet Kaur, Ravneet Kaur, and Gurpreet Kaur. 2016. Opinion mining and sen- timent analysis. In 2016 3rd International Confer- ence on Computing for Sustainable Global Develop- ment (INDIACom), pages 452-455. IEEE.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Multimodal sentiment analysis with wordlevel fusion and reinforcement learning", |
| "authors": [ |
| { |
| "first": "Minghai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [ |
| "Pu" |
| ], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tadas", |
| "middle": [], |
| "last": "Baltru\u0161aitis", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Zadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 19th ACM International Conference on Multimodal Interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "163--171", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Bal- tru\u0161aitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with word- level fusion and reinforcement learning. In Proceed- ings of the 19th ACM International Conference on Multimodal Interaction, pages 163-171.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Xgboost: A scalable tree boosting system", |
| "authors": [ |
| { |
| "first": "Tianqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "785--794", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowl- edge discovery and data mining, pages 785-794.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Multiclass sentiment analysis with clustering and score representation", |
| "authors": [ |
| { |
| "first": "Mohsen", |
| "middle": [], |
| "last": "Farhadloo", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Rolland", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "2013 IEEE 13th international conference on data mining workshops", |
| "volume": "", |
| "issue": "", |
| "pages": "904--912", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohsen Farhadloo and Erik Rolland. 2013. Multi- class sentiment analysis with clustering and score representation. In 2013 IEEE 13th international conference on data mining workshops, pages 904- 912. IEEE.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "An introduction to statistical learning", |
| "authors": [ |
| { |
| "first": "Gareth", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniela", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Hastie", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Tibshirani", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "112", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. An introduction to statis- tical learning, volume 112. Springer.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Twitter sentiment analysis using multi-class svm", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lavanya", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Deisy", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 International Conference on Intelligent Computing and Control (I2C2)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K Lavanya and C Deisy. 2017. Twitter sentiment anal- ysis using multi-class svm. In 2017 International Conference on Intelligent Computing and Control (I2C2), pages 1-6. IEEE.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Sentiment analysis and opinion mining. Synthesis lectures on human language technologies", |
| "authors": [ |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "5", |
| "issue": "", |
| "pages": "1--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bing Liu. 2012. Sentiment analysis and opinion min- ing. Synthesis lectures on human language technolo- gies, 5(1):1-167.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The language of evaluation", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "R" |
| ], |
| "last": "White", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James R Martin and Peter R White. 2003. The lan- guage of evaluation, volume 2. Springer.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The language of evaluation: Appraisal in english. Hampshire: Palgrave Macmillan", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "White", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "JR Martin and PR White. 2005. R 2005: The language of evaluation: Appraisal in english. Hampshire: Pal- grave Macmillan.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Semeval-2018 Task 1: Affect in tweets", |
| "authors": [ |
| { |
| "first": "Saif", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Felipe", |
| "middle": [], |
| "last": "Bravo-Marquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Salameh", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalua- tion (SemEval-2018), New Orleans, LA, USA.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Overview of the 2020 alta shared task: Assess human behaviour", |
| "authors": [ |
| { |
| "first": "Diego", |
| "middle": [], |
| "last": "Moll\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diego Moll\u00e1. 2020. Overview of the 2020 alta shared task: Assess human behaviour. In Proceedings of the 18th Annual Workshop of the Australasian Lan- guage Technology Association.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The appraisal framework and discourse analysis", |
| "authors": [ |
| { |
| "first": "Teresa", |
| "middle": [], |
| "last": "Ote\u00edza", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "457--472", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Teresa Ote\u00edza. 2017. The appraisal framework and dis- course analysis, pages 457-472. Routledge Hand- books.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Sentiment analysis: A combined approach", |
| "authors": [ |
| { |
| "first": "Rudy", |
| "middle": [], |
| "last": "Prabowo", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Thelwall", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal of Informetrics", |
| "volume": "3", |
| "issue": "2", |
| "pages": "143--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rudy Prabowo and Mike Thelwall. 2009. Sentiment analysis: A combined approach. Journal of Infor- metrics, 3(2):143-157.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The decision tree classifier: Design and potential", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [ |
| "H" |
| ], |
| "last": "Swain", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Hauska", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "IEEE Transactions on Geoscience Electronics", |
| "volume": "15", |
| "issue": "3", |
| "pages": "142--147", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip H Swain and Hans Hauska. 1977. The decision tree classifier: Design and potential. IEEE Transac- tions on Geoscience Electronics, 15(3):142-147.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
| "authors": [ |
| { |
| "first": "Zhilin", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Russ", |
| "middle": [ |
| "R" |
| ], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "5753--5763", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5753-5763.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "type_str": "table", |
| "content": "<table><tr><td>S. No. Category</td><td>Class Name Meaning</td><td>Example Sentence</td></tr><tr><td>1</td><td/><td/></tr><tr><td>Social Esteem</td><td/><td/></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "Class Meaning, Category and Examples" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">S. No Prediction Text</td><td colspan=\"2\">Actual Behaviour Predicted Behaviour</td></tr><tr><td>1</td><td>Correct</td><td>Actually be arsed with my sis-</td><td>Normality</td><td>Normality</td></tr><tr><td/><td/><td>ter sometimes, she controls the</td><td/><td/></tr><tr><td/><td/><td>TV 90% of the time and when I</td><td/><td/></tr><tr><td/><td/><td>watch one thing she gets in a huff</td><td/><td/></tr><tr><td>2</td><td>Correct</td><td>You ever just be really irritated</td><td>Capacity</td><td>Capacity</td></tr><tr><td/><td/><td>with someone u love it's like god</td><td/><td/></tr><tr><td/><td/><td>damn ur makin me angry but I</td><td/><td/></tr><tr><td/><td/><td>love u so I forgive u but I'm an-</td><td/><td/></tr><tr><td/><td/><td>gry</td><td/><td/></tr><tr><td>3</td><td>Correct</td><td>@SaraLuvvXXX : Whaaaat?!?</td><td>Propriety</td><td>Propriety</td></tr><tr><td/><td/><td>Oh hell no. I was jealous because</td><td/><td/></tr><tr><td/><td/><td>you got paid to f**k, but this is</td><td/><td/></tr><tr><td/><td/><td>a whole new level. #anger #love</td><td/><td/></tr><tr><td/><td/><td>#conflicted& Propriety</td><td/><td/></tr><tr><td>4</td><td>Incorrect</td><td>it makes me so f**king irate je-</td><td>Propriety</td><td>Normality</td></tr><tr><td/><td/><td>sus. nobody is calling ppl who</td><td/><td/></tr><tr><td/><td/><td>like hajime abusive stop with the</td><td/><td/></tr><tr><td/><td/><td>strawmen lmao</td><td/><td/></tr><tr><td>5</td><td>Incorrect</td><td>Goddamn headache.</td><td>Propriety</td><td>Capacity, Tenacity</td></tr><tr><td>6</td><td>Incorrect</td><td>I wanna kill you and destroy you.</td><td colspan=\"2\">Capacity, Tenacity Propriety</td></tr><tr><td/><td/><td>I want you died and I want Flint</td><td/><td/></tr><tr><td/><td/><td>back. #emo #scene #f**k #die</td><td/><td/></tr><tr><td/><td/><td>#hatered</td><td/><td/></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "Sample Predictions from the Model" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "content": "<table><tr><td>S. No. Approach</td></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "Techniques Employed with corresponding Public and Private Mean F-Score" |
| } |
| } |
| } |
| } |