{
"paper_id": "S18-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:44:05.065665Z"
},
"title": "Tw-StAR at SemEval-2018 Task 1: Preprocessing Impact on Multi-label Emotion Classification",
"authors": [
{
"first": "Hala",
"middle": [],
"last": "Mulki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Selcuk University",
"location": {
"country": "Turkey"
}
},
"email": "halamulki@selcuk.edu.tr"
},
{
"first": "Chedi",
"middle": [],
"last": "Bechikh",
"suffix": "",
"affiliation": {
"laboratory": "LISI laboratory",
"institution": "Carthage University",
"location": {
"country": "Tunisia"
}
},
"email": "chedi.bechikh@gmail.com"
},
{
"first": "Hatem",
"middle": [],
"last": "Haddad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Libre de Bruxelles",
"location": {
"country": "Belgium"
}
},
"email": "hatem.haddad@ulb.ac.be"
},
{
"first": "Ismail",
"middle": [],
"last": "Babaoglu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Selcuk University",
"location": {
"country": "Turkey"
}
},
"email": "ibabaoglu@selcuk.edu.tr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe our contribution to the SemEval-2018 contest. We tackled Task 1, \"Affect in Tweets\", subtask E-c, \"Detecting Emotions (multi-label classification)\". A multi-label classification system, Tw-StAR, was developed to recognize the emotions embedded in Arabic, English and Spanish tweets. To handle the multi-label classification problem via traditional classifiers, we employed the binary relevance transformation strategy, while a TF-IDF scheme was used to generate the tweets' features. We investigated using single and combinations of several preprocessing tasks to further improve the performance. The results showed that specific combinations of preprocessing tasks could significantly improve the evaluation measures. This was later confirmed by the official results, as our system ranked 3rd for both the Arabic and Spanish datasets and 14th for the English dataset.",
"pdf_parse": {
"paper_id": "S18-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe our contribution to the SemEval-2018 contest. We tackled Task 1, \"Affect in Tweets\", subtask E-c, \"Detecting Emotions (multi-label classification)\". A multi-label classification system, Tw-StAR, was developed to recognize the emotions embedded in Arabic, English and Spanish tweets. To handle the multi-label classification problem via traditional classifiers, we employed the binary relevance transformation strategy, while a TF-IDF scheme was used to generate the tweets' features. We investigated using single and combinations of several preprocessing tasks to further improve the performance. The results showed that specific combinations of preprocessing tasks could significantly improve the evaluation measures. This was later confirmed by the official results, as our system ranked 3rd for both the Arabic and Spanish datasets and 14th for the English dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media platforms and micro-blogging systems such as Twitter have recently witnessed a high rate of accessibility (Duggan et al., 2015). Tweets usually combine multiple emotions expressed by the appraisal or criticism of a specific issue. Sentiment analysis represents a coarse-grained opinion classification as it detects either the subjectivity (objective/subjective) or the polarity orientation (positive, negative or neutral) (Piryani et al., 2017).",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "(Duggan et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 435,
"end": 457,
"text": "(Piryani et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For opinionated texts which are usually rich in several emotions, a fine-grained analysis is needed. Through such analysis, specific emotions can be recognized within a tweet which is crucial for many applications. For instance, recognizing anger emotions in the tweets representing the customers' opinions about a specific service in a hotel would definitely help to take the proper response to keep the customers satisfied (Li et al., 2016).",
"cite_spans": [
{
"start": 425,
"end": 442,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing MLC systems are conducted either by problem transformation approaches or algorithm adaptation ones, each of which combines several methods and has different merits. While problem transformation methods are simpler and easier to implement, algorithm adaptation methods have a more accurate performance but with a high computational cost (Zhang and Zhou, 2014). Therefore, developing a multi-label classifier that combines the simplicity of problem transformation methods with accurate performance remains an interesting issue to investigate.",
"cite_spans": [
{
"start": 345,
"end": 367,
"text": "(Zhang and Zhou, 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since preprocessing tasks have been found to positively impact sentiment analysis of different languages (Haddi et al., 2013; Y\u0131ld\u0131r\u0131m et al., 2015; El-Beltagy et al., 2017), we hypothesize that applying single or combinations of various preprocessing techniques to tweets, before feeding them to the multi-label emotion classifier, can improve the classification performance without the need for complex methods that consider the dependencies between labels.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Haddi et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 126,
"end": 148,
"text": "Y\u0131ld\u0131r\u0131m et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 149,
"end": 173,
"text": "El-Beltagy et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we describe the participation of our team \"Tw-StAR\" (Twitter-Sentiment analysis team for ARabic) in Task 1, subtask E-c, on Arabic, English and Spanish tweets (Mohammad et al., 2018). This task requires classifying the emotions embedded in tweets into one or more of 11 emotion labels.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "(Mohammad et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To accomplish this task, we subjected tweets to single or combinations of the following preprocessing techniques: stopwords removal, stemming, lemmatization, and common emoji recognition and tagging. The preprocessed tweets were then fed into a multi-label classifier built via one of the problem transformation approaches, Binary Relevance (BR), and trained with TF-IDF features using the Support Vector Machines (SVM) algorithm. Our experimental study indicated the positive impact of stopwords removal, emoji tagging and lemmatization on the classification performance. This was later confirmed by the contest's official results, as Tw-StAR performed well in multi-label emotion classification for the three tackled languages, ranking third for Arabic and Spanish and 14th for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike single-label classification (binary or multi-class), which classifies an instance into one of two or more labels, each instance in MLC can be associated with a set of labels at the same time (Zhang and Zhou, 2014). MLC problems have been targeted either by algorithm adaptation or problem transformation methods.",
"cite_spans": [
{
"start": 198,
"end": 220,
"text": "(Zhang and Zhou, 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Label Classification Approaches",
"sec_num": "2"
},
{
"text": "These methods adapt traditional classification algorithms used in binary and multi-class classification to perform MLC such that multi-label outputs are obtained. Using these methods, several machine learning (ML) algorithms such as k-nearest neighbors (KNN), decision trees (DT) and neural networks were extended to address MLC (Tsoumakas et al., 2009).",
"cite_spans": [
{
"start": 329,
"end": 353,
"text": "(Tsoumakas et al., 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Adaptation Methods",
"sec_num": "2.1"
},
{
"text": "Rather than modifying the classification algorithm, these methods alter the MLC problem itself by converting it into one or multiple single-label classification problems that could be handled by traditional single-label classifiers (Tsoumakas et al., 2009). The most popular strategies used to conduct such transformation are:",
"cite_spans": [
{
"start": 232,
"end": 256,
"text": "(Tsoumakas et al., 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Transformation Methods",
"sec_num": "2.2"
},
{
"text": "\u2022 Label Powerset (LP): transforms an MLC problem to a multi-class classification problem where the classes represent all the possible combinations of the given training labels. After transformation, each input instance is associated with a unique single class containing a potential combination of labels. Hence, the LP strategy explicitly models label correlations, which leads to more accurate classification; however, it usually suffers from sparsity and overfitting issues (Alali, 2016).",
"cite_spans": [
{
"start": 477,
"end": 490,
"text": "(Alali, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Transformation Methods",
"sec_num": "2.2"
},
{
"text": "\u2022 Binary Relevance (BR): decomposes the MLC problem into several single-label binary classification sub-problems, each of which corresponds to one label. Thus, for each sub-problem responsible for a specific label, a separate binary classifier is trained on the original dataset with the objective of determining the relevance of its particular label for a given instance. The labels predicted by all binary classifiers for a certain instance are then merged into one vector resulting in the multi-label class of this instance (Cherman et al., 2011). As BR is implemented in parallel and scales linearly, it forms a low-cost solution to MLC problems (Read et al., 2011; Luaces et al., 2012). Several ML algorithms were used with the BR approach such as KNN, DT and SVM. According to (Madjarov et al., 2012), SVM-based methods suit small datasets and perform better than DTs, especially for domains with a large number of features, as in text classification, since they exploit the information from all the features, while DTs use only a (small) subset of features and may miss some crucial information.",
"cite_spans": [
{
"start": 527,
"end": 549,
"text": "(Cherman et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 650,
"end": 669,
"text": "(Read et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 670,
"end": 690,
"text": "Luaces et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 783,
"end": 806,
"text": "(Madjarov et al., 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Transformation Methods",
"sec_num": "2.2"
},
{
"text": "To recognize the emotions embedded in the Arabic, English and Spanish datasets (Mohammad et al., 2018), Tw-StAR was applied to the tweets contained in the provided datasets using the following pipeline:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tw-StAR Framework",
"sec_num": "3"
},
{
"text": "\u2022 Initial Preprocessing: for all datasets, a common initial preprocessing step was performed, removing non-sentimental content such as URLs, usernames, dates, digits, hashtag symbols and punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "\u2022 Stopwords Removal (Stop): Stopwords are function words that appear with high frequency in texts; they usually do not carry significant semantic meaning by themselves. Therefore, it is preferable to ignore them while analyzing textual content. In this task, Arabic was targeted with a list of 1,661 stopwords provided by the NLP group at King Abdulaziz University 1 . For English, we used a list of 1,012 words resulting from combining the list published with the Terrier package 2 and the Snowball list 3 . For Spanish, a list of 731 words from Snowball 4 was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "\u2022 Stemming (Stem): reduces the variants of a word to their shared basic form (stem) or root, thereby decreasing the vocabulary size and increasing recall (Darwish and Magdy, 2014).",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "(Darwish and Magdy, 2014",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "In the current study, we used ISRI stemmer (Taghva et al., 2005) for Arabic, Porter2 (Porter, 1980) for English and Snowball for Spanish 5 . The ISRI stemmer does not use a root dictionary and provides a normalized form for words whose roots are not found. This is done through normalizing the hamza, removing diacritics representing vowels, removing the connector if it precedes a word beginning with , etc. The English stemmer returns the root of a word by removing suffixes related to plurals, tenses, adverbs, etc. Finally, the Snowball stemmer used for Spanish translates the rules of stemming algorithms, expressed in a natural way, into an equivalent program.",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "(Taghva et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 85,
"end": 99,
"text": "(Porter, 1980)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "\u2022 Lemmatization (Lem): removes inflectional endings only and returns the base or dictionary form of a word. Farasa (Abdelali et al., 2016) lemmatizer was employed for Arabic while TreeTagger (Schmid, 1995) was used for both English and Spanish. Farasa uses SVMrank to rank possible ways to segment words into prefixes, stems, and suffixes. On the other hand, TreeTagger 6 forms a language-independent tool for annotating text with part-of-speech and lemma information.",
"cite_spans": [
{
"start": 115,
"end": 138,
"text": "(Abdelali et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 191,
"end": 205,
"text": "(Schmid, 1995)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "\u2022 Common Emoji Recognition (Emo): we compiled a list of nine categories of the most common emoji detected in the tweets through UTF-8 encoding. Each emoji is replaced with a tag that conveys the emoji's emotion. The tags included: AngryEmoj, HappyEmoj, FearEmoj, LoveEmoj, SadEmoj, SurpriseEmoj, DisgustedEmoj, OptimistEmoj and PessimismEmoj. Thus, a tweet such as: \"I hung up on my manager last night \" will be replaced by: \"I hung up on my manager last night SadEmoj\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "The vector space model (VSM) was used to generate the feature vectors. Each tweet was represented using a vector containing all corpus words denoted by their number of occurrences in this tweet, referred to as term frequency (tf). A larger value of a term frequency indicates its prominence in a given tweet; however, if this term appears in too many tweets, it will be less informative, such as stop words (Maas et al., 2011). Therefore, to enhance the classification and reduce the dimensionality, we focused on the most discriminative terms by applying the Term Frequency-Inverse Document Frequency (TF-IDF) weighting scheme. This scheme increases the weight of a term proportionally to the number of times it appears in a document, offset by the term's frequency across the corpus, i.e., by how many documents it appears in (Taha and Tiun, 2016).",
"cite_spans": [
{
"start": 407,
"end": 426,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.2"
},
{
"text": "Having the data transformed using the BR method and the TF-IDF features generated, tweets were fed into a multi-label SVM classifier with a linear kernel. This classifier adopts the one-vs-all strategy such that each label has its own binary classifier. Consequently, a number of binary SVM classifiers equal to the number of emotion labels were trained in parallel to recognize the emotions embedded in a tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotions Classification",
"sec_num": "3.3"
},
{
"text": "The proposed model Tw-StAR was applied to Arabic, English and Spanish multi-labeled tweet datasets; their statistics are listed in Table 1. Using the one-vs-all SVM classifier from Scikit-learn 7 , Tw-StAR was trained to recognize the following emotions: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise and trust, in addition to a \"noEmotion\" label that denotes tweets having none of the previous emotions. Within the presented framework, the preprocessing tasks listed in Section 3 were examined separately and in combination. This enabled defining the preprocessing technique or combination for which the MLC performance of each language is best improved.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Tables 2, 3 and 4 list the results obtained for each language when applying several single/combinations of preprocessing tasks where accuracy, macro average F-measure and micro average F-measure are referred to as (Acc.), (Mac-F) and (Mic-F) respectively. Table 2 clearly suggests that for the Arabic tweets, stemming using the ISRI stemmer improved the accuracy by 5.1 percentage points compared to that scored when stopwords removal was applied. Moreover, combining stemming with stopwords removal could further improve the micro F-measure, as it increased from 55.9% to 56.4%. This is due to the fact that ISRI can handle a wider range of Arabic vocabulary, as it returns a normalized form of words having no stem rather than retaining them unchanged (Kreaa et al., 2014).",
"cite_spans": [
{
"start": 754,
"end": 774,
"text": "(Kreaa et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Unlike the Arabic dataset, Table 3 and Table 4 show that stemming had a different behavior when it was applied to both English and Spanish tweets. Compared to the accuracy achieved by stopwords removal, stemming has slightly increased the accuracy by 0.3% and 0.8% in English and Spanish datasets respectively. This could be related to the insufficiency of the stemming algorithms employed by both Porter2 and Snowball stemmers to handle informal English and Spanish tweets. Lemmatization by TreeTagger, however, was a better choice to handle English and Spanish terms as it forms a language-independent lemmatizer with a POS tagger implicitly included. Thus, combining emoji tagging with lemmatization and stopwords removal could achieve the best performances with a micro average F-measure of 60.6% and 52.3% for English and Spanish respectively. Since the provided tweets were rich in emoji, emoji tagging could effectively contribute to improving the performance in all datasets especially when it was combined with the other best-performed tasks such as stem+stop in Arabic and lem+stop in both English and Spanish. This led to the best performances as the achieved micro F-measure was 58%, 60.2% and 52% in Arabic, English and Spanish datasets respectively. Hence, these preprocessing combinations were adopted for the official submission. Table 5 lists the official results of Tw-StAR against the systems ranked first for each language, where (L.), (A.), (E.), (S.), (R.), (Mic) and (Mac) refer to language, Arabic, English, Spanish, rank, micro and macro F-measure, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 39,
"end": 46,
"text": "Table 4",
"ref_id": null
},
{
"start": 1346,
"end": 1353,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Here, we emphasized the key role of preprocessing in emotion MLC. Stemming, lemmatization and emoji tagging were found to be the most effective tasks for emotion MLC. In future work, the obtained performances could be further improved by including negation detection to infer negative emotions. Moreover, other ML methods could be examined with BR, along with deep neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "https://github.com/abahanshal/arabic-stop-words-list1 2 https://bitbucket.org/kganes2/text-mining-resources/ 3 http://snowball.tartarus.org/algorithms/english/stop.txt 4 http://snowball.tartarus.org/algorithms/spanish/stop.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://snowball.tartarus.org/texts/introduction.html 6 http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://scikit-learn.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Farasa: A fast and furious segmenter for arabic",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "11--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for arabic. In Proceedings of the Demon- strations Session, NAACL HLT 2016, The 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, San Diego California, USA, June 12-17, 2016, pages 11-16.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A novel stacking method for multi-label classification",
"authors": [
{
"first": "Abdulaziz",
"middle": [],
"last": "Alali",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdulaziz Alali. 2016. A novel stacking method for multi-label classification. Ph.D. thesis, University of Miami.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multi-label problem transformation methods: a case study",
"authors": [
{
"first": "Maria",
"middle": [
"Carolina"
],
"last": "Everton Alvares Cherman",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Monard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Metz",
"suffix": ""
}
],
"year": 2011,
"venue": "CLEI Electronic Journal",
"volume": "14",
"issue": "1",
"pages": "4--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Everton Alvares Cherman, Maria Carolina Monard, and Jean Metz. 2011. Multi-label problem trans- formation methods: a case study. CLEI Electronic Journal, 14(1):4-4.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Arabic information retrieval",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2014,
"venue": "Foundations and Trends in Information Retrieval",
"volume": "7",
"issue": "4",
"pages": "239--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish and Walid Magdy. 2014. Arabic in- formation retrieval. Foundations and Trends in In- formation Retrieval, 7(4):239-342.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Social media update",
"authors": [
{
"first": "Maeve",
"middle": [],
"last": "Duggan",
"suffix": ""
},
{
"first": "Nicole",
"middle": [
"B"
],
"last": "Ellison",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Lampe",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Lenhart",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Madden",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maeve Duggan, Nicole B Ellison, Cliff Lampe, Amanda Lenhart, and Mary Madden. 2015. Social media update 2014. Pew research center, 19.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Niletmrg at semeval-2017 task 4: Arabic sentiment analysis",
"authors": [
{
"first": "Samhaa",
"middle": [
"R"
],
"last": "El-Beltagy",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "El Kalamawy",
"suffix": ""
},
{
"first": "Abu Bakr",
"middle": [],
"last": "Soliman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "790--795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samhaa R. El-Beltagy, Mona El kalamawy, and Abu Bakr Soliman. 2017. Niletmrg at semeval-2017 task 4: Arabic sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), pages 790-795. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The role of text pre-processing in sentiment analysis",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Haddi",
"suffix": ""
},
{
"first": "Xiaohui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2013,
"venue": "Procedia Computer Science",
"volume": "17",
"issue": "",
"pages": "26--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Haddi, Xiaohui Liu, and Yong Shi. 2013. The role of text pre-processing in sentiment analysis. Procedia Computer Science, 17:26-32.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Arabic words stemming approach using arabic wordnet",
"authors": [
{
"first": "Abdel",
"middle": [
"Hamid"
],
"last": "Kreaa",
"suffix": ""
},
{
"first": "Ahmad",
"middle": [
"S"
],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Kassem",
"middle": [],
"last": "Kabalan",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Data Mining & Knowledge Management Process",
"volume": "4",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdel Hamid Kreaa, Ahmad S Ahmad, and Kassem Kabalan. 2014. Arabic words stemming approach using arabic wordnet. International Journal of Data Mining & Knowledge Management Process, 4(6):1.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-label maximum entropy model for social emotion classification over short text",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yanghui",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Fengmei",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Huijun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiyun",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Neurocomputing",
"volume": "210",
"issue": "",
"pages": "247--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Li, Yanghui Rao, Fengmei Jin, Huijun Chen, and Xiyun Xiang. 2016. Multi-label maximum entropy model for social emotion classification over short text. Neurocomputing, 210:247-256.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Binary relevance efficacy for multilabel classification",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Luaces",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "D\u00edez",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Barranquero",
"suffix": ""
},
{
"first": "Juan Jos\u00e9 Del",
"middle": [],
"last": "Coz",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Bahamonde",
"suffix": ""
}
],
"year": 2012,
"venue": "Progress in Artificial Intelligence",
"volume": "1",
"issue": "4",
"pages": "303--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar Luaces, Jorge D\u00edez, Jos\u00e9 Barranquero, Juan Jos\u00e9 del Coz, and Antonio Bahamonde. 2012. Bi- nary relevance efficacy for multilabel classification. Progress in Artificial Intelligence, 1(4):303-313.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies",
"volume": "1",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the as- sociation for computational linguistics: Human lan- guage technologies-volume 1, pages 142-150. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An extensive experimental comparison of methods for multi-label learning",
"authors": [
{
"first": "Gjorgji",
"middle": [],
"last": "Madjarov",
"suffix": ""
},
{
"first": "Dragi",
"middle": [],
"last": "Kocev",
"suffix": ""
},
{
"first": "Dejan",
"middle": [],
"last": "Gjorgjevikj",
"suffix": ""
},
{
"first": "Sa\u0161o",
"middle": [],
"last": "D\u017eeroski",
"suffix": ""
}
],
"year": 2012,
"venue": "Pattern recognition",
"volume": "45",
"issue": "9",
"pages": "3084--3104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gjorgji Madjarov, Dragi Kocev, Dejan Gjorgjevikj, and Sa\u0161o D\u017eeroski. 2012. An extensive experimen- tal comparison of methods for multi-label learning. Pattern recognition, 45(9):3084-3104.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Analytical mapping of opinion mining and sentiment analysis research during 2000-2015",
"authors": [
{
"first": "Rajesh",
"middle": [],
"last": "Piryani",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Madhavi",
"suffix": ""
},
{
"first": "Vivek Kumar",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2017,
"venue": "formation Processing & Management",
"volume": "53",
"issue": "",
"pages": "122--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajesh Piryani, D Madhavi, and Vivek Kumar Singh. 2017. Analytical mapping of opinion mining and sentiment analysis research during 2000-2015. In- formation Processing & Management, 53(1):122- 150.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "Martin",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "14",
"issue": "",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin F. Porter. 1980. An algorithm for suffix strip- ping. Program, 14(3):130-137.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Classifier chains for multi-label classification",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine learning",
"volume": "85",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. 2011. Classifier chains for multi-label classification. Machine learning, 85(3):333.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improvements in part-ofspeech tagging with an application to german",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1995,
"venue": "proceedings of the acl sigdat-workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1995. Improvements in part-of- speech tagging with an application to german. In In proceedings of the acl sigdat-workshop. Citeseer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Arabic stemming without a root dictionary",
"authors": [
{
"first": "Kazem",
"middle": [],
"last": "Taghva",
"suffix": ""
},
{
"first": "Rania",
"middle": [],
"last": "Elkhoury",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Coombs",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC'05",
"volume": "01",
"issue": "",
"pages": "152--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazem Taghva, Rania Elkhoury, and Jeffrey Coombs. 2005. Arabic stemming without a root dictionary. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC'05) -Volume I -Volume 01, ITCC '05, pages 152-157, Washington, DC, USA. IEEE Computer Society.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Binary relevance (br) method classifier of multi-label classification for arabic text",
"authors": [
{
"first": "Adil Yaseen",
"middle": [],
"last": "Taha",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [],
"last": "Tiun",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Theoretical and Applied Information Technology",
"volume": "84",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adil Yaseen Taha and Sabrina Tiun. 2016. Binary rel- evance (br) method classifier of multi-label classi- fication for arabic text. Journal of Theoretical and Applied Information Technology, 84(3):414.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mining multi-label data",
"authors": [],
"year": 2009,
"venue": "Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas",
"volume": "",
"issue": "",
"pages": "667--685",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2009. Mining multi-label data. In Data mining and knowledge discovery handbook, pages 667-685. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The impact of nlp on turkish sentiment analysis. T\u00fcrkiye Bili\u015fim Vakf\u0131 Bilgisayar Bilimleri ve M\u00fchendisligi Dergisi",
"authors": [
{
"first": "Ezgi",
"middle": [],
"last": "Y\u0131ld\u0131r\u0131m",
"suffix": ""
},
{
"first": "Fatih",
"middle": [],
"last": "Samet \u00c7 Etin",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryigit",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Temel",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "7",
"issue": "",
"pages": "43--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezgi Y\u0131ld\u0131r\u0131m, Fatih Samet \u00c7 etin, G\u00fcl\u015fen Eryigit, and Tanel Temel. 2015. The impact of nlp on turkish sentiment analysis. T\u00fcrkiye Bili\u015fim Vakf\u0131 Bilgisayar Bilimleri ve M\u00fchendisligi Dergisi, 7(1):43-51.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A review on multi-label learning algorithms",
"authors": [
{
"first": "Min-Ling",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi-Hua",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE transactions on knowledge and data engineering",
"volume": "26",
"issue": "8",
"pages": "1819--1837",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min-Ling Zhang and Zhi-Hua Zhou. 2014. A re- view on multi-label learning algorithms. IEEE transactions on knowledge and data engineering, 26(8):1819-1837.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Preprocessing</td><td colspan=\"3\">Acc. Mic-F Mac-F</td></tr><tr><td>Stop</td><td>0.38</td><td colspan=\"2\">0.509 0.367</td></tr><tr><td>Stem</td><td colspan=\"3\">0.431 0.559 0.424</td></tr><tr><td>Emo</td><td colspan=\"3\">0.414 0.543 0.39</td></tr><tr><td>Stem+Stop</td><td colspan=\"3\">0.434 0.564 0.435</td></tr><tr><td>Emo+Lem+Stop</td><td colspan=\"3\">0.434 0.561 0.415</td></tr><tr><td colspan=\"3\">Emo+ Stem+Stop 0.449 0.58</td><td>0.444</td></tr></table>",
"text": "Statistics of the used datasets.",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>Preprocessing</td><td colspan=\"2\">Acc. Mic-F Mac-F</td></tr><tr><td>Stop</td><td colspan=\"2\">0.446 0.577 0.429</td></tr><tr><td>Stem</td><td>0.449 0.58</td><td>0.443</td></tr><tr><td>Emo</td><td colspan=\"2\">0.459 0.588 0.434</td></tr><tr><td>Stem+Stop</td><td colspan=\"2\">0.462 0.593 0.458</td></tr><tr><td colspan=\"3\">Emo+Lem+Stop Emo+ Stem+Stop 0.475 0.602 0.466 0.48 0.606 0.461</td></tr></table>",
"text": "Preprocessing impact on Arabic MLC.",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Preprocessing impact on English MLC.",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"text": "",
"num": null,
"html": null
}
}
}
}