[Figure 3. Accuracy, precision, recall, and F1-score of the three models (SVM, RF, KNN) on the Cricket and Restaurant datasets.]

These results can be improved if we process and train the datasets in a more sophisticated way. In this work, we have taken all of the vocabulary as features for the evaluation after removing punctuation, stop words, and digits. Some state-of-the-art techniques for information gain can be applied to the dataset before classification, after the preprocessing steps, to attain better results.

4. Conclusions and Future Work

Two datasets are provided for the ABSA of Bangla text. These datasets have been designed to perform two tasks covering aspect category extraction and the identification of polarity for that aspect category. We also report baseline results to evaluate the task of aspect category extraction.

As future plans, we aim to enhance our work by including further domains such as cars, mobiles, and laptops. We are working on more advanced methods for the ABSA of Bangla text using our datasets to achieve better performance.

Author Contributions: All authors contributed equally to this work, and have read and approved the final manuscript.

Conflicts of Interest: The authors declare no conflict of interest.
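To make the feature-selection suggestion above concrete, here is a minimal, hypothetical sketch (assuming scikit-learn; mutual information stands in as the information-gain criterion, and the toy matrix stands in for a tf-idf document-term matrix):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Toy stand-in for a tf-idf document-term matrix (rows: reviews, columns: terms).
X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)

# Keep only the k terms with the highest estimated mutual information with the label.
selector = SelectKBest(mutual_info_classif, k=50)
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)  # (200, 50)
```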
Establishing a Formal Benchmarking Process for Sentiment Analysis for the Bangla Language

A K M Shahariar Azad Rabby 1, Aminul Islam 1 and Fuad Rahman 2
1 Apurba Technologies, Dhaka, Bangladesh
2 Apurba Technologies, Sunnyvale, CA, USA
rabby@apurbatech.com, aminul@apurbatech.com, fuad@apurbatech.com

Abstract. Tracking sentiments is a critical task in many natural language processing applications. A lot of work has been done on many leading languages in the world, such as English. However, in many languages, such as Bangla, sentiment analysis is still in early development. Most of the research on this topic suffers from three key issues: (a) the lack of standardized, publicly available datasets; (b) the subjectivity of the reported results, which generally manifests as a lack of agreement on core sentiment categorizations; and, finally, (c) the lack of an established framework where these efforts can be compared to a formal benchmark. Thus, this seems to be an opportune moment to establish a benchmark for sentiment analysis in Bangla. With that goal in mind, this paper presents benchmark results of ten different sentiment analysis solutions on three publicly available Bangla sentiment analysis corpora. As part of the benchmarking process, we have optimized these algorithms for the task at hand. Finally, we establish and present fourteen different evaluation metrics for benchmarking these algorithms. We hope that this paper will jumpstart an open and transparent benchmarking process, one that we plan to update every two years, to help validate newer and novel algorithms that will be reported in this area in the future.

Keywords: Sentiment Analysis, NLP, Bangla Sentiment Corpus, Annotation, Benchmarking.

1 Introduction

The explosion of information technology, especially the use of social media, has resulted in a vast amount of content that is thrown at human beings at any given moment. A lot of this content is tied to social, political, and economic interests, whose publishers all have a vested interest in tracking whether the audience likes the content or not. For instance, data-driven trend analysis is an essential part of modern politics and advertising. Less dramatic, but equally critical, applications of sentiment analysis are customer reviews on online shopping sites or opinion mining of newspapers to gauge public sentiment on national security issues, just to name a few.

Bangla is spoken as the first language by almost 200 million people worldwide, 160 million of whom hold Bangladeshi citizenship. But Natural Language Processing (NLP) development for the Bangla language is in its very early stages, and there is not yet enough labeled data to work with for the language. Because of this scarcity of labeled data and standardized corpora, little work has been reported in this space. Recently, a sentiment analysis corpus of about 10,000 sentences was made public by Apurba Technologies [1]. We searched for and located two additional, albeit smaller, open-sourced datasets in this space [2]. We built ten different sentiment analysis algorithms using Machine Learning (ML), statistical modeling, and other methods. This paper benchmarks these ten algorithms on the above-mentioned three annotated corpora.

The paper is arranged as follows. We begin by reviewing the existing state of the art of sentiment analysis in Bangla, which, as stated already, is not very rich; the principal issue that becomes crystal clear is that whatever efforts have been reported on this topic are impossible to compare, since they use different datasets and the reported datasets are almost never available to other researchers. As a natural segue from this topic, we then present how we combined all the publicly available sources of sentiment corpora and built a large dataset. We then move to designing fourteen different metrics that form the benchmarking framework. We then describe ten different sentiment analysis algorithms that have been reported in the literature. Although this list is not exhaustive in any sense, it does cover the majority of the work ever reported in this space. We not only implemented these algorithms, we also fine-tuned their parameters to optimize each of these solutions. Finally, these ten algorithms were benchmarked using the fourteen metrics identified earlier. The paper ends with a discussion of the reported work.
2 Brief Background

There are three classification levels in sentiment analysis: document level, sentence level, and aspect level. At the document level, overall sentiment is assessed based on the complete text. Sentence-level analysis aims to classify the sentiment expressed in each sentence. The first step is to identify whether the sentence is subjective or objective. If the sentence is subjective, sentence-level analysis will determine whether it expresses positive or negative opinions [3]. In aspect-based sentiment analysis, sentiments are assessed on aspects or points of view of a topic, especially in multi-clausal sentences. For the rest of this paper, we will focus exclusively on sentence-level sentiment analysis.

Machine learning techniques for sentiment analysis are getting better, especially vector representation models, some of which can extract semantics that help to understand the intent of a message [4]. Many machine learning and deep learning techniques have been reported for identifying and classifying sentiment polarity in a document or sentence. Existing research demonstrates that Long Short-Term Memory networks (LSTMs) are capable of learning the context and inherent meaning of a word and provide more accurate results for sentiment [5]. Classification algorithms such as Random Forest, the Decision Tree Classifier, and the k-nearest neighbors (KNN) algorithm are suitable for classification based on feature sets. Naive Bayes works based on Bayes' theorem of a probability distribution. Convolutional Neural Networks (CNNs), a commonly used tool in deep learning, work well for sentiment analysis, as their standard architecture can map sentences of variable length into fixed-size vectors [6]. (Recently, many pre-trained language models such as BERT [30], ELMo [31], and XLNet have been reported to achieve promising results on several NLP tasks, including sentiment analysis. However, these models mainly target the English language, not Bangla.)

Table 1 shows the state of the art of Bangla sentiment analysis research. One observation that is painfully plain in this table is that all of the authors of these papers spent valuable time building and annotating their own datasets. What is even more alarming is that none of these datasets were then made publicly available. This has made it impossible to compare the validity and relative strengths or weaknesses of any of these solutions, making the task of establishing a benchmark framework impossible.
Table 1. Bangla sentiment analysis: previous work.

| Paper | Year | Method | Dataset (size) | Accuracy | Publicly available |
|---|---|---|---|---|---|
| Sentiment Analysis of Bengali Comments with Word2Vec and Sentiment Information of Words [7] | 2017 | word2vec and sentiment extraction of words | Self-collected comments (15,000 comments) | 75.5% | No |
| Performing Sentiment Analysis in Bangla Microblog Posts [8] | 2014 | Support Vector Machine (SVM) and Maximum Entropy (MaxEnt) | Bangla tweets (1,300 tweets) | SVM 88%, MaxEnt 88% | No |
| Sentiment Analysis for Bengali Newspaper Headlines [9] | 2017 | SVM, Logistic Regression, Decision Tree, etc. | Self-collected news headlines (15,325 headlines) | LR 75.91%, SVM 79.56%, Tree 76.64% | No |
| Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) Using Deep Recurrent Models [10] | 2016 | LSTM with two loss functions: binary and categorical cross-entropy | Self-collected Bangla text (10,000 samples) | 78% | No |
| Exploring Word Embedding for Bangla Sentiment Analysis [11] | 2018 | Word2vec Skip-Gram and Continuous Bag of Words embeddings with an additional Word-to-Index model | Bangla web-crawl sentiment dataset (1,899,094 sentences; 23,506,262 words; 394,297 unique words) | 83.79% | No |
| Sentiment Analysis of Bangla Microblogs Using Adaptive Neuro Fuzzy System [12] | 2017 | Fuzzy rules representing simple semantic rules that greatly influence the actual polarity of sentences | Bangla tweets collected via the Twitter API | MSE 0.0529 | No |
| An Automated System of Sentiment Analysis from Bangla Text Using Supervised Learning Techniques [13] | 2019 | Naive Bayes classification and a topical approach to extract emotion | Self-collected (7,500 Bangla sentences) | Above 90% | No |
| Extracting Severe Negative Sentence Pattern from Bangla Data via Long Short-Term Memory Neural Network [14] | 2019 | LSTM networks for analyzing negative sentences in Bangla | Dataset from Hassan et al. (9,337 posts) | 84.4% | No |
| Design an Empirical Framework for Sentiment Analysis from Bangla Text Using Machine Learning [15] | 2019 | Random Forest classifier | Self-collected (1,050 Bangla texts) | 87% | No |
| Sentiment Analysis for Bangla Sentences Using Convolutional Neural Network [16] | 2017 | Convolutional Neural Network | Self-collected (850 Bangla comments from different sources) | 99.87% | No |
| Sentiment Mining from Bangla Data Using Mutual Information [17] | 2016 | Mutual Information (MI) for feature selection and Multinomial Naive Bayes (MNB) for classification | Generated from Amazon's Watches English dataset (68,356 translated reviews) | 88.54% | No |
| Detecting Multilabel Sentiment and Emotions from Bangla YouTube Comments [18] | 2018 | Deep-learning-based models for multi-class classification | Self-collected YouTube comments (15,689 comments) | 65.97% (three labels), 54.24% (five labels) | No |
| Detecting Sentiment from Bangla Text Using Machine Learning Technique and Feature Analysis [19] | 2016 | Tf-idf feature extraction for more accurate results | Various social sites (1,500 short Bangla comments) | 83% | No |
| N-Gram Based Sentiment Mining for Bangla Text Using Support Vector Machine [20] | 2018 | Vectors containing more than one word, using n-grams | Collected from different sources (9,500 comments) | 89.271% | No |
| Sentiment Analysis of Bangla Song Review: A Lexicon Based Backtracking Approach [21] | 2019 | Backtracking algorithm built around a sentiment lexicon | Collected from YouTube (201 comments) | 70% | No |
| Sentiment Extraction from Bangla Text: A Character Level Supervised Recurrent Neural Network Approach [22] | 2018 | Character-level representation of Bangla sentences with an RNN | Collected from Facebook via the Graph API (45,000 samples) | 80% | No |
| Sentiment Analysis on the Facebook Group Using Lexicon-Based Approach [23] | 2016 | Naive Bayes and a dictionary-based approach to lexicon-based sentiment analysis | Collected from a Facebook group (9,000 words) | 73% | No |
| Sentiment Analysis of Bengali Texts on Online Restaurant Reviews Using Multinomial Naïve Bayes [24] | 2019 | Multinomial Naive Bayes | Self-collected (1,000 restaurant reviews) | 80.48% | No |
3 Dataset

In this research, we used three different datasets. The first dataset is our own, previously published [1], representing the largest open-access sentiment analysis dataset for Bangla, with 9,630 samples. The second is the ABSA Sports dataset [2], with 2,979 samples. The third and final dataset [2] is the ABSA Restaurant dataset, with 2,059 samples. All three datasets have three sentiment categorizations: positive, negative, and neutral. For simplicity, we excluded all of the neutral data from our datasets. After eliminating the neutral samples, the Apurba, ABSA Sports, and ABSA Restaurant datasets have 7,293, 2,718, and 1,808 positive and negative samples, respectively. The proposed benchmarking system has four stages: data collection, data pre-processing, training, and evaluation.

3.1 Dataset Collection

The Apurba dataset was collected from a popular online news portal, "Prothom Alo" (প্রথম আলো), tagged manually, and checked twice for validation. The dataset is open-sourced for all types of non-commercial usage, intended for educational and research use. The other two datasets can easily be obtained from GitHub. We also merged these three datasets into a mixed dataset, as sketched below.
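A minimal sketch of the collection and merging step described above; the file names and column names are assumptions, since the paper does not specify the on-disk format:

```python
import pandas as pd

# Hypothetical file names for the three corpora.
frames = [pd.read_csv(f) for f in ("apurba.csv", "absa_sports.csv", "absa_restaurant.csv")]
data = pd.concat(frames, ignore_index=True)

# Drop the neutral samples, keeping positive and negative only (see Section 3).
data = data[data["sentiment"] != "neutral"].copy()
# Encode positive as 0 and negative as 1, as described in Section 3.2.
data["label"] = (data["sentiment"] == "negative").astype(int)
```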
3.2 Data Pre-Processing

Data cannot be used as-is in most machine learning algorithms: it needs to be processed before anything else can be done. In this research, we took the text and its annotated sentiment values. We excluded the neutral samples and represent the positive class with 0 and the negative class with 1. We removed all unnecessary characters, including punctuation, URLs, extra white space, emoticons, symbols, pictographs, transport and map symbols, iOS flags, digits, and 123 other characters. After all these steps, the preprocessed dataset looks as shown in Fig. 1.

[Fig. 1. Processed dataset sample.]

Tokenization is the task of separating a given sentence into a sequence of words, which are then known as tokens. Tokenizers accomplish this task by locating word boundaries: the ending point of one word and the beginning of the next. We tokenize each sentence based on white space. The next step is removing stop words, which are commonly used words (such as "a" or "and") that our algorithm ignores. Fig. 2 shows a typical example of these steps.

[Fig. 2. Pre-processing steps.]

We then prepare a term frequency-inverse document frequency (tf-idf) vectorization, which creates a sparse matrix containing a vector representation of our data. The tf-idf output is used as a weighting factor that measures how important a word is to a document in a collection of documents (a corpus). Then we split our data into two portions: 80% for training and 20% for testing model performance. Fig. 3 shows a flowchart of these pre-processing steps, and a code sketch follows.

[Fig. 3. Flowchart of the pre-processing steps.]
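Continuing the sketch above, the pre-processing pipeline might look as follows (the character filter and the stop-word list are illustrative assumptions; the paper's exact lists are not published):

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

STOPWORDS = {"এবং", "ও", "যে"}  # illustrative subset of a Bangla stop-word list

def preprocess(text):
    text = re.sub(r"http\S+", " ", text)             # drop URLs
    text = re.sub(r"[^\u0980-\u09FF\s]", " ", text)  # keep Bengali-block characters only
    tokens = text.split()                            # whitespace tokenization
    return " ".join(t for t in tokens if t not in STOPWORDS)

texts = [preprocess(t) for t in data["text"]]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)                  # sparse tf-idf matrix

# 80/20 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, data["label"], test_size=0.2, random_state=42)
```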
3.3 Benchmarking Indices

Sensitivity analysis determines how target variables are affected by changes in other variables, known as input variables. This kind of analysis, also referred to as what-if or simulation analysis, is a way to predict the outcome of a decision given a certain range of variables. By creating a given set of variables, an analyst can determine how changes in one variable affect the outcome. We have used a set of universally standardized indices for validating the algorithms, including the Confusion Matrix (CM), True Positive Rate (TPR), True Negative Rate (TNR), False Negative Rate (FNR), False Positive Rate (FPR), Positive Predictive Value (PPV), Negative Predictive Value (NPV), False Discovery Rate (FDR), False Omission Rate (FOR), Accuracy (ACC), F1 Score, R2 Score, Receiver Operating Characteristic (ROC), and Area Under the Curve (AUC) [25][26][27][28][29]. A sketch of how these indices derive from a confusion matrix follows.
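Most of these indices can be computed directly from a 2x2 confusion matrix. A small sketch using scikit-learn's [[TN, FP], [FN, TP]] layout, which also matches the confusion matrices reported in Section 5 (the example call reproduces the Apurba row of Table 3):

```python
import numpy as np

def sensitivity_indices(cm):
    """Derive the Section 3.3 indices from a 2x2 confusion matrix [[TN, FP], [FN, TP]]."""
    tn, fp, fn, tp = cm.ravel()
    return {
        "TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
        "FNR": fn / (fn + tp), "FPR": fp / (fp + tn),
        "PPV": tp / (tp + fp), "NPV": tn / (tn + fn),
        "FDR": fp / (fp + tp), "FOR": fn / (fn + tn),
        "ACC": (tp + tn) / cm.sum(),
        "F1": 2 * tp / (2 * tp + fp + fn),
    }

print(sensitivity_indices(np.array([[342, 264], [195, 658]])))  # Multinomial NB, Apurba
```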
4 Sentiment Analysis Algorithms

We used ten different algorithms: Multinomial Naive Bayes, Bernoulli Naive Bayes, Logistic Regression, Random Forest, Decision Tree Classifier, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Ada-Boost Classifier, Extreme Gradient Boosting (XGBoost), and Long Short-Term Memory (LSTM). LSTM achieves the best performance among them. We used k-fold cross-validation and grid search to find the best parameters for all of our algorithms (a sketch follows the algorithm descriptions below).

4.1 Multinomial Naive Bayes
Multinomial Naive Bayes estimates the conditional probability of a particular word given a class as the relative frequency of term t in samples belonging to class c. It simply assumes a multinomial distribution for all such pairs, which is a reasonable assumption in some cases, especially for word counts in documents.

4.2 Bernoulli Naive Bayes
The Bernoulli Naive Bayes classifier assumes that all features are binary, that is, that they take only two values. It is similar to Multinomial Naive Bayes, but the predictors are Boolean variables: the parameters used to predict the class take only yes/no values, for example, whether a word occurs in the text or not.

4.3 Logistic Regression
Logistic Regression is the primary statistical method for modeling a binary dependent variable; the model estimates the probability of each class. In logistic regression, the dependent variable is binary, coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y = 1) as a function of X.

4.4 Random Forest
A forest consists of many trees; in a random forest, a large number of individual decision trees operate as an ensemble. Every decision tree votes for a particular class, and the class that receives the most votes is selected as the model's prediction.

4.5 Decision Tree Classifier
A decision tree is the simplest form of classification algorithm. Decision trees consist of: (a) nodes that test the value of a particular attribute; (b) edges/branches that correspond to the outcomes of a test and connect to the next node or leaf; and (c) leaf nodes, terminal nodes that predict the outcome (such as class labels or class distributions).

4.6 KNN Classifier
The k-nearest neighbors algorithm is a non-parametric technique used for classification. It is easy to implement, but its major drawback is that it becomes slow as the amount of data increases.

4.7 SVM Classifier
A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. Given labeled training data (supervised learning), the algorithm builds an optimal hyperplane that separates new examples into their constituent classes. In two-dimensional space, this hyperplane is a line dividing the plane into two parts, with each class lying on either side.

4.8 Ada-Boost Classifier
The general idea behind boosting methods is to train predictors sequentially, each trying to correct its predecessor. The basic concept behind Ada-Boost is to set the weights of classifiers and train on the data samples in each iteration so as to ensure accurate predictions, even for unusual observations.

4.9 XGBoost
XGBoost is a decision-tree-based ensemble ML algorithm that uses a gradient boosting framework. Gradient-boosted models can increase accuracy over a traditional statistical or conditional model and apply quite well to both primary types of targets.

4.10 LSTM
Long Short-Term Memory (LSTM) networks are a modified version of recurrent neural networks (RNNs) that enable the storage of past information, solving the RNN vanishing-gradient problem. LSTMs are well suited to classifying, analyzing, and forecasting time series with uncertain time lags.
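A condensed sketch of the tuning loop described above (assuming scikit-learn and the X_train/X_test split from the earlier sketch; the search grids are illustrative, though the best values reported in Section 5, alpha 0.9 for Multinomial NB, alpha 0.8 for Bernoulli NB, and 50 estimators for Ada-Boost, are included; XGBoost's XGBClassifier and the LSTM would be handled separately):

```python
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

searches = {
    "Multinomial NB": (MultinomialNB(), {"alpha": [0.5, 0.8, 0.9, 1.0]}),
    "Bernoulli NB": (BernoulliNB(), {"alpha": [0.5, 0.8, 0.9, 1.0]}),
    "Logistic Regression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
    "Random Forest": (RandomForestClassifier(), {"n_estimators": [100, 300]}),
    "Decision Tree": (DecisionTreeClassifier(), {"max_depth": [None, 20]}),
    "KNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 9]}),
    "SVM": (LinearSVC(), {"C": [0.1, 1, 10]}),
    "Ada-Boost": (AdaBoostClassifier(), {"n_estimators": [50, 100]}),
}

for name, (model, grid) in searches.items():
    search = GridSearchCV(model, grid, cv=5)  # 5-fold cross-validation
    search.fit(X_train, y_train)
    print(name, search.best_params_, round(search.score(X_test, y_test), 4))
```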
5 Performance

5.1 Multinomial Naive Bayes
We found that with the alpha value set to 0.9, Multinomial Naive Bayes reaches a maximum accuracy of 76.65% (on the ABSA Sports dataset). Table 2 shows the performance of Multinomial Naive Bayes, and Table 3 shows the sensitivity analysis for this algorithm. (In Tables 2-19, confusion matrices are laid out as [[TN, FP], [FN, TP]], and the last column of each sensitivity table is the F1 score.)

Table 2. Multinomial Naive Bayes performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[342, 264], [195, 658]] | 68.54% | 73.05% |
| ABSA Sports | [[38, 72], [55, 379]] | 76.65% | 67.93% |
| ABSA Restaurant | [[225, 37], [52, 48]] | 75.41% | 72.64% |
| All Data | [[566, 466], [271, 1061]] | 68.82% | 73.05% |

Table 3. Sensitivity analysis of Multinomial Naive Bayes
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 77.14 | 56.44 | 22.86 | 43.56 | 71.37 | 63.69 | 28.63 | 36.31 | 74.14 |
| ABSA Sports | 87.33 | 34.55 | 12.67 | 65.45 | 84.04 | 40.86 | 15.96 | 59.14 | 85.65 |
| ABSA Restaurant | 48.0 | 85.88 | 52.0 | 14.12 | 56.47 | 81.23 | 43.53 | 18.77 | 51.89 |
| All Data | 79.65 | 54.84 | 20.35 | 45.16 | 69.48 | 67.62 | 30.52 | 32.38 | 74.22 |

5.2 Bernoulli Naive Bayes
For all datasets, we found that an alpha value of 0.8 gave the best performance. Table 4 shows the performance, and Table 5 the sensitivity analysis, of Bernoulli Naive Bayes.

Table 4. Bernoulli Naive Bayes performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[342, 264], [195, 658]] | 69.16% | 73.27% |
| ABSA Sports | [[23, 87], [20, 414]] | 80.33% | 70.50% |
| ABSA Restaurant | [[225, 37], [52, 48]] | 71.82% | 73.64% |
| All Data | [[566, 466], [271, 1061]] | 67.98% | 73.54% |

Table 5. Sensitivity analysis of Bernoulli Naive Bayes
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 78.19 | 56.44 | 21.81 | 43.56 | 71.64 | 64.77 | 28.36 | 35.23 | 74.78 |
| ABSA Sports | 92.86 | 23.64 | 7.14 | 76.36 | 82.75 | 45.61 | 17.25 | 54.39 | 87.51 |
| ABSA Restaurant | 25.0 | 89.69 | 75.0 | 10.31 | 48.08 | 75.81 | 51.92 | 24.19 | 32.89 |
| All Data | 80.56 | 51.74 | 19.44 | 48.26 | 68.3 | 67.34 | 31.7 | 32.66 | 73.92 |

5.3 Logistic Regression
Table 6 shows the performance, and Table 7 the sensitivity analysis, of Logistic Regression.

Table 6. Logistic Regression performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[338, 268], [203, 650]] | 67.72% | 72.51% |
| ABSA Sports | [[23, 87], [20, 414]] | 80.33% | 70.50% |
| ABSA Restaurant | [[237, 25], [66, 34]] | 74.86% | 75.39% |
| All Data | [[566, 466], [276, 1056]] | 68.61% | 74.30% |

Table 7. Sensitivity analysis of Logistic Regression
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 76.2 | 55.78 | 23.8 | 44.22 | 70.81 | 62.48 | 29.19 | 37.52 | 73.4 |
| ABSA Sports | 95.39 | 20.91 | 4.61 | 79.09 | 82.63 | 53.49 | 17.37 | 46.51 | 88.56 |
| ABSA Restaurant | 34.0 | 90.46 | 66.0 | 9.54 | 57.63 | 78.22 | 42.37 | 21.78 | 42.77 |
| All Data | 79.28 | 54.84 | 20.72 | 45.16 | 69.38 | 67.22 | 30.62 | 32.78 | 74.0 |

5.4 Random Forest
Table 8 shows the performance, and Table 9 the sensitivity analysis, of the Random Forest model. (Tables 8 and 10 additionally report F1, precision, and recall.)

Table 8. Random Forest performance
| Dataset | Confusion matrix | ACC | ROC AUC | F1 | Precision | Recall |
|---|---|---|---|---|---|---|
| Apurba | [[340, 266], [309, 544]] | 60.59% | 65.56% | 65.42% | 67.16% | 63.77% |
| ABSA Sports | [[47, 63], [41, 393]] | 80.88% | 73.30% | 88.31% | 86.18% | 90.55% |
| ABSA Restaurant | [[240, 22], [75, 25]] | 73.20% | 70.00% | 34.01% | 53.19% | 25% |
| All Data | [[629, 403], [387, 945]] | 66.58% | 71.36% | 70.52% | 70.10% | 70.94% |

Table 9. Sensitivity analysis of Random Forest
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 64.71 | 59.08 | 35.29 | 40.92 | 69.0 | 54.32 | 31.0 | 45.68 | 66.79 |
| ABSA Sports | 88.71 | 43.64 | 11.29 | 56.36 | 86.13 | 49.48 | 13.87 | 50.52 | 87.4 |
| ABSA Restaurant | 28.0 | 91.98 | 72.0 | 8.02 | 57.14 | 77.0 | 42.86 | 23.0 | 37.58 |
| All Data | 68.77 | 62.02 | 31.23 | 37.98 | 70.03 | 60.61 | 29.97 | 39.39 | 69.39 |

5.5 Decision Tree Classifier
Table 10 shows the performance, and Table 11 the sensitivity analysis, of the Decision Tree Classifier.

Table 10. Decision Tree performance
| Dataset | Confusion matrix | ACC | ROC AUC | F1 | Precision | Recall |
|---|---|---|---|---|---|---|
| Apurba | [[316, 290], [341, 512]] | 56.75% | 57.11% | 61.87% | 63.84% | 60.02% |
| ABSA Sports | [[49, 61], [73, 361]] | 75.37% | 65.88% | 84.34% | 85.55% | 83.18% |
| ABSA Restaurant | [[216, 46], [55, 45]] | 72.10% | 65.13% | 47.12% | 49.45% | 45% |
| All Data | [[601, 431], [492, 840]] | 60.96% | 60.99% | 64.54% | 66.09% | 63.06% |

Table 11. Sensitivity analysis of Decision Tree
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 58.85 | 55.61 | 41.15 | 44.39 | 65.11 | 48.98 | 34.89 | 51.02 | 61.82 |
| ABSA Sports | 83.18 | 47.27 | 16.82 | 52.73 | 86.16 | 41.6 | 13.84 | 58.4 | 84.64 |
| ABSA Restaurant | 41.0 | 82.06 | 59.0 | 17.94 | 46.59 | 78.47 | 53.41 | 21.53 | 43.62 |
| All Data | 63.21 | 60.95 | 36.79 | 39.05 | 67.63 | 56.21 | 32.37 | 43.79 | 65.35 |

5.6 K-NN Classifier
Table 12 shows the performance, and Table 13 the sensitivity analysis, of KNN.

Table 12. K-NN Classifier performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[293, 313], [308, 545]] | 57.44% | 57.42% |
| ABSA Sports | [[25, 85], [29, 405]] | 79.04% | 66.31% |
| ABSA Restaurant | [[236, 26], [77, 23]] | 71.55% | 63.69% |
| All Data | [[500, 532], [368, 964]] | 61.92% | 63.10% |

Table 13. Sensitivity analysis of KNN
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 63.89 | 48.35 | 36.11 | 51.65 | 63.52 | 48.75 | 36.48 | 51.25 | 63.71 |
| ABSA Sports | 93.32 | 22.73 | 6.68 | 77.27 | 82.65 | 46.3 | 17.35 | 53.7 | 87.66 |
| ABSA Restaurant | 23.0 | 90.08 | 77.0 | 9.92 | 46.94 | 75.4 | 53.06 | 24.6 | 30.87 |
| All Data | 72.37 | 48.45 | 27.63 | 51.55 | 64.44 | 57.6 | 35.56 | 42.4 | 68.18 |

5.7 SVM Classifier
Table 14 shows the performance, and Table 15 the sensitivity analysis, of the SVM.

Table 14. SVM performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[293, 313], [308, 545]] | 66.83% | 72.24% |
| ABSA Sports | [[25, 85], [29, 405]] | 70.77% | 69.37% |
| ABSA Restaurant | [[236, 26], [77, 23]] | 69.89% | 72.87% |
| All Data | [[500, 532], [368, 964]] | 67.94% | 73.95% |

Table 15. Sensitivity analysis of SVM
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 69.75 | 62.71 | 30.25 | 37.29 | 72.47 | 59.56 | 27.53 | 40.44 | 71.09 |
| ABSA Sports | 75.81 | 50.91 | 24.19 | 49.09 | 85.9 | 34.78 | 14.1 | 65.22 | 80.54 |
| ABSA Restaurant | 62.0 | 72.9 | 38.0 | 27.1 | 46.62 | 83.41 | 53.38 | 16.59 | 53.22 |
| All Data | 70.35 | 64.83 | 29.65 | 35.17 | 72.08 | 62.88 | 27.92 | 37.12 | 71.2 |

5.8 Ada-Boost Classifier
We obtained the best accuracy for Ada-Boost with the number of estimators set to 50. Table 16 shows the performance, and Table 17 the sensitivity analysis, of the Ada-Boost Classifier.

Table 16. Ada-Boost performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[293, 313], [308, 545]] | 64.22% | 65.92% |
| ABSA Sports | [[25, 85], [29, 405]] | 79.42% | 66.74% |
| ABSA Restaurant | [[236, 26], [77, 23]] | 73.20% | 69.38% |
| All Data | [[500, 532], [368, 964]] | 65.44% | 70.44% |

Table 17. Sensitivity analysis of Ada-Boost
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 82.77 | 38.12 | 17.23 | 61.88 | 65.31 | 61.11 | 34.69 | 38.89 | 73.01 |
| ABSA Sports | 96.77 | 11.82 | 3.23 | 88.18 | 81.24 | 48.15 | 18.76 | 51.85 | 88.33 |
| ABSA Restaurant | 18.0 | 93.89 | 82.0 | 6.11 | 52.94 | 75.0 | 47.06 | 25.0 | 26.87 |
| All Data | 82.88 | 42.93 | 17.12 | 57.07 | 65.21 | 66.02 | 34.79 | 33.98 | 72.99 |

5.9 XGBoost
Table 18 shows the performance, and Table 19 the sensitivity analysis, of XGBoost.

Table 18. XGBoost performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[291, 315], [140, 713]] | 68.81% | 65.80% |
| ABSA Sports | [[15, 95], [16, 418]] | 79.60% | 54.97% |
| ABSA Restaurant | [[244, 18], [67, 33]] | 76.52% | 63.06% |
| All Data | [[490, 542], [185, 1147]] | 69.25% | 66.80% |

Table 19. Sensitivity analysis of XGBoost
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 83.59 | 48.02 | 16.41 | 51.98 | 69.36 | 67.52 | 30.64 | 32.48 | 75.81 |
| ABSA Sports | 96.31 | 13.64 | 3.69 | 86.36 | 81.48 | 48.39 | 18.52 | 51.61 | 88.28 |
| ABSA Restaurant | 33.0 | 93.13 | 67.0 | 6.87 | 64.71 | 78.46 | 35.29 | 21.54 | 43.71 |
| All Data | 86.11 | 47.48 | 13.89 | 52.52 | 67.91 | 72.59 | 32.09 | 27.41 | 75.94 |
5.10 LSTM
In word2vec [32], vector representations capture closer relationships among words. Deep learning models such as LSTMs can remember important information across long stretches of a sequence [33]. Semantic understanding, or "meaning" based on context, is important for capturing the actual sentiment of a sentence [4]. Hence, an LSTM model with word2vec embeddings was implemented to obtain results on the newly published corpora. The implementation details are:

- Word embedding: word2vec
- Window size: 2
- Minimum word count frequency: 4 (words occurring fewer than 4 times are ignored)
- Dimensionality of the word vectors: 100
- Embedding layer dropout: 50%
- LSTM layer dropout: 20%
- Recurrent dropout: 20%
- Dimensionality of the output space: 100
- Activation function: sigmoid
- Optimizer: Adam
- Loss function: binary cross-entropy
- Number of epochs: 10
- Batch size: 100

Table 20 shows the performance, and Table 21 the sensitivity analysis, on the datasets. The model does not work well on the two ABSA datasets for lack of enough data in both classes, so it is biased on those two datasets. Fig. 4 shows the proposed LSTM model, and a code sketch follows Table 21.

Table 20. LSTM performance
| Dataset | Confusion matrix | ACC | ROC AUC |
|---|---|---|---|
| Apurba | [[361, 245], [175, 678]] | 69.52% | 69.53% |
| ABSA Sports | [[0, 110], [0, 434]] | 79.77% | 50% |
| ABSA Restaurant | [[262, 0], [100, 0]] | 72.38% | 50% |
| All Data | [[579, 453], [181, 1151]] | 73.18% | 71.26% |

[Fig. 4. Proposed LSTM architecture.]

Table 21. Sensitivity analysis of LSTM
| Dataset | TPR | TNR | FNR | FPR | PPV | NPV | FDR | FOR | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Apurba | 79.25 | 60.56 | 20.75 | 39.44 | 73.88 | 67.46 | 26.12 | 32.54 | 76.47 |
| ABSA Sports | 100 | 0 | 0 | 100 | 79.78 | — | 20.22 | — | — |
| ABSA Restaurant | 0 | 100 | 100 | 0 | — | 72.38 | — | 27.62 | — |
| All Data | 82.81 | 62.5 | 17.19 | 37.5 | 74.03 | 73.80 | 25.97 | 26.20 | 78.17 |
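A minimal Keras sketch of the LSTM configuration listed above (assuming gensim and TensorFlow/Keras; the toy data and vocabulary size are placeholders, and the step of copying the trained word2vec vectors into the Embedding layer is omitted, since the paper does not spell out its exact wiring):

```python
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout

# word2vec with the listed settings (window 2, 100-dimensional vectors;
# the paper uses min_count=4, relaxed here so the toy corpus survives).
tokenized_texts = [["ভালো", "ছবি"], ["খারাপ", "অভিনয়"]]
w2v = Word2Vec(tokenized_texts, vector_size=100, window=2, min_count=1)

# Placeholder integer-encoded, padded sequences and labels.
vocab_size = 1000
X_train_seq = np.random.randint(1, vocab_size, size=(32, 20))
y_train = np.random.randint(0, 2, size=(32,))

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=100),
    Dropout(0.5),                                   # embedding-layer dropout: 50%
    LSTM(100, dropout=0.2, recurrent_dropout=0.2),  # output dim 100, dropouts 20%
    Dense(1, activation="sigmoid"),                 # sigmoid activation
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train_seq, y_train, epochs=10, batch_size=100)
```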
6 Discussion

In this section, we benchmark the ten algorithms. Table 22 compares all of the algorithms on all of the datasets. The algorithms are sorted by their performance on the merged dataset. According to this evaluation, LSTM performs best, followed by XGBoost, Multinomial Naive Bayes, and so forth.

Table 22. Benchmark comparison 1 (accuracy)
| Algorithm | Apurba | Sports | Restaurant | All Data |
|---|---|---|---|---|
| LSTM | 69.52% | 79.77% | 72.38% | 73.18% |
| XGBoost | 68.81% | 79.60% | 76.52% | 69.25% |
| Multinomial Naive Bayes | 68.54% | 76.65% | 75.42% | 68.82% |
| Logistic Regression | 67.72% | 80.33% | 74.86% | 68.61% |
| Bernoulli Naive Bayes | 69.16% | 80.33% | 71.82% | 67.98% |
| SVM | 66.83% | 70.77% | 69.89% | 67.94% |
| Random Forest | 60.59% | 80.88% | 73.20% | 66.58% |
| Ada-Boost | 64.22% | 79.42% | 73.20% | 65.44% |
| K-NN Classifier | 57.44% | 79.04% | 71.55% | 61.92% |
| Decision Tree Classifier | 56.75% | 75.37% | 72.10% | 60.96% |

Note that although LSTM performs best on the combined dataset, it was beaten by Random Forest on the Sports dataset and by XGBoost on the Restaurant dataset (the highlighted cells in Table 22). Another point to note is that Bernoulli Naive Bayes is twice in second-best position: on the Apurba and Sports datasets (the gray cells in Table 22). To rank the algorithms by how consistent they are, we start by assigning positions 1, 2, ..., 10 on each dataset, and then sum each algorithm's ranks across the datasets. The algorithm with the smallest sum can be ranked as the most consistent, assuming the degree of difficulty of each dataset is the same, which, admittedly, we cannot know for sure. But it still gives us a "sense" of how the algorithms perform over a range of problem domains. Table 23 shows this revised ranking (the per-dataset ranks follow from the accuracies in Table 22). It indicates that LSTM and XGBoost are tied in first place, followed by another tie between Multinomial Naive Bayes and Logistic Regression. The Decision Tree Classifier is again at the bottom of this table.

Table 23. Benchmark comparison 2 (consistency)
| Algorithm | Overall ranking |
|---|---|
| LSTM | 1st (tie) |
| XGBoost | 1st (tie) |
| Multinomial Naive Bayes | 2nd (tie) |
| Logistic Regression | 2nd (tie) |
| Bernoulli Naive Bayes | 3rd |
| SVM | 6th |
| Random Forest | 4th |
| Ada-Boost | 5th |
| K-NN Classifier | 7th |
| Decision Tree Classifier | 8th |

Since LSTM leads the ranking in both tables, we take a closer look at this algorithm. LSTM is a deep learning algorithm, and it therefore learns from data in a different way. The other models are classification algorithms using various types of features. As described earlier, LSTM learns context, or semantic meaning, from word2vec, whereas the rest of the models work on the frequency of a given word in an encoded vector representation. As the dataset contains only about 12,000 records, this is not enough to obtain consistent and accurate output, especially for LSTM, since it is learning the context or semantic lexicon; it needs more data to perform better. We tested the LSTM model with parameter tuning, input shuffling, and changes to the input size, and found that it sometimes produces very different outputs for small changes in parameter values.
7 Conclusion and Future Work

This paper presents a detailed benchmarking of ten sentiment analysis algorithms on three publicly available Bangla datasets. One of the core issues that we face in Bangla natural language processing research is the unavailability of standard datasets. In other languages, such as English or Chinese, this is not a concern. The absence of a standard, publicly available dataset means that every researcher has to first collect and label data before any training can take place. And since each new algorithm is evaluated on a different dataset, it is also virtually impossible to compare the different approaches in terms of their accuracy and quality. We hope that this paper will alleviate those problems to some degree. Since we have fine-tuned the algorithms for these particular datasets, future researchers can improve on these algorithms by comparing their performance against these benchmarked datasets, which will aid in the overall development of NLP tools for Bangla.

One essential factor in sentiment analysis that has not been addressed in this paper is multi-aspect sentence evaluation. A sentence may contain multiple clauses, and different clauses may have different sentiments. For example, consider the following quote: "Sakib's batting was good, but he did not bowl well." Here, we need to assess sentiment separately for the batting and bowling aspects. The same goes for customer reviews: a product may be good or bad from different perspectives. So a future task is to extend these benchmarking models to aspect-based sentiment analysis. There are also smarter and more complicated models for sentiment analysis, such as CNN-LSTM, where a dimensional approach can provide more fine-grained sentiment analysis [14]. We decided not to include those models, since we wanted to start the benchmarking with the fundamental, commonly used algorithms, especially within the nascent Bangla NLP domain. In the next iteration of this research, we plan to include some of these more advanced models. Finally, the size of the datasets used in this benchmarking is still minimal. We hope that other researchers will come forward and fill this gap by publicly offering larger labeled datasets for Bangla sentiment analysis.

References

1. Rahman, F., Khan, H., Hossain, Z., Begum, M., Mahanaz, S., Islam, A., & Islam, A. (2020). An annotated Bangla sentiment analysis corpus. In 2019 International Conference on Bangla Speech and Language Processing (ICBSLP).
2. Rahman, M., & Kumar Dey, E. (2018). Datasets for aspect-based sentiment analysis in Bangla and its baseline evaluation. Data, 3(2), 15.
3. Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey.
4. LeCun, Y., & Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10).
5. Le, M., Postma, M., Urbani, J., & Vossen, P. (2018, August). A deep dive into word sense disambiguation with LSTM. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 354-365). Santa Fe, New Mexico, USA: Association for Computational Linguistics.
6. Sentiment analysis using deep learning techniques: A review. International Journal of Advanced Computer Science and Applications.
7. Al-Amin, M., Islam, M. S., & Uzzal, S. D. (2017, February). Sentiment analysis of Bengali comments with word2vec and sentiment information of words. In 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE) (pp. 186-190). IEEE.
8. Chowdhury, S., & Chowdhury, W. (2014, May). Performing sentiment analysis in Bangla microblog posts. In 2014 International Conference on Informatics, Electronics & Vision (ICIEV) (pp. 1-6). IEEE.
9. Hossain, M. S., Jui, I. J., & Suzana, A. Z. (2017). Sentiment analysis for Bengali newspaper headlines (Doctoral dissertation, BRAC University).
10. Hassan, A., Amin, M. R., Mohammed, N., & Azad, A. K. A. (2016). Sentiment analysis on Bangla and Romanized Bangla text (BRBT) using deep recurrent models. arXiv:1610.00369.
11. Sumit, S. H., Hossan, M. Z., Al Muntasir, T., & Sourov, T. (2018, September). Exploring word embedding for Bangla sentiment analysis. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP) (pp. 1-5). IEEE.
12. Asimuzzaman, M., Nath, P. D., Hossain, F., Hossain, A., & Rahman, R. M. (2017). Sentiment analysis of Bangla microblogs using adaptive neuro fuzzy system. In 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (pp. 1631-1638).
13. Tuhin, R. A., Paul, B. K., Nawrine, F., Akter, M., & Das, A. K. (2019, February). An automated system of sentiment analysis from Bangla text using supervised learning techniques. In 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS) (pp. 360-364). IEEE.
14. Uddin, A. H., Dam, S. K., & Arif, A. S. M. (2019, December). Extracting severe negative sentence pattern from Bangla data via long short-term memory neural network. In 2019 4th International Conference on Electrical Information and Communication Technology (EICT) (pp. 1-6). IEEE.
15. Tabassum, N., & Khan, M. I. (2019, February). Design an empirical framework for sentiment analysis from Bangla text using machine learning. In 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE) (pp. 1-5). IEEE.
16. Alam, M. H., Rahoman, M. M., & Azad, M. A. K. (2017, December). Sentiment analysis for Bangla sentences using convolutional neural network. In 2017 20th International Conference of Computer and Information Technology (ICCIT) (pp. 1-6). IEEE.
17. Paul, A. K., & Shill, P. C. (2016, December). Sentiment mining from Bangla data using mutual information. In 2016 2nd International Conference on Electrical, Computer & Telecommunication Engineering (ICECTE) (pp. 1-4). IEEE.
18. Tripto, N. I., & Ali, M. E. (2018, September). Detecting multilabel sentiment and emotions from Bangla YouTube comments. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP) (pp. 1-6). IEEE.
19. Nabi, M. M., Altaf, M., & Ismail, S. (2016). Detecting sentiment from Bangla text using machine learning technique and feature analysis.
20. Taher, S. A., Akhter, K. A., & Hasan, K. M. A. (2018, September). N-gram based sentiment mining for Bangla text using support vector machine. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP) (pp. 1-5). IEEE.
21. Rabeya, T., Chakraborty, N. R., Ferdous, S., Dash, M., & Al Marouf, A. (2019, February). Sentiment analysis of Bangla song review: A lexicon based backtracking approach. In 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT) (pp. 1-7). IEEE.
22. Haydar, M. S., Al Helal, M., & Hossain, S. A. (2018, February). Sentiment extraction from Bangla text: A character level supervised recurrent neural network approach. In 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2) (pp. 1-4). IEEE.
23. Akter, S., & Aziz, M. T. (2016, September). Sentiment analysis on Facebook group using lexicon based approach. In 2016 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT) (pp. 1-4). IEEE.
24. Sharif, O., Hoque, M. M., & Hossain, E. (2019, May). Sentiment analysis of Bengali texts on online restaurant reviews using multinomial Naïve Bayes. In 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT) (pp. 1-6). IEEE.
25. Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861-874. doi:10.1016/j.patrec.2005.10.010
26. Powers, D. M. W. (2011). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation. Journal of Machine Learning Technologies, 2(1), 37-63.
27. Ting, K. M. (2011). Encyclopedia of Machine Learning. Springer. ISBN 978-0-387-30164-8.
28. Brooks, H., Brown, B., Ebert, B., Ferro, C., Jolliffe, I., Koh, T.-Y., Roebber, P., & Stephenson, D. (2015, January 26). WWRP/WGNE Joint Working Group on Forecast Verification Research. Collaboration for Australian Weather and Climate Research, World Meteorological Organisation. Retrieved 2019-07-17.
29. Chicco, D., & Jurman, G. (2020, January). The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics, 21(6). doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
30. Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
31. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. In Proc. of NAACL.
32. Mikolov, T., Chen, K., Corrado, G. S., & Dean, J. (2013). Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.
33. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
Retrieving YouTube Video by Sentiment Analysis on User Comment

Hanif Bhuiyan (Department of CSE, University of Asia Pacific, 74/A, Green Road, Dhaka, hanif_tushar.cse@uap-bd.edu), Jinat Ara (Department of CSE, Southeast University, Banani, Dhaka, aracse2014@gmail.com), Rajon Bardhan (Department of CSE, Southeast University, Banani, Dhaka, rajonbardhan@gmail.com), Md. Rashedul Islam (Department of CSE, University of Asia Pacific, 74/A, Green Road, Dhaka, rashed.cse@uap-bd.edu)

Proc. of the 2017 IEEE International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2017), Malaysia, September 12-14, 2017. DOI: 10.1109/ICSIPA.2017.8120658

Abstract— YouTube is one of the most comprehensive video information sources on the web, where video is uploaded continuously in real time. It is one of the most popular sites in social media, where users interact by sharing, commenting on, and rating (liking/viewing) videos. Generally, the quality, relevancy, and popularity of a video are assessed based on this rating. Sometimes irrelevant and low-quality videos rank higher in the search results because of their number of views or likes, which seems untenable. To minimize this issue, we present a Natural Language Processing (NLP) based sentiment analysis approach applied to user comments. This analysis helps to find the most relevant and popular YouTube videos for a given search. The effectiveness of the proposed scheme is demonstrated by a data-driven experiment in terms of the accuracy of finding relevant, popular, and high-quality videos.

Keywords— YouTube; comment; sentiment analysis

I. INTRODUCTION

In recent years, online social media sites such as Facebook, Twitter, YouTube, and Google+ have made space for millions of users to share their information and opinions with each other. With their rapidly increasing popularity, these sites have become a source of massive amounts of real-time data such as videos and images. Among them, YouTube (https://www.youtube.com) is one of the world's largest video sharing platforms, where videos are uploaded continuously by millions of users (companies, private persons, etc.) [1]. YouTube has emerged as a comprehensive and accessible compilation of video information on the web. It is a unique environment with many facets: multi-modal, multi-lingual, multi-domain, and multi-cultural [1]. This versatility of varied and attractive shared content draws widespread attention; therefore, the importance of YouTube to industry and the research community is increasing day by day. YouTube was ranked as the second most popular site by Alexa Internet, a web traffic analysis company, in December 2016 [2].
To increase user interaction, YouTube allows its users to express their opinion by rating viewed objects (via the like/dislike buttons) and by interacting with other community members (via the comments feature) [3]. These user activities (likes/dislikes/number of views) can serve as a global indicator of quality or popularity for a particular video [3, 4]. Moreover, this metadata serves the purpose of helping the community filter relevant opinions more efficiently [3, 5, 6]. When we search for a specific video through keywords on a specific topic, the most popular videos (rated based on views/likes by users) come first in the search panel for those keywords. Therefore, problematic issues sometimes arise in search, such as inconsistency and irrelevancy. As an example, consider a query like "The Amazing Spider-Man 2". Figure 1 shows the result, where none of the returned videos is actually that movie. Most of the results for this query are inconsistent: some are the movie's cut scenes, some are specific clips (action, romance), and some are its trailer. However, none is what we actually looked for. This kind of scenario occurs frequently on YouTube: until we play the video, we cannot tell. This situation arises because of the number of views and likes of those videos. Therefore, in order to find the perfect and relevant video, an effective and efficient process is needed that does not depend only on that metadata (likes/dislikes/number of views).

Several works have addressed these issues, and some groups have reported significant progress [3, 4, 7, 8]. The research community has shown continuous interest in analyzing and exploiting the rich content shared on YouTube. Some earlier studies attempted to improve the retrieval potential of videos using not only metadata (likes/dislikes/number of views) [1] but also comments [3, 7]. In addition, self-observation suggests that on YouTube an appropriate video carries a higher proportion of positive comments than an inappropriate one. Therefore, it is speculated that comments may be one of the vital sources for perceiving a video's quality, correctness, relevancy, and popularity. However, user comments are usually unstructured and quite difficult to analyze. Therefore, in this paper we present an NLP-based sentiment analysis approach to find popular videos that are relevant. This study asks: how useful is this metadata (comments) for improving retrieval effectiveness? The experimental results show that this metadata has the potential to correctly indicate the relevancy or popularity of a video. The proposed process is described in Section III.

[Fig. 1. Searching in YouTube.]
<s>make decisions (comment rating, topic categorization, etc.) about a particular video [6]. Comments are also used to annotate the video object [7, 9]. They reflect user behavior and can be used to find troll users [1]. Moreover, by analyzing the sentiment of comments it is possible to determine users' positivity or negativity about a video [10]. Based on comments, researchers have categorized videos into several categories [11, 12]. Furthermore, to improve the video retrieval process, Altingovde et al. proposed a method based on basic features and social features [3]. Lehner et al. worked on YouTube video comments, likes, and dislikes to show that users' perceptions (like/dislike) are influenced by valuable comments [4]. Both methods derive the popularity of a video from various features so as to help retrieve useful videos. Although these two approaches [3, 4] did impressive work on video retrieval, they relied on likes/dislikes and views, which can sometimes lead to inaccurate results. In contrast, we analyze only a large number of comments, instead of other features (likes/views, etc.), to find relevant videos, which should be useful to YouTube users.

III. METHODOLOGY

This section describes the NLP-based methodology of sentiment analysis on user comments for retrieving the most relevant YouTube videos. The proposed process works in four steps, as shown in Figure 2. First, the comment collection and preprocessing module extracts data (comments) from a specific YouTube video and performs some language preprocessing to prepare for the next stage. Second, the processed text goes through NLP-based methods to generate the datasets. Next, a sentiment classifier (SentiStrength) is applied to the datasets to calculate positivity and negativity scores. Finally, the standard deviation is applied to obtain the rating result. The methodology is explained in detail below.

A. Comment Collection and Preprocessing: The goal of this step is the acquisition of the comments of a selected YouTube video. To address this task, a focused crawler was implemented. Given a video URL, it extracts the comments of that video (up to 1000) through the web API using HTTP GET requests. The extracted comments are heterogeneous in terms of the languages and notations used, so some preprocessing is carried out on these unstructured comments to generate the datasets (a sketch of this cleaning step follows below). After extracting the comments, the following changes are performed:
• Remove all expressions that are irrelevant to the proposed methodology, such as dates ("Dec 2-2010" or "2-12-2010"), links (www.imdb.com, www.tmdb.com, etc.), numbers (12, 20, etc.), special characters ("*", "/", "!", "@", "?", "#", "&", "$"), emoticons (e.g., "<3"), and text in other languages (Chinese, Arabic, Bangla, Hindi, etc.).
• Remove all punctuation, such as periods ("."), hyphens ("-"), commas (","), and semicolons (";").

Fig. 2. Overall work process of sentiment analysis on user comments.</s>
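The paper does not publish its crawler or cleaning code, so the following Python sketch is only an illustration of the removal rules listed above: the clean_comment function name, the specific regular expressions, and the use of the standard re module are assumptions, not the authors' implementation.

```python
import re

# Approximations of the Section III-A removal rules (dates, links, numbers,
# special characters, emoticons, non-English text, punctuation).
DATE_RE = re.compile(r"\b\d{1,2}-\d{1,2}-\d{4}\b|\b[A-Z][a-z]{2}\s+\d{1,2}-\d{4}\b")
LINK_RE = re.compile(r"(https?://\S+|www\.\S+)")
NUMBER_RE = re.compile(r"\d+")
# Dropping non-ASCII spans is a rough stand-in for removing emoji and
# comments in other scripts (Chinese, Arabic, Bangla, Hindi, ...).
NON_ASCII_RE = re.compile(r"[^\x00-\x7F]+")
PUNCT_RE = re.compile(r"[*/!@?#&$.;,\-\"'<>:()]")

def clean_comment(text: str) -> str:
    """Apply the cleaning rules from Section III-A to one raw comment."""
    for pattern in (DATE_RE, LINK_RE, NUMBER_RE, NON_ASCII_RE, PUNCT_RE):
        text = pattern.sub(" ", text)
    # Collapse the whitespace left behind by the substitutions.
    return re.sub(r"\s+", " ", text).strip()

if __name__ == "__main__":
    raw = "Best movie ever!!! <3 see www.imdb.com on Dec 2-2010, 10/10"
    print(clean_comment(raw))  # -> "Best movie ever see on"
```

A production version would detect languages explicitly instead of stripping non-ASCII text wholesale, but the order of substitutions (content patterns first, punctuation last) mirrors the two-stage removal described in the list above.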
|
<s>B. Generating Data Sets: For each evaluated video, two datasets are built according to the proposed method. Both are made from the processed comment text. For Dataset 1, the MySQL stop-word list is applied to the processed text to remove all stop words, and every word is then converted to its singular form. For Dataset 2, all the adjectives [14] (the important words of the comment text) are gathered: empirically, and from self-analysis of YouTube video comments, adjectives appear to be the strongest indicators of a user's feeling and judgment about a video's quality and relevance. The Stanford Part-of-Speech tagger (POS Tagger)2 is applied to identify all adjective words, which form Dataset 2.

C. Sentiment Measure: We use the SentiStrength3 lexicon on both datasets to estimate the overall sentiment of the user comments. SentiStrength is a lexicon-based sentiment classifier that estimates the strength of positive and negative sentiment of comment words. It reports two sentiment strengths: -1 (not negative) to -5 (extremely negative), and 1 (not positive) to 5 (extremely positive). If a word within a sentence receives a rating below 1, the classifier marks it as a negative word; if it receives a rating of 1 or more, it marks it as a positive word. As an example, Table I shows the sentiment measure, according to the proposed method, for the comment "He changed the world as we know it and still remains super humble. Much respect." The table shows whether the comment's meaning about the video is positive or negative. After the sentiment values of all comments are calculated, the standard deviation of these values is computed (using the standard-deviation technique described in the next section) for the next step of the process (a code sketch of these steps follows below).

Table I: Sentiment measure of a comment
Word      pos   neg
changed    1    -1
world      1    -1
remains    1    -1
super      1    -1
humble     1    -1
respect    3    -1

D. Video Rating: In this step, a statistical comparison of the standard deviation (SD) values of the positive and negative distributions is conducted across the two datasets. After the sentiment values of both datasets are measured, the standard-deviation technique is applied. The analysis of the SD values for each group of data consistently showed a significant difference between the two groups. As an example, Figures 3(a) and 3(b) show the SD values determined for 10 videos on both types of dataset. The graphs depict the SD values for negativity and positivity, revealing that negative sentiment values predominate in negatively rated comments, whereas positive sentiment values predominate in positively rated comments. For each video, two SD values are obtained, one from each dataset, and their average determines the relevance, correctness, and quality of the video.

2 http://nlp.stanford.edu/software/tagger.shtml
3 http://sentistrength.wlv.ac.uk/

Fig. 3(a). SD value for dataset 1
Fig. 3(b). SD value for dataset 2</s>
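To make the dataset-building and scoring steps concrete, here is a minimal Python sketch. It is an illustration under stated assumptions rather than the authors' code: NLTK's pos_tag stands in for the Stanford POS tagger, a tiny hand-made dictionary stands in for the SentiStrength lexicon, and the function names (adjectives, word_scores, video_sd) are hypothetical.

```python
import statistics
from nltk import pos_tag, word_tokenize  # needs nltk's 'punkt' and tagger data

# Tiny stand-in for the SentiStrength lexicon: each word maps to a
# (positive strength in 1..5, negative strength in -1..-5) pair.
LEXICON = {"super": (1, -1), "humble": (1, -1), "respect": (3, -1), "awful": (1, -4)}

def adjectives(comment: str) -> list[str]:
    """Dataset 2: keep only adjective tokens (Penn Treebank tags JJ, JJR, JJS)."""
    return [w for w, tag in pos_tag(word_tokenize(comment)) if tag.startswith("JJ")]

def word_scores(words: list[str]) -> list[int]:
    """Look up each word's positive and negative strengths; unknown words
    get the neutral default (1, -1), matching most rows of Table I."""
    scores: list[int] = []
    for w in words:
        pos, neg = LEXICON.get(w.lower(), (1, -1))
        scores.extend([pos, neg])
    return scores

def video_sd(comments: list[str], adjectives_only: bool = False) -> float:
    """Standard deviation of all word-level scores for one video.
    adjectives_only=False approximates Dataset 1; True approximates Dataset 2."""
    scores: list[int] = []
    for c in comments:
        words = adjectives(c) if adjectives_only else word_tokenize(c)
        scores.extend(word_scores(words))
    return statistics.stdev(scores) if len(scores) > 1 else 0.0
```

Running video_sd twice per video, once over the full processed text and once over the adjective list, yields the two SD values whose average drives the rating step described in Section III-D.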
|
<s>IV. EXPERIMENT

This section presents the experimental results of the proposed sentiment analysis approach. To evaluate the approach, the experiment was conducted on 1000 randomly selected YouTube videos across precisely 10 categories (education, science and technology, entertainment, cartoon, etc.), with 100 videos per category and 1,000 comments per video. Before the experiment, a manual inspection was performed on 100 randomly selected YouTube videos: 10 volunteers were recruited to validate the output of the proposed process by checking each video's relevance and quality. The inspection suggested that a video is correct and relevant when its per-dataset average SD is above 0.5, and that videos are high quality, correct, relevant, and popular when the average SD over both datasets together is above 0.5. Therefore, the threshold value for the experiment was set to 0.5. The experimental results are shown in Table II.

Table II: Accuracy of finding relevant videos based on comments according to the search
Video category           Videos   Dataset 1   Dataset 2   Both datasets
Education                 100      73.66%      50.59%      62.125%
Science and technology    100      86.88%      63.99%      75.435%
Entertainment             100      82.33%      55.56%      68.945%
Pets and animals          100      83.77%      48.99%      66.38%
Cartoon                   100      64.55%      37.87%      51.21%
Documentary               100      79.56%      49.77%      64.665%
Movie                     100      77.24%      47.99%      63.615%
News and politics         100      80.24%      34.89%      57.565%
Games and animation       100      69.85%      45.66%      57.755%
Song                      100      76.54%      45.883%     61.216%

The results show that comments are very effective for finding relevant and popular videos on YouTube. Table II indicates that Dataset 1 performs considerably better than Dataset 2: the highest accuracy of the proposed approach, 75.435% (both datasets), occurred for science and technology videos, while the lowest, 51.21%, was obtained for the cartoon category. For Dataset 1, the average accuracy was 77.462% when whole comments were considered. Although the accuracy for Dataset 2 was lower, adjectives sometimes give a more accurate result, so considering both datasets together can give a truer picture of a particular video than using either dataset alone. For example (Table III), when searching for "Huawei Ascend p7 Review", the search panel showed that the first few videos were relevant and correct; but to determine which one is the best and most popular, and which should appear first, analyzing both datasets was essential.

Table III: An example of measuring video relevancy and quality through the proposed method
Video    Dataset 1 SD   Dataset 2 SD   Average SD (both datasets)
Video1   1.134411513    0.598874       0.866642
Video2   1.120566227    0.757596       0.939081</s>
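Under the 0.5 threshold just described, the final relevance decision reduces to averaging the two per-dataset SD values. A minimal sketch follows (the is_relevant name is hypothetical); the example values are the Table III figures for the two "Huawei Ascend p7 Review" videos:

```python
THRESHOLD = 0.5  # average-SD cutoff chosen from the manual inspection

def is_relevant(sd_dataset1: float, sd_dataset2: float) -> bool:
    """A video is judged relevant and high quality when the average of its
    two dataset SD values exceeds the threshold."""
    return (sd_dataset1 + sd_dataset2) / 2 > THRESHOLD

# Table III example: both videos pass, and Video2's higher average
# (~0.939 vs. ~0.867) is why it should be ranked first.
print(is_relevant(1.134411513, 0.598874))   # True
print(is_relevant(1.120566227, 0.757596))   # True
```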
|
<s>From the results in Table III, the first video scores higher than the second on Dataset 1, but the second video's Dataset 2 value is much better than the first's. In such cases, considering both datasets gives the real picture of the videos' quality and correctness: by the combined average, the second video is more suitable and relevant than the first and should appear first. So if both datasets are considered when retrieving videos on YouTube, the results should be much better. It can be concluded that when YouTube videos are analyzed based on comment text, the proposed approach can produce good results; the outcome depends on how deeply the text is analyzed semantically, since the more thoroughly the text is analyzed, the better the chance of higher accuracy.

V. CONCLUSION

This paper illustrates an automatic process for finding useful videos through sentiment analysis of user comments based on Natural Language Processing (NLP). Our approach evaluated the quality, relevance, and popularity of YouTube videos by considering the user sentiments expressed in comments. We analyzed a sample of almost 1 million YouTube comments. This large-scale study of YouTube video metadata (comments) using NLP and SentiStrength revealed the importance of user sentiments. The experimental results show the efficiency of the proposed approach, with a maximum accuracy of 75.435% in retrieving suitable videos, and point the way toward further analysis of comments.

REFERENCES
[1] A. Severyn, A. Moschitti, O. Uryupina, B. Plank and K. Filippova, "Multi-lingual opinion mining on YouTube," Information Processing & Management, 52(1), 2016, pp. 46-60.
[2] http://www.wikipedia.com.
[3] S. Chelaru, C. Orellana-Rodriguez and I. S. Altingovde, "How useful is social feedback for learning to rank YouTube videos?" World Wide Web, 17(5), 2013, pp. 1-29.
[4] P. Schultes, V. Dorner and F. Lehner, "Leave a Comment! An In-Depth Analysis of User Comments on YouTube," Wirtschaftsinformatik, 2013, pp. 659-673.
[5] S. Siersdorfer, S. Chelaru, J. S. Pedro, I. S. Altingovde and W. Nejdl, "Analyzing and mining comments and comment ratings on the social web," ACM Transactions on the Web (TWEB), 8(3), 2014, pp. 1-39.
[6] S. Siersdorfer, S. Chelaru, W. Nejdl and J. San Pedro, "How useful are your comments? Analyzing and predicting YouTube comments and comment ratings," in Proceedings of the 19th International Conference on World Wide Web (ACM), 2010, pp. 891-900.
[7] E. Momeni, C. Cardie and M. Ott, "Properties, Prediction, and Prevalence of Useful User-Generated Comments for Descriptive Annotation of Social Media Objects," in Proceedings of ICWSM, 2013.
[8] O. Uryupina, B. Plank, A. Severyn, A. Rotondi and A. Moschitti, "SenTube: A Corpus for Sentiment Analysis on YouTube Social Media," in LREC, 2014, pp. 4244-4249.
[9] E. Momeni, B. Haslhofer, K. Tao and G. J. Houben, "Sifting useful comments from Flickr Commons and</s>
|
<s>YouTube,” International Journal on Digital Libraries, 16(2), 2015, pp.161-179. [10] H. Lee, Y. Han, Y. Kim and K. Kim, “Sentiment analysis on online social network using probability Model,” In Proceedings of the Sixth International Conference on Advances in Future Internet, 2014, pp. 14-19. [11] K. Filippova and K. B. Hall, “Improved video categorization from text metadata and user comments,” In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, 2011, pp. 835-842. [12] A. Severyn, A. Moschitti, O. Uryupina, B. Plank and K. Filippova, “Opinion Mining on YouTube,” In ACL (1), 2014, pp. 1252-1261. [13] Z. Wu and E. Ito, “Correlation analysis between user's emotional comments and popularity measures,” In Advanced Applied Informatics (IIAIAAI), IIAI 3rd International Conference (IEEE), 2014, pp. 280-283. [14] H. Bhuiyan, K.J. Oh, M.k. Hong and G.S. Jo "An unsupervised approach for identifying the Infobox template of wikipedia article." In 18th International Conference on Computational Science and Engineering (CSE), 2015 IEEE, pp. 334-338. Proc. of the 2017 IEEE International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2017), Malaysia,September 12-14, 2017478View publication statsView publication statshttps://www.researchgate.net/publication/325119126 /ASCII85EncodePages false /AllowTransparency false /AutoPositionEPSFiles true /AutoRotatePages /None /Binding /Left /CalGrayProfile (Gray Gamma 2.2) /CalRGBProfile (sRGB IEC61966-2.1) /CalCMYKProfile (U.S. Web Coated \050SWOP\051 v2) /sRGBProfile (sRGB IEC61966-2.1) /CannotEmbedFontPolicy /Error /CompatibilityLevel 1.7 /CompressObjects /Off /CompressPages true /ConvertImagesToIndexed true /PassThroughJPEGImages true /CreateJobTicket false /DefaultRenderingIntent /Default /DetectBlends true /DetectCurves 0.0000 /ColorConversionStrategy /LeaveColorUnchanged /DoThumbnails false /EmbedAllFonts true /EmbedOpenType false /ParseICCProfilesInComments true /EmbedJobOptions true /DSCReportingLevel 0 /EmitDSCWarnings false /EndPage -1 /ImageMemory 1048576 /LockDistillerParams true /MaxSubsetPct 100 /Optimize true /OPM 0 /ParseDSCComments false /ParseDSCCommentsForDocInfo true /PreserveCopyPage true /PreserveDICMYKValues true /PreserveEPSInfo false /PreserveFlatness true /PreserveHalftoneInfo true /PreserveOPIComments false /PreserveOverprintSettings true /StartPage 1 /SubsetFonts true /TransferFunctionInfo /Remove /UCRandBGInfo /Preserve /UsePrologue false /ColorSettingsFile () /AlwaysEmbed [ true /AbadiMT-CondensedLight /ACaslon-Italic /ACaslon-Regular /ACaslon-Semibold /ACaslon-SemiboldItalic /AdobeArabic-Bold /AdobeArabic-BoldItalic /AdobeArabic-Italic /AdobeArabic-Regular /AdobeHebrew-Bold /AdobeHebrew-BoldItalic /AdobeHebrew-Italic /AdobeHebrew-Regular /AdobeHeitiStd-Regular /AdobeMingStd-Light /AdobeMyungjoStd-Medium /AdobePiStd /AdobeSongStd-Light /AdobeThai-Bold /AdobeThai-BoldItalic /AdobeThai-Italic /AdobeThai-Regular /AGaramond-Bold /AGaramond-BoldItalic /AGaramond-Italic /AGaramond-Regular /AGaramond-Semibold /AGaramond-SemiboldItalic /AgencyFB-Bold /AgencyFB-Reg /AGOldFace-Outline /AharoniBold /Algerian /Americana /Americana-ExtraBold /AndaleMono /AndaleMonoIPA /AngsanaNew /AngsanaNew-Bold /AngsanaNew-BoldItalic /AngsanaNew-Italic /AngsanaUPC /AngsanaUPC-Bold /AngsanaUPC-BoldItalic /AngsanaUPC-Italic /Anna /ArialAlternative /ArialAlternativeSymbol /Arial-Black /Arial-BlackItalic /Arial-BoldItalicMT /Arial-BoldMT /Arial-ItalicMT /ArialMT /ArialMT-Black 
|
<s>Sentiment Extraction From Bangla Text: A Character Level Supervised Recurrent Neural Network Approach

Conference Paper, February 2018. DOI: 10.1109/IC4ME2.2018.8465606

Mohammad Salman Haydar, Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh. Email: salman3045@diu.edu.bd
Mustakim Al Helal, Computer Science, University of Regina, Regina, SK, Canada. Email: mhx049@uregina.ca
Syed Akhter Hossain, Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh. Email: aktarhossain@daffodilvarsity.edu.bd

Abstract—In recent years, people have become heavily involved in the virtual world to express their opinions and feelings. Every second, hundreds of thousands of data items are gathered on social media sites. Extracting information from these data and finding their sentiment is known as sentiment analysis. Sentiment analysis (SA) is an autonomous text summarization and analysis task; it is one of the most active research areas in NLP and is also widely studied in data mining, web mining, and text mining. The significance of sentiment analysis grows day by day due to its direct impact on various businesses. However, extracting sentiment is not straightforward for the Bangla language because of its complex grammatical structure. In this paper, a deep learning model was developed and trained on Bangla text to mine the underlying sentiment, and a critical analysis was performed to compare different deep learning models across different representations of words. The main idea is to represent a Bangla sentence by its characters and extract information from the characters using a Recurrent Neural Network (RNN). The extracted information is decoded as positive, negative, or neutral sentiment.

Index Terms—Bangla, Sentiment Analysis, RNN, Deep Learning, Character level RNN, NLP in Bengali

I. INTRODUCTION

One challenge in understanding user opinions from social media is extracting the information from the large amount of opinionated text. This becomes more complicated when opinions are not expressed explicitly.
Moreover, it is a difficult and time-consuming task for human beings to classify different data and extract the opinions. Sentiment analysis has become vital to data science as online reviews grow more popular every day. There are many conventional methods for sentiment analysis, and deep learning techniques have also been used; for instance, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have been applied in practice to sentiment analysis problems. However, sentiment analysis for reviews or short texts in Bangla has not been addressed at a large scale to date. Bangla now has an increasing amount of text in social media, e.g., Facebook and blogs, so analyzing Bangla text will open a new horizon for real-life intelligence operations in online sectors. Judging others' opinions is in growing demand across various businesses.
In order to have better analytics and generate more accurate information, we need to be able to analyze people's reviews. The main contributions of this paper are as follows:
• Showing the effect of character-level representation for the Bangla language.
• Comparing a traditional word-level representation with our approach.

The paper is organized into different sections. After a brief summary of related work in Section II, we discuss data collection, preprocessing, and character encoding in Section III. The methodology, the model, and the experimental setup are discussed in Section IV. The following section demonstrates the results of our experiment and discusses the experimental process. Finally, future work and conclusions are drawn.

II. RELATED WORK

Due to its complex grammatical structure and scarce resources, Bangla has seen very little research so far, and most research on SA has been carried out on the English language. Researchers have proposed different methods to obtain state-of-the-art results. Some past work related to this topic was studied for this paper.

In [1], sentiment analysis was performed on Romanized Bangla and Bangla text collected from different social media. The authors applied a deep recurrent neural network (LSTM) to train their model and achieved an accuracy of 78% with categorical cross-entropy loss.

In [2], the authors used a semi-supervised method to identify the sentiment of Twitter posts. They first annotated the posts into positive and negative polarity using a rule-based classifier to create training data, and then used these data to train their sentiment classifier. They used Support Vector Machine (SVM) and Maximum Entropy (MaxEnt) algorithms and achieved 93% accuracy with SVM using emoticons as features.

A hybrid method was proposed in [3] to identify sentiment at the sentence level. The authors first determined whether a sentence was subjective, designed a model from a mixture of part-of-speech (POS) features collected from phrase-level similarity, and then used a syntactic model to perform sentiment analysis. In doing so they achieved an overall 63% recall rate using SVM on news data.

Other researchers have also worked on social media data to identify sentiment [4], applying sentiment analysis to a specific domain. They collected posts from a Facebook group and then applied two different methods to identify the polarity of a post: Naive Bayes and a lexicon-based approach. After the experiment, they found that in specific domains the lexicon-based approach performs better.

III. DATA COLLECTION AND PREPROCESSING

A. Data Collection

The data have been collected from Facebook pages using the Facebook Graph API. The data are mostly user comments on Facebook posts; we also collected reviews from pages, specifically e-commerce and restaurant Facebook pages, since reviews contain the direct opinions of users. We collected more than 45 thousand items from Facebook.

B. Data Preprocessing

We removed all data tuples except those containing Bangla, then tagged the data manually into Positive, Negative, and Neutral classes. Figure 1 shows what the data look like after cleaning the noisy data. (Figure 1. Dataset sample.) Table I shows the statistics of our dataset after the cleaning operation. Noisy data are those containing English words, only emoticons, or random Bangla words that are not relevant to our classification.
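To make this cleaning step concrete, here is a minimal sketch of how such a filter could look in Python; the regex and function name are our own illustration, not from the paper.

import re

# Unicode block for Bengali script
BANGLA = re.compile(r'[\u0980-\u09FF]')

def keep_bangla_only(comments):
    """Keep only comments that contain at least one Bangla character,
    dropping English-only comments, bare emoticons, and similar noise."""
    return [c for c in comments if BANGLA.search(c)]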
Table I. Data statistics
Class      Number
Positive    8,271
Negative   14,000
Neutral    12,000
Total      34,271

C. Character Encoding

To use these data in our model, we first represented the dataset in a vector space. There are different methods to represent text data; Tf-Idf, Bag of Words, and distributed representations of words (e.g., word2vec, GloVe) are some examples. The major drawback of these representations is that they rely strictly on the words of the documents: if a word appears that was not observed during training, the model cannot interpret it, and the word has no effect on the model. In most research, words are taken as the unit of a sentence, but the unit can also be a character. In our research we have taken characters as the unit of the sentence.

In [5], Xiang Zhang et al. performed an empirical study of character-level text classification using convolutional networks on English datasets and found that the method works well on real-life, user-generated data. The accuracy depends on several other factors, for instance the choice of alphabet and the size of the dataset. In our work we chose 67 characters of the Bangla language, including the space and some special characters; Figure 2 shows the characters we included. (Figure 2. Characters.) We did not include any Bangla numeric characters; the Bangla digits one, two, and three are used in place of three other Bengali letters for representation purposes, due to Python's limitation in recognizing them. We then encoded each character in a sentence with a unique id from the list of characters; this process is illustrated in Figure 3. (Figure 3. Illustration of encoding.) The sequence length is l = 1024, and we believe that within this length we can capture most of a sentence. Sentences shorter than 1024 characters were zero-padded, and sentences longer than 1024 characters were truncated to 1024. Characters other than the selected 67 are removed before the encoding phase using a regular expression.
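The encoding step can be sketched in a few lines of Python; the abbreviated alphabet and the helper names below are our own illustration (the paper's Figure 2 lists the full 67-character set).

import re

# abbreviated stand-in for the paper's 67-character set (letters plus space)
alphabet = list("অআইঈউঊঋএঐওঔকখগঘঙচছজঝঞটঠডঢণতথদধনপফবভমযরলশষসহ") + [" "]
char_to_id = {ch: i + 1 for i, ch in enumerate(alphabet)}  # id 0 reserved for padding
not_allowed = re.compile("[^" + re.escape("".join(alphabet)) + "]")

def encode(sentence, max_len=1024):
    """Map a sentence to a fixed-length sequence of character ids."""
    cleaned = not_allowed.sub("", sentence)            # drop characters outside the set
    ids = [char_to_id[c] for c in cleaned[:max_len]]   # truncate to max_len
    return ids + [0] * (max_len - len(ids))            # zero-pad shorter sentences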
IV. METHODOLOGY

Deep learning methods have been applied successfully to natural language processing problems and have achieved state-of-the-art results in this field. The Recurrent Neural Network (RNN) [6] is a kind of neural network used for processing sequential data, but researchers later found mathematical problems in modeling long sequences with RNNs [7][8]. A clever idea was proposed by Hochreiter and Schmidhuber to solve this problem: create a path and let the gradient flow over the time steps dynamically [9]. It is known as Long Short-Term Memory (LSTM), a popular and successful technique for handling the long-term dependency problem.

There are some variants of LSTM. One of them is the Gated Recurrent Unit (GRU) proposed by Cho et al. [10]. The difference between LSTM and GRU is that the GRU merges the forget and input gates into a single update gate, so it can control the flow of information without a separate memory unit, and it combines the cell state and hidden state, along with some other changes; the rest is the same as LSTM. In [11], Junyoung Chung et al. conducted an empirical study of three types of RNN and found the Gated Recurrent Unit superior to the other two. The GRU is also computationally more efficient than LSTM.

The following equations show how the hidden state $h_t$ is calculated in a GRU:

$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$   (1)
$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$   (2)
$\tilde{h}_t = \tanh(W \cdot [r_t * h_{t-1}, x_t])$   (3)
$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$   (4)

The GRU has two gates: an update gate z and a reset gate r; equations (1) and (2) show how they are calculated. The reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to keep. Finally, the hidden state $h_t$ is calculated as in equation (4).

The sentiment classification task is a step-by-step process. For example, to classify the first sentence in Figure 1, the sentence first goes through the preprocessing step: all characters except those defined above are filtered out, and the remaining sentence is represented in a vector space. Every character is given a numeric id, and the sequence is zero-padded to 1024 characters (any sentence with more than 1024 characters is truncated to 1024). This vector is fed through the model, which eventually maps the input sentence to a sentiment class. In each hidden layer of the model, lower-level and more meaningful features are extracted from the previous layer, and the output layer calculates the softmax probability of each class; the class with the highest probability is the predicted result. For simplicity and the reader's ease of understanding, we do not present the mathematical details of how the model learns through backpropagation.

A. Model

The baseline model we compared against consists of one embedding layer with 80 units; three hidden layers, two with 128 LSTM units each and one vanilla (fully connected) layer with 1024 units; and an output layer with 3 units. We used a dropout [12] layer between the output layer and the last hidden layer with probability 0.3.

In our model we used an embedding layer with 67 units; three hidden layers, two with 128 GRU units each and one vanilla layer with 1024 units, stacked serially; and finally the output layer. Here we also used a dropout of 0.3 between the output layer and the last hidden layer. Our model is illustrated in Figure 4. (Figure 4. Model structure.)

B. Experimental Setup

We ran our model for 6 epochs with a batch size of 512, used Adam [13] as the optimizer, and used categorical cross-entropy as the loss function. We set the learning rate to 0.01. Many different hyperparameters (learning rate, number of layers, layer size, optimizer) were tried, and this configuration gave an optimal result. The embedding size was kept at 67, matching our 67 characters, and dropout was set to 0.3 between the output layer and the dense layer in both models. Early stopping was used to avoid overfitting. All our experiments were done with the Python library Keras [14], a high-level neural networks API.
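As a concrete illustration, the architecture and training setup described above could be expressed in Keras roughly as follows; the layer sizes and hyperparameters follow the text, while the variable names, the ReLU activation of the dense layer, and the padding-id convention are our assumptions.

from keras.models import Sequential
from keras.layers import Embedding, GRU, Dense, Dropout
from keras.optimizers import Adam

NUM_CHARS = 67    # size of the chosen character set
SEQ_LEN = 1024    # fixed input length after padding/truncation

model = Sequential([
    # +1 input id so that 0 can serve as the padding id (our assumption)
    Embedding(input_dim=NUM_CHARS + 1, output_dim=67, input_length=SEQ_LEN),
    GRU(128, return_sequences=True),   # first recurrent layer
    GRU(128),                          # second recurrent layer
    Dense(1024, activation='relu'),    # "vanilla" fully connected layer
    Dropout(0.3),                      # dropout before the output layer
    Dense(3, activation='softmax'),    # positive / negative / neutral
])

model.compile(optimizer=Adam(lr=0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(x_train, y_train, batch_size=512, epochs=6,
#           validation_data=(x_test, y_test))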
C. Results and Discussion

The result achieved by the character-level model over the word-level model is quite good: we reached 80% accuracy with the character-level model versus 77% accuracy with our word-level baseline. In recent work, sentiment analysis in Bangla achieved at best 78% accuracy, using an LSTM with two-class classification [1].

Figure 5 shows the training and testing loss of our model. (Figure 5. Training and testing loss.) After a certain epoch, the training loss starts decreasing faster than the testing loss: the training loss keeps decreasing, while the testing loss decreases at a slower rate. We therefore stopped training at epoch 6, saving the model from overfitting. Figure 6 shows the training and testing accuracy of our character-level model, and Figure 7 compares the two models. (Figure 6. Training and testing accuracy. Figure 7. Comparison of the two models.)

The most important observation from our experiments is that a character-level RNN can work for text classification without needing the semantic meanings of words. It may also extract information even when a word is misspelled, since we process each character individually. However, more study is needed to establish this for Bangla; with it, this representation could be used to handle real-life data from social media. More research is also needed to observe the performance of this model across different datasets. The result depends on various factors, including dataset size, alphabet choice, and data quality. Our dataset is focused on a specific telecommunication-campaign domain, so this model can be helpful in certain specific applications.

We calculated accuracy as the ratio of correctly classified examples to the total number of examples in the test set:

$\text{Accuracy} = \dfrac{T_p + T_n}{T_p + T_n + F_p + F_n}$   (5)

V. FUTURE WORK AND CONCLUSION

To conclude, this paper offers a research-based study of character-level RNNs for sentiment analysis in Bangla. We compared it with a deep learning model using word-level representation and obtained a good result. However, the model is not generic, since it worked well only with data from a specific domain. We also did not address sarcastic sentences: if a positive word is used in a sentence with a negative sarcastic intent, the model will not detect this. Handling sarcasm remains a challenge because of the level of abstraction a user can create in one sentence, and intensive research is needed in this regard. Our analysis shows that a character-level RNN is an effective method for extracting sentiment from Bangla. The model is still immature and has yet to be applied to Romanized Bangla. Making the model more reliable across different data is a future goal of this project, which would make it usable at an industry level for extracting sentiment from social media reviews and comments.

REFERENCES

[1] Hassan, Asif, et al. "Sentiment analysis on Bangla and Romanized Bangla text using deep recurrent models." Computational Intelligence (IWCI), International Workshop on. IEEE, 2016.
[2] Chowdhury, Shaika, and Wasifa Chowdhury. "Performing sentiment analysis in Bangla microblog posts." Informatics, Electronics & Vision (ICIEV), 2014 International Conference on. IEEE, 2014.
[3] Das, Amitava, and Sivaji Bandyopadhyay. "Phrase-level polarity identification for Bangla." Int. J. Comput. Linguist. Appl. (IJCLA) 1.1-2 (2010): 169-182.
[4] Akter, Sanjida, and Muhammad Tareq Aziz. "Sentiment analysis on Facebook group using lexicon based approach." Electrical Engineering and Information Communication Technology (ICEEICT), 2016 3rd International Conference on. IEEE, 2016.
[5] Zhang, Xiang, Junbo Zhao, and Yann LeCun. "Character-level convolutional networks for text classification." Advances in Neural Information Processing Systems. 2015.
[6] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning representations by back-propagating errors." Nature 323.6088 (1986): 533.
[7] Hochreiter, Sepp. "Untersuchungen zu dynamischen neuronalen Netzen." Diploma thesis, Technische Universität München 91 (1991).
[8] Bengio, Yoshua, Patrice Simard, and Paolo Frasconi. "Learning long-term dependencies with gradient descent is difficult." IEEE Transactions on Neural Networks 5.2 (1994): 157-166.
[9] Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural Computation 9.8 (1997): 1735-1780.
[10] Cho, Kyunghyun, et al. "On the properties of neural machine translation: Encoder-decoder approaches." arXiv preprint arXiv:1409.1259 (2014).
[11] Chung, Junyoung, et al. "Empirical evaluation of gated recurrent neural networks on sequence modeling." arXiv preprint arXiv:1412.3555 (2014).
[12] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958.
[13] Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
[14] Chollet, François, and others. "Keras", keras.io, 2015. [Online]. Available: http://keras.io. [Accessed: 16 Nov 2017].
ASPECT BASED SENTIMENT ANALYSIS IN BANGLA DATASET BASED ON ASPECT TERM EXTRACTION

Conference Paper, May 2020.

Sabrina Haque, Tasnim Rahman, Asif Khan Shakir, Md. Shohel Arman, Khalid Been Badruzzaman Biplob, Farhan Anan Himu, Dipta Das
Daffodil International University, Dhaka, Bangladesh
haque35-1235@diu.edu.bd, tasnim.swe@diu.edu.bd, asif.swe@diu.edu.bd, arman.swe@diu.edu.bd, khalid@daffodilvarsity.edu.bd, himu.swe@diu.edu.bd, diptadas73@gmail.com

Abstract. Recent years have seen rapid growth in research on sentiment analysis. In aspect-based sentiment analysis, the idea is to take sentiment analysis a step further: find out what exactly someone is talking about, and then measure whether he or she likes or dislikes it. Sentiment analysis in the Bengali language is progressing and is considered an important research interest. Due to the scarcity of resources such as properly annotated datasets, corpora, and lexical tools such as part-of-speech taggers, aspect-based sentiment analysis has hardly been done in the Bengali language.
In this paper, we conduct experiments based on a recent work from 2018, using conventional supervised machine learning algorithms (RF, SVM, KNN) to perform one of ABSA's tasks, aspect category extraction. The work is done on two datasets, named Cricket and Restaurant, and we compare our results with the existing work. We used two traditional steps to clean the data and found that less preprocessing leads to a better F1 score. For the Cricket dataset, SVM and KNN performed better, with F1 scores of 37% and 27%. For the Restaurant dataset, RF and SVM achieved improved scores of 35% and 39%, respectively. Additionally, we selected two more algorithms, LR and NB; LR achieved the best F1 score (43%) for the Restaurant dataset among all.

Keywords: ABSA dataset, ABSA in Bangla, aspect extraction, aspect category extraction

1 Introduction

We are in the age of the internet, where we generate over 2.5 quintillion bytes of data every day [5], and sentiment analysis has become one of the key tools for making sense of these user-generated data. Sentiment Analysis (SA), or Opinion Mining, is a field of NLP (Natural Language Processing) that builds systems which try to extract opinions from text in natural language [16]. SA occupies a wide area of real-world applications, with both business importance and academic interest [3]. Typical sentiment analysis focuses on predicting the overall polarity (positive, negative, or neutral) of a given sentence.
Imagine a large dataset containing customer feedback from various sources such as social media, online reviews, or online customer surveys. Given a review like "Food is decent but service is so bad", the sentiment towards food is evidently positive, yet the review contains a strong negative sentiment towards the service facet. After classifying only the overall sentiment, the strong negative sentiment would mask the positive fact that the food was actually good [3]. To make the information more helpful and get a complete picture, the details of each piece of feedback must be retrieved. To solve this issue, Aspect Based Sentiment Analysis (ABSA) emerged as an advanced tool that makes it possible to analyze such reviews and predict opinions not only as overall feedback, but also at the aspect level [10].

The ABSA task has been part of the annual SemEval (Semantic Evaluation, a reputed workshop in the NLP domain) competition since 2014 [3]. SemEval introduced a complete dataset in English for the ABSA task and later expanded it into multilingual datasets covering eight languages over seven domains [13]. Datasets in several languages, such as Arabic, Czech, and French, were created for ABSA. Moreover, powerful libraries such as NLTK, TextBlob, and spaCy have become a major part of performing SA or ABSA. SemEval also published the benchmark Restaurant and Laptop datasets [11] with gold annotations.

In SemEval 2014 [20], the ABSA task was divided into four subtasks:

Aspect term extraction. An aspect term refers to a particular aspect of the target entity [25]. Aspect term extraction returns a list of all distinct aspect terms (e.g., "delicious", "hard disk") from a set of sentences with pre-identified entities (e.g., restaurants, laptops).

Aspect term polarity. Given a set of aspect terms within a sentence, determine whether the polarity of each aspect term is positive, negative, neutral, or conflict [25].

Aspect category detection. Given a predefined set of aspect categories (e.g., food, display), identify the aspect categories discussed in a given sentence [25]. Aspect categories are typically coarser than aspect terms, and they do not necessarily occur as terms in the sentence.

Aspect category polarity. Given a set of pre-identified aspect categories (e.g., food, display), determine the polarity (positive, negative, neutral, or conflict) of each aspect [25].

Performing ABSA involves two crucial tasks: 1) extracting the specific areas or aspects, and 2) identifying the polarity of every aspect. Since one sentence or review can contain different polarities [13], an overall decision is not always beneficial. Aspect extraction is needed to first deconstruct sentences into product features; only then can a separate polarity value be assigned to each of these features. Previous studies have developed several approaches to ABSA in English and some other languages, including supervised, semi-supervised, unsupervised, and rule-based approaches and more; most are machine learning centric [3], [6].
In early 2010, ABSA was introduced as a framework titled "aspect-based sentiment analysis" [26] to address the problem of getting only the overall sentiment from a sentence, where an aspect refers to a component or attribute of an entity. One of the first studies of both explicit and implicit aspect extraction from product reviews proposed a rule-based approach [24]. Two popular review datasets (Restaurant and Laptop) were used to evaluate the system, and the proposed framework achieved a highest precision of 94.15% among its five review categories.

In SemEval 2014, the ABSA task was divided into the four subtasks mentioned above [20], and the benchmark Restaurant and Laptop datasets with gold annotations were published [11]. Continuing from SemEval 2014, in 2015 aspect category extraction was modeled as a multiclass classification problem with features based on n-grams, parsing, and word clusters; an SVM with a linear kernel was trained for category extraction [10]. The highest F1 scores on the two datasets were 50.86% and 62.68%, respectively. In another work, a CNN was adopted in Wang's aspect-based sentiment analysis [3], which introduced a combined model of aspect prediction and sentiment prediction that surpassed the highest scores achieved by the winning team in SemEval 2015; their F1 score was 51.3%.

The reviews discussed above concern the English language. Among other languages for ABSA, Arabic [2], Czech [1], French [12], and Hindi [14] can be mentioned. The Czech language is progressing very successfully in ABSA: several labeled corpora have been built for both supervised and unsupervised training, along with morphological tools and lexicons. For aspect term extraction, both rule-based and machine learning algorithms were applied to a new dataset consisting of segments from user reviews of IT products [1]; F1 scores of 65.70% and 30.27% were achieved for short-term and long-term reviews. In another work, the authors introduced two new corpora in the Czech language for ABSA with both supervised and unsupervised training [23]. The four ABSA subtasks were performed, with word clusters created and used as features; F1 scores of 71.4% and 71.7% were obtained for aspect term and aspect category extraction.

For the Hindi language, a new dataset covering several domains was introduced [14]. CRF and SVM were used for aspect term extraction and sentiment analysis; the average F1 score was 41.07% for aspect term extraction, and the accuracy was 54.05% for sentiment classification.

Bengali is the 7th most spoken language in the world [27]. People use it frequently on social media to express reviews, sentiments, and feedback, but there is no proper dataset available, and very little work has been done on ABSA. Recently, in 2018, an annotated dataset for ABSA was published for the Bengali language, in which the authors performed aspect extraction, one of the SemEval 2014 tasks [20], [13]. The dataset contains two domains, Cricket and Restaurant. SVM, RF, and KNN classifiers were used, and the highest F1 scores of 34% and 42% were achieved on the Cricket and Restaurant domains.
The Bengali language is far behind and remains under-explored due to the low availability and lack of resources and tools such as annotated corpora, lexicons, and part-of-speech (PoS) taggers, which play a vital role in performing ABSA. Therefore, the focus of this paper is to use the annotated dataset from [13] and perform ABSA's aspect category extraction task, advancing the possibilities of aspect category extraction in the Bengali language. We have used the supervised machine learning algorithms SVM, RF, KNN, LR, and NB, and we compare our results with the previous work [13].

Table 1. Example of aspect-based sentiment analysis (Cricket and Restaurant datasets)

Original text: ব োলোররো বে পররমোনে শর্ট ল রিনে তোনত রোে কত ব রশ হয় বের্োই বিখোর র ষয়
Translated: It is a matter of watching how many runs the bowlers concede on short balls
Aspect category: Bowling; Polarity: Negative

Original text: েরিও খোিয ভোনলো রিল পররন শেো রিল র শ্রী
Translated: Although the food was good, the serving was awkward
Aspect category: Service; Polarity: Negative

The rest of the paper is organized as follows: Section 2 presents the proposed model, Section 3 describes the experimental results and discusses the major findings, and Section 4 concludes the paper with future research directions.

2 Methodology

The methodology proposed in this paper is divided into the following stages: data collection, data preprocessing, data analysis, and visualizing the outcome. The proposed model of this research is shown in Figure 1. (Figure 1. Proposed model for aspect-based sentiment analysis.)

2.1 Dataset Collection

We have used the datasets created for ABSA, designed for aspect term and polarity extraction for the first time in Bengali [13] by Md. A. Rahman and Emon K. Dey, 2018 (https://github.com/AtikRahman/Bangla_ABSA_Dataset). The two datasets are named the Cricket dataset and the Restaurant dataset.

The Cricket dataset consists of human-annotated user comments with five aspect categories: bowling, batting, team, team management, and other. The Restaurant dataset is a loosely translated Bengali version of the SemEval 2014 English dataset [10], consisting of five aspect categories: food, price, service, ambience, and miscellaneous. To give an overall understanding of the two datasets, complete statistics are presented in Table 2.

Table 2. Overall statistics of both datasets

Dataset     Reviews   Aspect categories                                    Polarity
Cricket     2,979     Batting, Bowling, Team, Team management, Other       Positive (19%), Negative (72%), Conflict (9%)
Restaurant  2,059     Food, Price, Service, Ambience, Miscellaneous        Positive (59%), Negative (23%), Conflict (12%), Neutral (6%)

2.2 Data Preprocessing

Data preprocessing plays a vital role in text analysis, helping the model understand the data. Text data contain a lot of noise, and cleaning the texts is therefore a challenge. Preprocessing significantly reduces the size of the input text documents and is done in several steps:

1) Removing special characters: We removed special characters because they sometimes create confusion; in these kinds of Bengali datasets, special characters would add complexity for classification.
2) Removing punctuation: Removing punctuation is one of the most popular and frequently applied preprocessing steps. Note that the full stop in the Bengali language is written as the danda sign ("।") rather than ".". We removed the punctuation.

2.3 Feature Extraction

We represented the reviews (texts) in numeric form to use them as features. The process is as follows:

BOW. To train statistical machine learning algorithms, the dataset must be in numeric form, so we first converted the texts into numbers. Bag of Words is one approach for doing so: it is a representation of text that reflects the occurrence of words within a document [4].

TF-IDF. This is an approach similar to BOW, but with a slightly different idea behind it. It has two terms: TF (Term Frequency) refers to the number of times a word occurs in a document, and IDF (Inverse Document Frequency) refers to how important the word is across the documents [19]. The equations for TF and IDF are:

$\mathrm{TF}(t) = \dfrac{\text{number of times term } t \text{ appears in a document}}{\text{total number of terms in the document}}$

$\mathrm{IDF}(t) = \log_e \dfrac{\text{total number of documents}}{\text{number of documents containing term } t}$

We used the scikit-learn library [21], whose TfidfVectorizer class converts the features into TF-IDF feature vectors. We limited the maximum number of features to 2500, so only the 2500 most frequently occurring words are used to create the feature vector (a minimal sketch follows Section 2.4 below). For classification, we passed the known label corresponding to each review.

Table 3. Sample Bengali preprocessed data

Original review: েময় োাংলোনিনশর ভোগ্য ড্র বরনখনি, েোহয় হোর চোড়ো উপোয় রিনলোেো.!!
Processed review: েময় োাংলোনিনশর ভোগ্য ড্র বরনখনি েোহয় হোর চোড়ো উপোয় রিনলোেো
Tokenization: ‘সেময়’ ‘ োাংলোনিনশর’ ‘ভোগ্য’ ‘ড্র’ ‘বরনখনি’ ‘েোহয়’ ‘হোর’ ‘চোড়ো’ ‘উপোয়’ ‘সরিনলোেো’
Unigrams: ‘সেময়’ ‘ োাংলোনিনশর’ ‘ভোগ্য’ ‘ড্র’ ‘বরনখনি’ ‘েোহয়’ ‘হোর’ ‘চোড়ো’ ‘উপোয়’ ‘সরিনলোেো’

2.4 Fitting Algorithms to Train

Finding a well-performing machine learning algorithm for a particular dataset is a challenging task. We went through a trial-and-error process to determine a sufficient list of algorithms that work on these datasets, studying several algorithms that have been used for ABSA [8][15][20][13][22]. Finally, we selected the following frequently used algorithms for classification:

1. To compare results with [13], we used the same algorithms: SVM (Support Vector Machine), RF (Random Forest), and KNN (K-Nearest Neighbors).
2. Algorithms not used in [13]: LR (Logistic Regression) and NB (Naive Bayes).

Logistic Regression (LR) is a regression analysis appropriate for a dichotomous (binary) dependent variable. We used logistic regression to describe the data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, or interval-level independent variables [17]. The Naive Bayes classifier, in turn, is a surprisingly powerful algorithm for predictive modeling; we selected NB because it has often been used in sentiment analysis and text classification tasks, and it remains less affected by data scarcity [7][9].
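The feature-extraction step of Section 2.3 can be sketched as follows with scikit-learn; the TfidfVectorizer class and the 2500-feature cap come from the text, while the placeholder data and variable names are ours.

from sklearn.feature_extraction.text import TfidfVectorizer

reviews = ["... preprocessed Bengali review ...", "... another review ..."]  # placeholders
labels = ["Bowling", "Food"]  # one aspect-category label per review (placeholders)

# keep only the 2500 most frequently occurring terms, as in the text
vectorizer = TfidfVectorizer(max_features=2500)
X = vectorizer.fit_transform(reviews)  # sparse TF-IDF feature matrix
y = labels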
2.5 Languages and Tools

The system was implemented in Python 3 (Jupyter Notebook) under Anaconda. We used the Python module scikit-learn, which provides a set of modules for machine learning and data mining [23]. For NLP tasks we used NLTK, a leading platform for building Python programs, rich with libraries for NLP tasks [18].

3 Results and Discussion

3.1 Dataset Split

Working with supervised machine learning requires the dataset to be split into two parts, a training set and a testing set. In this work the split was done with the train_test_split function imported from the scikit-learn library [23]. We used 80% of the dataset for training and 20% for testing (see the sketch after Table 4 below).

3.2 Aspect Category Classification

Table 4 presents the average precision, recall, and F1 scores for the aspect category classification task using the supervised machine learning algorithms SVM, RF, KNN, LR, and NB, calculated over the different aspect categories of both datasets. The measurements use four quantities: true positives (tp), true negatives (tn), false positives (fp), and false negatives (fn).

Table 4. Experimental results using RF, SVM, KNN, LR, and NB

Dataset     Algorithm   Precision   Recall   F1 score
Cricket     RF          0.39        0.36     0.37
            SVM         0.40        0.35     0.35
            KNN         0.27        0.27     0.27
            LR          0.41        0.34     0.34
            NB          0.23        0.27     0.18
Restaurant  RF          0.70        0.27     0.35
            SVM         0.79        0.30     0.39
            KNN         0.39        0.38     0.38
            LR          0.42        0.43     0.43
            NB          0.25        0.26     0.17

For the Cricket dataset, Logistic Regression gave the highest precision, 41%, and Random Forest gave the highest recall, 36%. Since both precision and recall matter for aspect category extraction, Random Forest achieved the highest F1 score, 37%. For the Restaurant dataset, Logistic Regression provided a precision of 42% and the highest recall of 43%, resulting in the highest F1 score of 43%. The other algorithms RF, SVM, and KNN also performed well, with F1 scores of 35%, 39%, and 38%. For both the Cricket and Restaurant datasets, the NB algorithm performed significantly worse, with F1 scores of 18% and 17%.

It is apparent that for the Cricket dataset the scores for RF, LR, and SVM are comparatively better overall, while KNN and NB performed worse. Conversely, for the Restaurant dataset, SVM and LR provided the highest results.
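The split-train-evaluate procedure described above could look roughly like this with scikit-learn; default hyperparameters and weighted averaging are our assumptions, since the paper does not state them.

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_recall_fscore_support

# 80/20 split, as stated in Section 3.1 (X, y come from the TF-IDF step)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

classifiers = {
    "RF": RandomForestClassifier(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(),
    "NB": MultinomialNB(),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    p, r, f1, _ = precision_recall_fscore_support(y_test, pred, average="weighted")
    print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")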
Tables 5 and 6 compare our experimental results with those of [13] for the Cricket and Restaurant datasets across the different algorithms. For the Cricket dataset, the F1 scores from our experiments are 35% for SVM and 27% for KNN, higher than the previous results of 34% and 25%, respectively. RF gave the same result as [13], namely 37%.

Table 5. Model performance comparison on the Cricket dataset

Model   Precision (prev.)   Precision (ours)   Recall (prev.)   Recall (ours)   F1 (prev.)   F1 (ours)
SVM     0.71                0.40               0.22             0.35            0.34         0.35
KNN     0.45                0.27               0.21             0.27            0.25         0.27
RF      0.60                0.39               0.27             0.36            0.37         0.37

The results for the Restaurant dataset are outlined in Table 6. The SVM and RF algorithms gave better F1 scores than before, 39% and 35%, where the previous results were 38% and 33%. KNN performed worse than before, with an F1 score of 38%, down from 42%.

Table 6. Model performance comparison on the Restaurant dataset

Model   Precision (prev.)   Precision (ours)   Recall (prev.)   Recall (ours)   F1 (prev.)   F1 (ours)
SVM     0.77                0.79               0.30             0.30            0.38         0.39
KNN     0.54                0.39               0.34             0.38            0.42         0.38
RF      0.64                0.70               0.26             0.27            0.33         0.35

From the experimental results and discussion, we conclude that our experiments provide improved results over the previous work [13] even though we eliminated some preprocessing steps; this lighter preprocessing leads to better F1 scores on both datasets. Aspect category extraction is a multi-label classification problem [13] in which one opinion may carry several categories, so our supervised classifiers may miss some aspect categories. Better results could be attained if the datasets were trained in a more advanced way using POS tagging.

4 Conclusions and Recommendations

In this study we used conventional supervised machine learning models for aspect category extraction with less preprocessing. We used two traditional steps to clean the data and achieved better results than the Bangla ABSA dataset paper on both datasets. Recent research uses better and more detailed datasets for ABSA, which yields higher accuracy; a more thoroughly annotated dataset like SemEval for the Bengali language could lead to impressive results.

Moreover, sentiment analysis is gaining popularity for spam review/comment detection and fraudulent app detection. Increasing research on more non-English languages and aspects can lead to a more precise notion of users and their reviews, which can support better business decisions as well as improved cyber security. Unsupervised approaches have also made a good impact on ABSA, even in other languages such as Czech. Powerful deep learning neural network models can lead to more satisfying results in ABSA, as shown by the paper compared against the SemEval 2015 task. In future work, we would like to explore more advanced deep learning techniques (CNNs) applied in NLP for ABSA, for both aspect category extraction and sentiment analysis. We would also like to use a Bangla POS tagger to train a model for aspect term extraction and to train classifiers with more preprocessing steps.

References
[1] Ales Tamchyna, Ondrej Fiala, Katerina Veselovská. (2015) Czech aspect-based sentiment analysis: A new dataset and preliminary results. In: ITAT 2015.
[2] Mohammad Al-Smadi, Omar Qawasmeh, Bashar Talafha, Muhannad Quwaider. (2015) Human annotated Arabic dataset of book reviews for aspect-based sentiment analysis. In: 3rd International Conference on Future Internet of Things and Cloud (FiCloud 2015), IEEE, Rome, Italy, pp. 726-730. DOI: 10.1109/FiCloud.2015.62
[3] Bo Wang, Min Liu. (2015) Deep learning for aspect-based sentiment analysis. https://cs224d.stanford.edu/reports/WangBo.pdf. Accessed 22 April 2019.
[4] Brownlee, Jason. (2017) A gentle introduction to the bag-of-words model. https://machinelearningmastery.com/gentle-introduction-bag-words-model/. Accessed 12 May 2019.
[5] domo.com. Data never sleeps 5.0. https://www.domo.com/learn/data-never-sleeps-5. Accessed 26 Aug 2019.
[6] Hasib, Tamanna Rahin, Saima Ahmed. (2017) Aspect-based sentiment analysis using SemEval and Amazon datasets. http://dspace.bracu.ac.bd/xmlui/handle/10361/9542. Accessed 21 April 2019.
[7] Heba Ismail, Saad Harous, Boumediene Belkhouche. (2016) A comparative analysis of machine learning classifiers for Twitter sentiment analysis. In: 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2016), Turkey.
[8] Hussam Hamdan, Patrice Bellot, Frederic Bechet. (2015) Lsislif: CRF and logistic regression for opinion target extraction and sentiment polarity analysis. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Association for Computational Linguistics, Denver, Colorado, pp. 753-758. DOI: 10.18653/v1/S15-2128
[9] Jurafsky, D. (2018, October) Naive Bayes lecture notes, Stanford University. https://web.stanford.edu/class/cs124/lec/naivebayes.pdf. Accessed 24 April 2019.
[10] Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Suresh Manandhar, Ion Androutsopoulos. (2015) SemEval-2015 Task 12: Aspect based sentiment analysis. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Association for Computational Linguistics, Denver, Colorado, pp. 486-495. DOI: 10.18653/v1/S15-2082
[11] Maria Pontiki, Juli Bakagianni. (2014, November 04) SemEval-2014 ABSA Test Data - Gold Annotations Corpus. http://metashare.elda.org/repository/browse/semeval-2014-absa-test-data-gold-annotations/b98d11cec18211e38229842b2b6a04d77591d40acd7542b7af823a54fb03a155/. Accessed 22 February 2019.
[12] Marianna Apidianaki, Xavier Tannier, Cecile Richart. (2016) Datasets for aspect-based sentiment analysis in French. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), European Language Resources Association (ELRA), Slovenia, pp. 1122-1126.
[13] Md. Atikur Rahman, Emon Kumar Dey. (2018) Datasets for aspect-based sentiment analysis in Bangla. MDPI Data. DOI: 10.3390/data3020015
[14] Md. Shad Akhtar, Asif Ekbal, Pushpak Bhattacharyya. (2016) Aspect based sentiment analysis in Hindi: Resource creation and evaluation. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), European Language Resources Association, Slovenia, pp. 2703-2709.
[15] Mohamad Syahrul Mubarok, Adiwijaya, Muhammad Dwi Aldhi. (2017) Aspect-based sentiment analysis to review products using Naive Bayes. In: International Conference on Mathematics: Pure, Applied and Computation. DOI: 10.1063/1.4994463
[16] MonkeyLearn. (n.d.) Sentiment analysis. https://monkeylearn.com/sentiment-analysis/#what-is-sentiment-analysis. Accessed 12 March 2019.
[17] Murat Korkmaz, Selami Güney, Şule Yüksel Yiğiter. (2012) The importance of logistic regression implementations in the Turkish livestock sector and logistic regression implementations/fields. HR.Ü.Z.F.: 25-36.
[18] NLTK 3.4.4 documentation. (n.d.) https://www.nltk.org/. Accessed 22 May 2019.
[19] Panchal, A. (2010, June) Text summarization using TF-IDF. In: Towards Data Science. https://towardsdatascience.com/text-summarization-using-tf-idf-e64a0644ace3. Accessed 22 May 2019.
[20] Pontiki, M.; Galanis, D.; Pavlopoulos, J.; Papageorgiou, H.; Androutsopoulos, I.; Manandhar, S. (2014) SemEval-2014 Task 4: Aspect based sentiment analysis. In: Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Association for Computational Linguistics, Dublin, Ireland, pp. 27-35. DOI: 10.3115/v1/S14-2004
[21] scikit-learn. (n.d.) sklearn.feature_extraction.text.TfidfVectorizer. Retrieved May 2019 from https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
[22] Shaika Chowdhury, Wasifa Chowdhury. (2014) Performing sentiment analysis in Bangla microblog posts. In: 2014 International Conference on Informatics, Electronics & Vision (ICIEV), IEEE, Dhaka, Bangladesh. DOI: 10.1109/ICIEV.2014.6850712
[23] sklearn 0.0. (n.d.) https://pypi.org/project/sklearn/. Accessed 28 May 2019.
[24] Soujanya Poria, Erik Cambria, Lun-Wei Ku, Chen Gui, Alexander Gelbukh. (2014) A rule-based approach to aspect extraction from product reviews. In: Proceedings of the Second Workshop on Natural Language Processing for Social Media (SocialNLP), Association for Computational Linguistics and Dublin City University, Dublin, Ireland, pp. 28-37. DOI: 10.3115/v1/W14-5905
[25] Tomáš Hercig, Tomáš Brychcín, Lukáš Svoboda, Michal Konkol. (2016) Unsupervised methods to improve aspect-based sentiment analysis in Czech. Computación y Sistemas 20(3), pp. 365-375. DOI: 10.13053/cys-20-3-2469
[26] Tun Thura Thet, Jin-Cheon Na, Christopher S.G. Khoo. (2010, November) Aspect-based sentiment analysis of movie reviews on discussion boards: 823-848. SAGE Journals. DOI: 10.1177/0165551510388123
[27] Wikipedia. (n.d.) Bengali language. https://en.wikipedia.org/wiki/Bengali_language. Accessed 28 April 2019.
An Annotated Bangla Sentiment Analysis Corpus

International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September 2019. ©2019 IEEE

Fuad Rahman, Mahfuza Begum, Aminul Islam, Habibur Khan, Sadia Mahanaz, Zakir Hossain, Ashraful Islam
Apurba Technologies Ltd., Dhaka, Bangladesh
fuad@apurbatech.com, mahfuza@apurbatech.com, aminul@apurbatech.com, habib@apurbatech.com, sadia@apurbatech.com, zakir@apurbatech.com, ashraful@apurbatech.com

Abstract – This paper presents a Bangla corpus specifically targeted at sentiment analysis and made available to researchers under an open-source licensing scheme (see Section VI for details). We collected and manually annotated over 10,000 sentences with sentiment polarity. We then moved to the word domain and annotated over 15,000 words derived from these sentences with sentiment polarity. Each entry in the corpus has been cross-annotated by at least two, and sometimes three, annotators to ensure quality. As a prerequisite of creating a high-quality sentiment analysis corpus, we also had to build a secondary corpus for Bangla word stemming, which has likewise been cross-validated by at least two, and sometimes three, annotators.

Index Terms – Sentiment Analysis, NLP, Bangla Corpus, Annotation, Open Source Corpus

I. INTRODUCTION

Sentiment analysis is a very important part of natural language processing. While very robust solutions already exist for English, in both academic and commercial domains, work in this area for the Bangla language is still in its infancy. As the focus of sentiment analysis tools has shifted from rule-based to machine learning methods, the need for annotated, ground-truth data to train these solutions is of utmost importance. Unfortunately, there is almost no serious corpus for the Bangla language available for sentiment analysis, forcing researchers to stitch together their own small corpora, which are neither standardized nor rigorously quality controlled. In this paper, we present a fully annotated corpus for sentiment analysis.

A. Brief Background

In recent times, some research on Bangla sentiment analysis has been reported. One common approach is to translate a Bangla word and then use the polarity of the English translation [1][6][7]. While this works for straightforward words, it cannot handle the nuances of the language. For example, the word "জ"ল" means "complex" [2] and has a negative sense, but in Bangla it is often used in a positive sense, for example, "তািমম আজ জ"ল !খলেছ". Another example is the word "খাওয়া", commonly translated as "eat", whose usual polarity is neutral; but in the sentence "তার খাওয়া নাই", the polarity is distinctly negative. The reason for the popularity of this type of approach is simple: a distinct lack of a ground-truth corpus suited for training machine learning algorithms.
Although some datasets exist for Bangla sentiment analysis, most are not publicly available, and the publicly available ones are small: [4] has about 4,000 sentences, whereas [1] has about 7,000. One of the most significant resources is described in [6]; that corpus contains around 10,000 sentences collected from Facebook, Twitter, YouTube, online news portals and product review pages.

II. SENTIMENT ANALYSIS CORPUS

A. Data Source

The source of this data is the online Sports section of the daily Prothom Alo (Fig. 1). Most of the source sentences for the corpus were collected from its comments section (Fig. 2).

[Fig. 1: Daily Prothom Alo online edition. Fig. 2: Comments in the Sports section.]

B. Methodology

The corpus was prepared through a combination of manual and automated steps. Initially, the sentences appearing in the Sports section were copied manually. These data were labeled by hand into three categories, "positive", "negative" and "neutral", by a Content Team and then cross-checked by a QA Team. The truth labeling was done at two levels.

Sentence Level: The first level is the sentence level (Fig. 3), where the polarity applies to the overall sentence.

[Fig. 3: Sentence level polarity.]

• Manual collection and preprocessing: We collected more than 10,000 sentences from comments on the online Bangla newspaper Prothom Alo (https://www.prothomalo.com/), primarily from cricket news. We only included valid and complete single sentences. The task was distributed among the five team members, all native Bangla speakers. Each member tagged the sentiment polarity of the sentences allocated to him or her; another member then cross-checked the tags, and a third member re-assigned the polarity of any sentence on which the first two disagreed. The same methodology was applied to sentence validation. In every case, the final polarity was assigned by at least two team members in agreement; if all three disagreed, the sentence was considered too ambiguous and dropped from the corpus.

• Automatic processing: We removed unwanted characters, words and symbols from the sentences (a code sketch of this filtering step follows after this list), such as:
o ['১','২','৩','৪','৫','৬','৭','৮','৯','1','2','3','4','5','6','7','8','9']
o ['�','?',',',';',':','.','(','-','–','/','_','*','%','!','\'','+','<','>','—','০','=']
o ['’','"','|','…',')','`','@','#','৷','‘','&','–','_','😗','🎐','👌','😆','😂']
o [A-Z]
o [a-z]
We also removed duplicate sentences.
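A minimal sketch of the character filtering and de-duplication just described; the blacklist below abbreviates the character lists above, and the function names are ours:

```python
# Minimal sketch of the automatic sentence cleaning described above: strip
# blacklisted characters (Bangla and Latin digits, punctuation, emoji) plus
# Latin letters, then drop duplicate sentences. BLACKLIST abbreviates the
# full lists given in the text.
import re

BLACKLIST = set("১২৩৪৫৬৭৮৯0123456789?,;:.()-–—/_*%!'+<>০=’‘\"|…`@#৷&😗🎐👌😆😂")

def clean_sentence(sentence: str) -> str:
    kept = [c for c in sentence
            if c not in BLACKLIST and not ("a" <= c.lower() <= "z")]
    # collapse the whitespace left behind by removed characters
    return re.sub(r"\s+", " ", "".join(kept)).strip()

def deduplicate(sentences):
    seen, out = set(), []
    for s in map(clean_sentence, sentences):
        if s and s not in seen:  # keep the first occurrence only
            seen.add(s)
            out.append(s)
    return out
```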
Word Level: The second level assigns polarity to individual words (Fig. 4).

[Fig. 4: Word level polarity.]

• Automatic processing: We tokenized all collected sentences and removed numbers, digits and symbols from the tokenized word list. We then identified the unique words, a list of nearly 15,000 entries. We stemmed this list using two different stemmers, StemmerR [10] and StemmerP [11], and identified the words for which both stemmers produced the same result (a sketch of this agreement check follows after the tables below).

• Manual collection and preprocessing: We checked whether each un-stemmed word was already a root, and manually corrected the roots of words that had been stemmed wrongly. Once a clean word list was created, we tagged the polarity of each word manually, using the same three-tiered approach described above. This step also identified some ambiguous words, which were dropped from the final corpus. A snapshot of this corpus is shown in Fig. 5.

[Fig. 5: Corpus after stemming using two different algorithms.]

III. CORPUS STATISTICS

This section quantifies the corpus.

TABLE 1: RAW DATA COLLECTED FOR THE SENTIMENT CORPUS
Total number of sentences: 10,008
Total number of words before filtering: 19,731
Total number of words after filtering: 14,874
Ambiguous words: 2,140
Words accepted: 12,734

Table 1 shows the statistics of the raw data collected for the sentiment corpus.

TABLE 2: COMPARING THE TWO STEMMERS (StemmerP / StemmerR)
Words presented for stemming: 14,874 / 14,874
Times the stemmer was able to stem a word: 14,662 / 14,874
Times the stemmer was not able to stem a word: 212 / 14,874

Table 2 compares the performance of the two stemmers. Verification was completely manual and three-level peer-reviewed. We found that in 1.5% of cases the stemmers did not agree with each other; to correct this, we fixed the stemming manually.

TABLE 3: MANUAL STEMMING
Accuracy of StemmerP: 58.08%
Accuracy of StemmerR: 59.65%
Words manually stemmed: 5,138
Spellings corrected manually: 682

Table 3 shows the results of manually fixing stemming issues. After all of this cleanup and manual fixing, the final corpus has the entries shown in Table 4.

TABLE 4: FINAL CORPUS ENTRIES BY POLARITY (Positive / Negative / Neutral)
Total number of sentences: 3,183 (33.0%) / 4,110 (42.67%) / 2,337 (24.27%)
Total number of words: 824 (6.47%) / 1,068 (8.38%) / 10,804 (84.80%)
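The word-level pipeline keeps a word when StemmerR [10] and StemmerP [11] agree on its root and routes disagreements to manual correction. A minimal sketch of that agreement check, with the two stemmers abstracted as callables (their real APIs are not shown in the paper, so these stand-ins are hypothetical):

```python
# Sketch of the two-stemmer agreement step described above. stemmer_r and
# stemmer_p are hypothetical stand-ins for StemmerR [10] and StemmerP [11];
# any callable that maps a word to its stem will do.

def split_by_agreement(words, stemmer_r, stemmer_p):
    agreed, disputed = {}, []
    for word in words:
        r, p = stemmer_r(word), stemmer_p(word)
        if r == p:
            agreed[word] = r               # both stemmers produced the same root
        else:
            disputed.append((word, r, p))  # route to manual correction
    return agreed, disputed

# Agreed entries go straight into the corpus; disputed ones are fixed by hand,
# mirroring the manual-stemming step reported in Table 3.
```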
IV. CORPUS PROPERTIES

A. WordCloud of Collected Sentences

Fig. 6 shows a WordCloud of the collected sentences with positive and negative polarity. Note that the words were filtered through a stop-word list before the cloud was built; we identified 398 stop words. The same filtering applies to the WordClouds of the positive-only sentences (Fig. 7) and the negative-only sentences (Fig. 8).

[Fig. 6: WordCloud of corpus sentences with both positive and negative polarity. Fig. 7: WordCloud of corpus sentences with a positive polarity. Fig. 8: WordCloud of corpus sentences with a negative polarity.]

B. Frequency Distribution of Top 20 Words

Fig. 9 shows the frequency of the top 20 words in our corpus, Fig. 10 the top 20 positive words, and Fig. 11 the top 20 negative words.

[Fig. 9: Word frequency of top 20 words. Fig. 10: Word frequency of top 20 positive words. Fig. 11: Word frequency of top 20 negative words.]
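A minimal sketch of how the top-20 frequency distributions behind Figs. 9-11 can be computed, assuming the sentences are already cleaned. STOP_WORDS stands for the 398-entry stop-word list mentioned above; the variable and function names are ours:

```python
# Sketch of the stop-word-filtered top-20 word frequency distribution.
from collections import Counter

STOP_WORDS = set()  # load the 398-entry Bangla stop-word list here

def top_words(sentences, n=20):
    counts = Counter(
        token
        for sentence in sentences
        for token in sentence.split()
        if token not in STOP_WORDS
    )
    return counts.most_common(n)
```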
V. SOME OBSERVATIONS

We have created this corpus from a very focused source, the sports news domain, and have incorporated sentences, unique words and stemmed words. We extensively cleaned the data using a combination of filtering and a stop-word list, employing both manual and automated processes. Every entry is cross-validated by at least two, and sometimes three, annotators, and we manually corrected misspellings and stemming errors. So this is not just an annotated ground-truth corpus for sentiment analysis; it is also a corpus for training stemming engines.

It was also very important for us to build auditability into the corpus: every word and root word is cross-referenced against its source sentences, so the corpus can be adapted for other NLP tasks with ease. The other aspect of our corpus design is the transparency of the data collection process, a natural extension of the auditability mentioned above.

VI. OPEN SOURCE LICENSING

Not-for-profit and academic organizations and government agencies may use this corpus for noncommercial linguistic research and education only. For-profit organizations may use this corpus after signing a commercial technology development contract. Not-for-profit and academic organizations and government agencies cannot use this corpus to develop or test products for commercialization, nor can they use it in any commercial product or for any commercial purpose.

VII. CONCLUSIONS AND FUTURE WORK

We have presented a Bangla corpus specifically targeted at sentiment analysis and described its methodology, source and cleanup process. In the future, we plan to extend the corpus to support aspect-based sentiment analysis for Bangla, where different clauses of a single sentence may carry different sentiments. We also plan to add sentiments for phrases, idioms and clauses, and to offer a set of machine learning models that can use this corpus.

REFERENCES
[1] A. Roy and A. A. Singh. Sentimental Analysis (Bengali). https://github.com/abhie19/Sentiment-Analysis-Bangla-Language.
[2] English & Bengali Online Dictionary & Grammar. http://www.english-bangla.com/bntoen/index/%E0%A6%9C%E0%A6%9F%E0%A6%BF%E0%A6%B2.
[3] T. Hoque. Word and Doc2Vec file for Bengali Sentiment Analysis. https://www.kaggle.com/tazimhoque/bengali-sentiment-text/.
[4] A. Rahman. Bangla Aspect Based Sentiment Analysis Dataset. https://github.com/AtikRahman/Bangla_ABSA_Datasets.
[5] M. A. Rahman and E. K. Dey. Datasets for Aspect-Based Sentiment Analysis in Bangla and its Baseline Evaluation. 4 May 2018. Institute of Information Technology, University of Dhaka, Dhaka 1000, Bangladesh. https://res.mdpi.com/data/data-03-00015/article_deploy/data-03-00015.pdf?filename=&attachment=1
[6] A. Hassan, M. R. Amin, A. K. Al Azad and N. Mohammed. Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) using Deep Recurrent Models. 24 Nov 2016. Dept. of Computer Science and Engineering, University of Liberal Arts Bangladesh (ULAB), Bangladesh. https://docs.google.com/viewerng/viewer?url=http://resources.apurbatech.com/publication/upload/1610.00369.pdf
[7] D. Das and S. Bandyopadhyay. Developing Bengali WordNet Affect for Analyzing Emotion. Int. Conf. on the Computer Processing of Oriental Languages, pp. 35-40, 2010.
[8] C. Goddard. Natural Language Processing. 2nd edition, Chapter 5, CRC Press, Taylor & Francis, Editors: N. Indurkhya and F. J. Damerau, pp. 92-120.
[9] M. D. Huq. Semantic Values in Translating from English to Bangla. The Dhaka University Journal of Linguistics, Vol. 1, No. 2, August 2008, pp. 45-66.
[10] R. Kamal. Bangla Stemmer. https://github.com/rafi-kamal/Bangla-Stemmer.
[11] S. Islam. Rule-based Bengali Stemmer written in Python. https://pypi.org/project/py-bangla-stemmer/.

Depression Analysis of Bangla Social Media Data using Gated Recurrent Neural Network

1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT 2019), May 2019. DOI: 10.1109/ICASERT.2019.8934455

Abdul Hasib Uddin, Durjoy Bapery, Abu Shamim Mohammad Arif
Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh
Email: abdulhasibuddin@gmail.com, durjoybapery@gmail.com, shamimarif@yahoo.com

Abstract – Nowadays, micro-blogging sites like Twitter, Facebook and YouTube have become very popular for social interaction. People express their depression over social media, which can be analyzed to identify its causes. Most research on emotion and depression analysis is based on questionnaires and academic interviews in non-Bengali languages, especially English. These traditional methods are not always suitable for detecting human depression. In this paper, we introduce a Gated Recurrent Neural Network based depression analysis approach for Bangla social media data. We collected Bangla data from Twitter, Facebook and other sources. We selected four hyper-parameters, namely the number of Gated Recurrent Unit (GRU) layers, layer size, batch size and number of epochs, and present step-by-step tuning of these hyper-parameters. The results show the effects of these tuning steps and how they can help configure GRU models for high accuracy on a significantly smaller dataset. This work will help psychologists and concerned authorities detect depression among Bangla-speaking social media users, and will help researchers implement natural language processing tasks with deep learning methods.

Index Terms – Depression, Bangla, Social Media, Gated Recurrent Neural Network, Hyper-parameter Tuning

I. Introduction

The objective of artificial intelligence is to imitate and analyze human behavior. Detecting human sentiment and emotion is an important part of this, for which machine and deep learning approaches are widely used. Sentiment and emotion classification can be approached from two different perspectives: detecting sentiment and emotion from image data, and detecting them from textual data. In both cases, psychological as well as technical knowledge is essential to analyze the data. In a general sense, sentiment analysis covers the overall area of positive, negative and neutral classification. Emotions such as happiness, sadness, depression and disgust are deeper sentiments that are much more difficult to analyze; some are deeper than others and require high-level psychological knowledge and more sophisticated technical approaches to study.

Depression is one of the deepest human emotions. With a mechanical way of life, more and more people are falling into depression [1]. Depression is a mental disorder that harms not only the affected person but also that person's morality, sometimes leading to antisocial acts, even suicide and murder. According to the World Health Organization (WHO), over 300 million people were suffering from depression in 2017, an increase of more than 18% between 2005 and 2015 [1].

Most research work on depression analysis is survey- and one-to-one-communication-based.
Technical knowledge of depression analysis can help psychologists detect and analyze depression and the causes behind it. Micro-blogging sites like Facebook, Twitter and LinkedIn are now more popular than ever, and on these sites people frequently share their daily activities and emotional reactions. Wang et al. used Sina, a Chinese micro-blogging website, to collect data and applied psychological knowledge to extract features from it [2]. They demonstrated the use of Naive Bayes, Decision Tree and rule-based classifiers for their depression detection task.

Bangla is the seventh most spoken language in the world [3]. However, depression analysis in Bangla has yet to be done. Riyadh et al. used a Naive Bayes classifier with Laplacian Add-1 smoothing for emotion classification in Bangla [4]. They collected Twitter data from Sentiment140 and used only the core texts; their classification covered happiness, sadness, surprise and disgust. In another work, Chowdhury et al. performed sentiment analysis in Bangla [5] using a hybrid mechanism consisting of both lexicon-based and machine learning approaches: a semi-supervised bootstrapping approach with Support Vector Machine (SVM) and Maximum Entropy (ME), trained on a corpus collected from Twitter.

Neural network based deep learning approaches are becoming more and more popular for semantic analysis such as emotion classification. In 2017, Suhara et al. introduced a depression detection procedure based on self-reported histories using a Recurrent Neural Network (RNN) [6]. Hassan et al. used a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), a deep learning model, for sentiment analysis in Bangla [7]; their work involved both normal and Romanized Bangla. Can et al. performed multilingual sentiment analysis on limited data using an RNN-based model [8], assembling data from Amazon reviews, Yelp restaurant reviews and competition restaurant reviews, and covering English, Spanish, Turkish, Dutch and Russian. Ayata et al. used both deep learning and machine learning approaches for sentiment analysis [9]: they gathered data from Twitter and trained Support Vector Machine (SVM), Random Forest (RF), Naive Bayes and LSTM-RNN models, concluding that in most cases deep neural network based approaches perform better than machine learning approaches; only SVM achieved slightly better performance than LSTM.

In this paper, we introduce Gated Recurrent Unit (GRU) neural network based depression analysis on a small Bangla dataset collected from Twitter, Facebook and native Bangla speakers via a Google form. Labeling human-emotion data requires knowledge of human behavior, so we hired a Sociology student, experienced in dealing with human emotions, to label our dataset. We applied step-by-step hyper-parameter tuning while implementing the GRU model, compared each implementation, and show how to tune different hyper-parameters to accomplish better performance on a very small dataset. We manipulated four different hyper-parameters: model size, number of layers, batch size and number of epochs. Apart from applying the GRU model to depression detection from Bangla social media data, our other aim was to focus specifically on GRU model performance over a significantly small dataset. In Bangla, it is difficult to assemble a large dataset due to the lack of prior research. Hence, for hyper-parameter tuning, we only focused on the aforementioned four hyper-parameters, without following any specific hyper-parameter optimization approach.
The rest of this paper is arranged as follows. In Section II we discuss some relevant existing research; our proposed method is described in Section III; in Section IV the results and relevant discussion are presented; and we conclude in Section V along with some future plans.

II. Related Works

In this section, some available research related to machine and deep learning based sentiment and emotion analysis is described.

A. Machine Learning for Emotion Analysis in Bangla

A machine learning based human emotion analysis approach is described by Riyadh et al. in [4]. The authors selected happiness, sadness, surprise and disgust for their classification task. They collected tweets from Sentiment140, labeled them manually, eliminated tweets with no emotion, and created a balanced dataset of 3,750 tweets, with 3,500 for training and 250 for testing. For feature extraction, a unigram model and a unigram model with POS tagging were used, with bag-of-words frequencies as features for a Multinomial Naive Bayes classifier. To avoid the zero-probability problem of Naive Bayes, they used Laplacian Add-1 smoothing, assuming the training dataset to be very large and adding one to each count. For 4-way classification they gained an average accuracy of 81% with the unigram model and 79.5% with the unigram model plus POS tagging; for the 5-way classification task, the averages were 66% and 64.8%, respectively.

B. RNN for Depression Forecasting

A novel method for depression forecasting was established by Suhara et al. using an RNN [6]. The authors developed an LSTM-RNN based deep learning algorithm that produces embedding layers for every categorical variable, including a day-of-the-week variable to capture day-of-the-week effects. They collected data from 2,382 self-declared depressed persons over a 22-month span via a smartphone application. Their model forecast 84.6%, 82.1% and 80.0% of severe depression cases 1, 3 and 7 days in advance, respectively, and detected overall depression cases with 88.6%, 86.0% and 84.2% accuracy for the same horizons.

C. Machine Learning for Depression Analysis

Wang et al. conducted an experiment on Sina micro-blog, one of the most influential social media services in China [2]. They merged psychological and machine learning knowledge. From the psychological perspective, they used ten features, such as first person singular, first person plural, positive emoticons, negative emoticons, mentioning, being forwarded, being commented, original blogs, blogs posted between 0:00 and 6:00 o'clock, and one other. From the technical perspective, they used Decision Tree, Naive Bayes and rule-based classifiers. Their method contained three main steps: sentence and word segmentation, polarity calculation of sub-sentences, and polarity calculation of sentences. Their model achieved 80% precision.

D. CNN and RNN for Natural Language Processing

Yin et al. presented a systematic comparison between CNN and RNN in [10]. Their work covered a wide range of representative NLP tasks, providing basic guidance for deep neural network (DNN) selection.
They systematically compared CNN, LSTM and GRU models over a broad range of NLP tasks, such as sentiment classification, relation classification, textual entailment, answer selection, question-relation matching, path query answering and part-of-speech tagging. In each experiment, they trained their models from scratch, used a basic setup without complex tricks like batch normalization, searched for optimal hyper-parameters, and analyzed the basic architecture and utilization of the models. The experiment concluded that RNNs generally perform better and are more robust for NLP tasks, except in some cases such as key-phrase recognition.

III. Methodology

Our proposed approach for depression detection from a Bangla dataset is divided into two steps: creating the Bangla dataset and training the GRU model. We put special effort into preparing the Bangla dataset and show the steps of hyper-parameter tuning.

A. Creating Bangla Dataset

We used Twitter as our primary data source. We collected 5,000 Bangla tweets and 210 depressed Bangla statements from native Bangla speakers via a Google form. We prepared our dataset in three steps: data pre-processing, data labeling and data post-processing.

1) Data Pre-processing: For the pre-processing task, we created a whitelist containing Bangla alphanumeric characters, punctuation and space. We scanned each tweet character by character, filtered out all non-whitelisted characters, and removed multiple consecutive white spaces. An example of Twitter data cleaning is given in Fig. 2. We only had to pre-process the tweets, as the data collected via the Google form were already clean.

[Figure 2: Example of data pre-processing.
Original data: নদীর নীেচ েদেশর পৰ্থম েরল সুড়ঙ্গ ৈতির হেয়েছ কলকাতায়। এবার েমেটৰ্া রাইেলর েসৗজেন েদেশর সবেথেক বড় ভূগভর্স্থ েরল ইয়াডর্ েপেত চেলেছ কলকাতায়। যা ৈতির হেচ্ছ কলকাতা িবমানবন্দেরর িঠক পােশই। https://ebela.in/state/kolkata-metro-railway-is-constructing-the-country-s-biggest-underground-rail-yard-near-airport-dgtl-1.839389?ref=state-new-stry
After pre-processing: নদীর নীেচ েদেশর পৰ্থম েরল সুড়ঙ্গ ৈতির হেয়েছ কলকাতায়। এবার েমেটৰ্া রাইেলর েসৗজেন েদেশর সবেথেক বড় ভূগভর্স্থ েরল ইয়াডর্ েপেত চেলেছ কলকাতায়। যা ৈতির হেচ্ছ কলকাতা িবমানবন্দেরর িঠক পােশই। : ?]

2) Data Labeling: Deep human emotions like depression are extremely difficult to analyze. To make our depression detection dataset more reliable, we hired a Bangla-speaking Sociology student, experienced in dealing with human emotions, to manually label the data. A total of 5,000 Bangla tweets were labeled. After labeling, we eliminated incomplete and ambiguous tweets; the initial tweet set consisted of 984 depressive and 2,930 non-depressive tweets.

3) Data Post-processing: After pre-processing and labeling, we eliminated redundant data from both the tweet set and the Google-form statement set. After removing redundancy, the dataset was imbalanced, with 1,289 non-depressive and only 588 depressive items. An imbalanced dataset may lead to misleading accuracy and over-fitting, where accuracy on the training data is high while accuracy on unseen data is low. Hence, we randomly selected only 588 non-depressive items from the labeled dataset to balance the 588 depressive items, giving a balanced dataset of only 1,176 items.

Stratifying the Dataset: The balanced dataset was very small for training a neural network, so we stratified it to reduce the effect of its small size while training the GRU model. In this step, the data were rearranged in a one-to-one fashion, one depressive item followed by one non-depressive item, repeated across the whole balanced dataset (see the sketch below). After stratifying, we considered this dataset final and used it to train our GRU model.

Part of the labeled and stratified dataset is given in Table I. The table shows some Bangla data along with their original labels; the "Data no." column signifies the position of each item in the final dataset and shows how the dataset was stratified, with one non-depressive item followed by one depressive item.

Table I: Part of the labeled and stratified dataset
Data no. 34 | ওরাল িরহাইেডৰ্শন সিলউশন ও আর এস পৰ্স্তুত করার ১২ ঘন্টার মেধ বা েরিফৰ্জােরটের রাখেল তা ২৪ ঘন্টার মেধ পান কের েফলা উিচত । | Non-depressive
Data no. 33 | রাজনাথ িসং সংসেদ বলেলন এই নািক ফাইনাল নয় । এই লক্ষ েলাক েক আবার নািক সুেযাগ েদওয়া হেব নাগিরতব্ পৰ্মাণ করার । এটা িক আইওয়াশ নয়? এরা আর িক পৰ্মাণ েদেবন ? পাসেপাটর্ , আধার েভাটার কাডর্ সব ই েতা বািতল কের িদেয়েছ । আবার অেনক পুরুেষর নাম আেছ অথচ তােদর পিরবােরর নাম েনই । এরা িক করেব ? | Depressive
Data no. 118 | অাহা ! গভীর গহী েনর িনঃশ সুর সা েথ অাঁধা েরর ঘৰ্াণ ! | Non-depressive
Data no. 117 | পৰ্ায় ৪০ লক্ষ বাঙািল েদশ ছাড়া হেত চেলেছ সামলােত পারেব েতা পৃিথবী ? | Depressive
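A minimal sketch of the balancing and one-to-one stratification just described; variable and function names are ours, and each item is assumed to be a (text, label) pair:

```python
# Sketch of the balancing and one-to-one stratification described above.
import random

def balance_and_stratify(depressive, non_depressive, seed=0):
    random.seed(seed)
    # randomly keep only as many non-depressive items as there are depressive
    # ones (588 of each in the paper, 1,176 items in total)
    non_depressive = random.sample(non_depressive, len(depressive))
    # interleave: one depressive item followed by one non-depressive item
    stratified = []
    for dep, non_dep in zip(depressive, non_depressive):
        stratified.extend([dep, non_dep])
    return stratified
```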
B. Training GRU Model

For training our Gated Recurrent Neural Network model, we divided the training phase into two sub-steps: dataset splitting, and hyper-parameter tuning and training.

1) Dataset Splitting: Each time before training our model, we split our dataset into an 80% training, 10% validation and 10% testing set. The validation set was used only to avoid over-fitting; we compared the trained models according to their corresponding testing accuracies.

2) Hyper-parameter Tuning and Training: To implement our model, we used the TensorFlow implementation described in [11]. To get accurate results from our small dataset, we fixed the learning rate at a very low value of 0.0001; a low learning rate helps the model avoid over-fitting, so that it detects depression accurately on any given data. We tuned four GRU model hyper-parameters in three steps: tuning GRU size, tuning batch size with number of epochs, and tuning number of GRU layers with number of epochs. Fig. 1 shows the effect of hyper-parameter tuning in each implementation (impl) in terms of validation accuracy. A rough code sketch of such a model follows below.

[Figure 1: Comparing GRU implementation performances in terms of validation accuracies.]
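The paper trained its GRU with the TensorFlow notebook cited as [11]; the following Keras sketch is our own rough equivalent of the best configuration reported in Section IV (5 GRU layers of size 512, learning rate 0.0001, batch size 5, 3 epochs). The vocabulary size and embedding dimension are placeholders, not values from the paper:

```python
# Hypothetical Keras sketch of the GRU classifier; not the authors' code.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM = 20000, 128  # placeholders, not reported in the paper

def build_gru_model(gru_size=512, n_layers=5, lr=1e-4):
    model = models.Sequential()
    model.add(layers.Embedding(VOCAB_SIZE, EMBED_DIM))
    for i in range(n_layers):
        # every GRU layer except the last must return the full sequence
        model.add(layers.GRU(gru_size, return_sequences=(i < n_layers - 1)))
    model.add(layers.Dense(1, activation="sigmoid"))  # depressive / non-depressive
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Best reported configuration (impl 8): batch size 5 over 3 epochs.
# model = build_gru_model()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=5, epochs=3)
```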
a) Tuning for GRU Size: We initially fixed the number of GRU layers at 5, the batch size at 10 and the number of epochs at 5 (470 iterations in total), and tuned the GRU size over these parameters, training with sizes 64, 128, 256, 512 and 1024. The impact of tuning GRU size is shown in Fig. 1 (impl 1 to 5); our model gained the highest accuracy for GRU size 512 (impl 4).

b) Tuning Batch Size with Number of Epochs: After tuning GRU size, we fixed the size at 512 and the number of layers at 5, and tuned the batch size along with the number of epochs. We trained our model with batch size and number of epochs set to 50 and 5 (90 iterations), 50 and 10 (180), 5 and 3 (560), 1 and 2 (1880), 25 and 10 (370), and 25 and 5 (185), respectively. As time complexity increases as the batch size decreases, we had to reduce the number of epochs for small batch sizes. With batch size 5 over 3 epochs, our model accomplished the highest accuracy for this tuning step. The effects of tuning batch size with the corresponding number of epochs are shown in Fig. 1 (impl 6 to 11), which indicates how accuracy changes with different batch sizes and numbers of epochs.

c) Tuning Number of GRU Layers with Number of Epochs: In the final step of hyper-parameter tuning, we fixed the GRU size at 512 and the batch size at 5, and tuned the number of GRU layers. To reduce memory complexity, we decreased the number of epochs while increasing the number of GRU layers. We trained with GRU layers and number of epochs set to 3 and 3 (560 iterations), 10 and 3 (560), and 5 and 10 (1880), respectively. For this step, our model attained the best result with 3 layers over 3 epochs. The influence of the number of GRU layers on the learning curve is shown in Fig. 1 (impl 12 to 14).

IV. Result and Discussion

All the steps of hyper-parameter tuning, along with their corresponding test accuracies, are given in Table II.

Table II: GRU hyper-parameter tuning results
Impl | GRU size | Layers | Batch size | Epochs | Test accuracy
1  | 64   | 5  | 10 | 5  | 59.1%
2  | 128  | 5  | 10 | 5  | 70.0%
3  | 256  | 5  | 10 | 5  | 67.3%
4  | 512  | 5  | 10 | 5  | 74.5%
5  | 1024 | 5  | 10 | 5  | 69.1%
6  | 512  | 5  | 50 | 5  | 52.0%
7  | 512  | 5  | 50 | 10 | 61.0%
8  | 512  | 5  | 5  | 3  | 75.7%
9  | 512  | 5  | 1  | 2  | 70.3%
10 | 512  | 5  | 25 | 10 | 57.0%
11 | 512  | 5  | 25 | 5  | 61.0%
12 | 512  | 3  | 5  | 3  | 74.8%
13 | 512  | 10 | 5  | 3  | 69.6%
14 | 512  | 5  | 5  | 10 | 56.5%

In the first step of hyper-parameter tuning, we started with a low GRU size of only 64 and gradually increased it to 1024. Our GRU model achieved 59.1%, 70.0%, 67.3%, 74.5% and 69.1% accuracy for GRU sizes 64, 128, 256, 512 and 1024, respectively; with size 512, accuracy was the highest for this step [impl 4]. These results show that, in most cases, accuracy grows with GRU size.

As the first step produced the highest accuracy for GRU size 512, we fixed this size in the second step and tuned batch size along with the number of epochs. Our GRU model gained 52.0% accuracy for batch size 50 over 5 epochs, 61.0% for batch size 50 over 10 epochs, 75.7% for batch size 5 over 3 epochs, 70.3% for batch size 1 (that is, online learning) over 2 epochs, 57.0% for batch size 25 over 10 epochs, and 61.0% for batch size 25 over 5 epochs. These results signify that accuracy falls for a very large batch size, and that the required number of epochs depends strongly on the batch size; for a fixed batch size, accuracy increases with the number of epochs. Hence, when applying a GRU, a balanced batch size with a reasonably large number of epochs is required for high accuracy. The highest accuracy in this step was 75.7% for batch size 5 over 3 epochs [impl 8]; the online learning approach reached 70.3% over 2 epochs.
In the last step of hyper-parameter tuning, we fixed the GRU size at 512 and the batch size at 5, and tuned the number of GRU layers along with the number of epochs. Our GRU model accomplished 74.8% accuracy with 3 layers over 3 epochs, 69.6% with 10 layers over 3 epochs, and 56.5% with 5 layers over 10 epochs. These results led us to the conclusion that a large number of GRU layers does not necessarily help performance on a small dataset. The highest accuracy in this step was 74.8% for 3 layers over 3 epochs [impl 12].

A comparison among the test accuracies of all GRU model implementations is presented in Fig. 4, and the validation loss and validation accuracy of the best implementation [impl 8] are shown in Fig. 3. The best accuracy we obtained was 75.7% for a 5-layer GRU with size 512, batch size 5 and learning rate 0.0001 over 3 epochs.

[Figure 3: Validation loss vs. validation accuracy for the best GRU implementation. Figure 4: Comparing GRU implementation test accuracies.]

This work is the first attempt to utilize a deep learning approach for depression analysis in Bangla, so we are unable to directly compare our results with relevant prior work. However, the method described by Suhara et al. is quite similar to ours [6]: they also used a deep learning approach for depression analysis, but they were able to create one of the largest depression datasets, with 345,158 records over a 22-month span, and could therefore apply an LSTM-RNN. By contrast, our small dataset contains only 1,176 items, which is insufficient to train an LSTM model; this led us to the GRU model. Also, their work forecast depression in English, whereas ours detects depression in Bangla. Embracing all these constraints, our depression detection model still gains a maximum of 75.7% accuracy, compared to their LSTM model's maximum of 88.6%. With a comparably large dataset, we might accomplish higher performance.

V. Conclusion and Future Works

In this research, we established a GRU model based depression detection technique by analyzing Bangla text data collected from social media (Twitter, Facebook and a Google form). We applied a sequence of hyper-parameter tuning steps and showed their effects on a small dataset. The results indicate that, for a small dataset, accuracy depends on the combination of batch size and number of epochs; the required number of epochs depends strongly on the batch size, and for a fixed batch size accuracy increases with the number of epochs. We conclude that, when applying a GRU model to a significantly small dataset, a balanced batch size with a reasonably large number of epochs is required for adequately high performance.

This research will guide further investigations into analyzing Bangla social media data for detecting depression from both small and large datasets using deep learning models. Additionally, multiple experienced annotators could be used to label the dataset and make it more reliable. Furthermore, we tuned only four hyper-parameters without following any specific hyper-parameter optimization approach; approaches such as evolutionary optimization, random search and gradient-based optimization could be applied for more in-depth analysis.

References
[1] "Depression," World Health Organization, 04-Jul-2017. [Online]. Available: https://www.who.int/mental_health/management/depression/en/. [Accessed: 21-Dec-2018].
[2] X. Wang, C. Zhang, Y. Ji, L. Sun, L. Wu, and Z. Bao, "A depression detection model based on sentiment analysis in micro-blog social network," in Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 201-213, Springer, 2013.
[3] "Languages of the World," Ethnologue. [Online]. Available: https://www.ethnologue.com/. [Accessed: 02-Feb-2019].
[4] A. Z. Riyadh, N. Alvi, and K. H. Talukder, "Exploring human emotion via twitter," in 2017 20th International Conference of Computer and Information Technology (ICCIT), pp. 1-5, IEEE, 2017.
[5] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in bangla microblog posts," in 2014 International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1-6, IEEE, 2014.
[6] Y. Suhara, Y. Xu, and A. Pentland, "Deepmood: Forecasting depressed mood based on self-reported histories via recurrent neural networks," in Proceedings of the 26th International Conference on World Wide Web, pp. 715-724, International World Wide Web Conferences Steering Committee, 2017.
[7] A. Hassan, M. R. Amin, A. K. Al Azad, and N. Mohammed, "Sentiment analysis on bangla and romanized bangla text using deep recurrent models," in 2016 International Workshop on Computational Intelligence (IWCI), pp. 51-56, IEEE, 2016.
[8] E. F. Can, A. Ezen-Can, and F. Can, "Multilingual sentiment analysis: An rnn-based framework for limited data," arXiv preprint arXiv:1806.04511, 2018.
[9] D. Ayata, M. Saraclar, and A. Ozgur, "Busem at semeval-2017 task 4a sentiment analysis with word embedding and long short term memory rnn approaches," in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 777-783, 2017.
[10] W. Yin, K. Kann, M. Yu, and H. Schütze, "Comparative study of cnn and rnn for natural language processing," arXiv preprint arXiv:1702.01923, 2017.
[11] Mchablani, "mchablani/deep-learning," GitHub. [Online]. Available: https://github.com/mchablani/deep-learning/blob/master/sentiment-rnn/Sentiment_RNN.ipynb. [Accessed: 30-Jan-2019].

Public Sentiment Analysis Based on Social Media Reactions for Bangla Natural Language

Md. Tazimul Hoque
Student ID: 012161021
Department of Computer Science and Engineering
United International University
A thesis submitted for the degree of M.Sc. in Computer Science & Engineering
June 2020
© Md. Tazimul Hoque, 2020

Approval Certificate

This thesis titled "Public sentiment analysis based on social media reactions for Bangla natural language", submitted by Md. Tazimul Hoque, Student ID: 012161021, has been accepted as Satisfactory in fulfillment of the requirement for the degree of Master of Science in Computer Science and Engineering.

Board of Examiners:
Dr. Mohammad Nurul Huda, Supervisor. Professor & Director - MSCSE, Department of Computer Science & Engineering (CSE), United International University (UIU), United City, Madani Avenue, Badda, Dhaka 1212, Bangladesh.
Dr. Md. Saddam Hossain Mukta, Head Examiner. Assistant Professor, Department of CSE, UIU.
Dr. Swakkhar Shatabda, Examiner-I. Associate Professor & Undergraduate Program Coordinator, Department of CSE, UIU.
Rubaiya Rahtin Khan, Examiner-II. Assistant Professor, Department of CSE, UIU.
Dr. Salekul Islam, Ex-Officio. Professor & Head of the Dept., Department of CSE, UIU.

Declaration

I, Md. Tazimul Hoque, declare that this thesis titled "Public sentiment analysis based on social media reactions for Bangla natural language" and the research work presented in it are my own. I confirm that:
• This research work was completed while in candidature for an M.Sc. degree at United International University.
• Where any portion of this research work has been submitted previously for any degree or any other qualification at United International University or any other institution, this has been clearly stated.
• Where I have discussed any published work of other researchers, this is properly attributed in my writing.
• Where I have quoted from the work of others, the source is always given as a reference. With the exception of such quotations, this research work is entirely my own.
• I have acknowledged all main sources of help.
• Where the thesis is based on work done jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Signed: Md. Tazimul Hoque
Date: 25 June, 2020

Abstract

Representing text documents as vectors, i.e. in numerical format, has been a revolution in natural language processing. It places similar parts of text very close to each other, making it easy to classify them or find similarities among them, even between pairs of words, since these vectors also capture the way we use words or parts of documents. While word2vec is a technique that represents each word as a vector, doc2vec takes this to another level by representing a whole sentence or document as a vector. Being able to represent an entire document as a vector allows comparing a substantial number of words or sentences at a time, which can save computational power as well as bandwidth.
This relatively newer doc2vec technique has not yet been applied to Bengali sentiment analysis, and its feasibility is also unknown. In this study, we have trained doc2vec and word2vec models using a corpus constructed from 10,500 Bengali documents. The corpus consists of three types of data differentiated by their polarity, i.e. positive, negative and neutral. We then employed several machine learning algorithms and compared their classification accuracy. To evaluate the classifiers' performance, we applied the k-fold cross-validation technique, using document vectors obtained directly from the doc2vec model and TF-IDF averaged document vectors obtained from the word2vec model.

Published Papers

Work relating to the research presented in this thesis has been published by the author in the following peer-reviewed conference:
1. Hoque, M. T., Islam, A., Ahmed, E., Mamun, K. A., and Huda, M. N. (2019, February). Analyzing Performance of Different Machine Learning Approaches With Doc2vec for Classifying Sentiment of Bengali Natural Language. In 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE) (pp. 1-5). IEEE.

Acknowledgements

Firstly, I am grateful and express my gratitude to Almighty Allah for giving me the strength to complete this research work successfully. My research work titled "Public sentiment analysis based on social media reactions for Bangla natural language" has been completed to fulfill the requirements of the MS CSE program. I am thankful to the people from whom I received guidance, cooperation and suggestions throughout the journey. I would like to express my gratitude to my thesis supervisor, Dr. Mohammad Nurul Huda, Professor & Director - MSCSE, Dept. of CSE, United International University, for his supervision, continuous support and encouragement, and for giving me the opportunity to do this research work with him. Finally, I am very thankful to my parents for their encouragement, continuous support and endless love.

Contents

List of Figures
List of Tables
1 Introduction
  1.1 Motivation
  1.2 Aim and Objectives
  1.3 Contribution
  1.4 Organization of the Thesis
2 Background Materials
  2.1 Literature Review
    2.1.1 Non-Bengali Languages
    2.1.2 Bengali Language
  2.2 Natural Language Processing
  2.3 Sentiment Analysis
    2.3.1 Different Levels of Sentiment Analysis
      2.3.1.1 Document level
      2.3.1.2 Sentence level
      2.3.1.3 Entity level
  2.4 Corpus Construction
    2.4.1 Scripting
    2.4.2 Preprocessing
    2.4.3 Data Set Labeling
  2.5 Data Model Construction
    2.5.1 Word Embedding Techniques
      2.5.1.1 Word2Vec
      2.5.1.2 Sentence2Vec
      2.5.1.3 Doc2Vec
  2.6 Types of Machine Learning Algorithms
    2.6.1 Supervised Machine Learning
    2.6.2 Unsupervised Machine Learning
    2.6.3 Semi-supervised Machine Learning
    2.6.4 Reinforcement Machine Learning
  2.7 Machine Learning Tools for Classification
    2.7.1 Regular Machine Learning Classifiers
      2.7.1.1 Logistic Regression (LR)
      2.7.1.2 Linear Discriminant Analysis (LDA)
      2.7.1.3 Support Vector Machine (SVM)
      2.7.1.4 K-Nearest Neighbors
      2.7.1.5 Decision Tree (DT)
      2.7.1.6 Gaussian Naive Bayes (GaussianNB)
    2.7.2 Deep Learning Classifiers
      2.7.2.1 Long Short-term Memory (LSTM)
      2.7.2.2 Bidirectional Long Short-term Memory (BLSTM)
      2.7.2.3 Sequential Model (SM)
  2.8 Performance Evaluation
    2.8.1 Confusion Matrix
    2.8.2 Precision
    2.8.3 Recall
    2.8.4 F1-Score
    2.8.5 Accuracy
    2.8.6 Macro Average for Precision, Recall and F1-score
    2.8.7 k-Fold Cross Validation
  2.9 Summary
3 Proposed Method
  3.1 Overview of proposed system
  3.2 Corpus Creation
    3.2.1 Data Collection
    3.2.2 Data Filtering
    3.2.3 Data Labeling
  3.3 Data Model Selection
  3.4 Choosing Machine Learning Classifiers
  3.5 Result and Performance Evaluation
  3.6 Summary
4 Experimental Analysis
  4.1 Experiments
    4.1.1 Corpus Construction
    4.1.2 Model Generation
      4.1.2.1 TF-IDF Averaged Word2vec Model
      4.1.2.2 Doc2vec Model
    4.1.3 Classifier Design
    4.1.4 Summary
  4.2 Result and Analysis
    4.2.1 k-Fold Cross Validation
      4.2.1.1 10-Fold Cross Validation - TF-IDF Averaged Word2vec
      4.2.1.2 10-Fold Cross Validation - Doc2vec
    4.2.2 Doc2vec vs TF-IDF Averaged Word2vec
    4.2.3 Discussion
    4.2.4 Summary
5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Limitations
  5.3 Future Work
A My Publications

List of Figures
2.1 Architecture of CBOW and Skip-gram [1]
2.2 PV-DM [2]
2.3 PV-DBOW [2]
2.4 doc2vec model with tag vector [3]
2.5 LSTM block containing input, output and forget gates [4]
2.6 BLSTM classifier design [5]
3.1 Proposed architecture of our research work
4.1 Flow of data collection and corpus preparation from Facebook post

List of Tables
2.1 Confusion Matrix for Binary Class Classifier
4.1 10-fold accuracy scores for TF-IDF averaged document vectors (Word2vec)
4.2 10-fold mean performance scores for TF-IDF averaged document vectors (Word2vec)
4.3 10-fold accuracy scores for doc2vec document vectors
4.4 10-fold mean performance scores for doc2vec document vectors
4.5 Comparison of 10-fold mean accuracy scores gained for TF-IDF averaged word2vec and doc2vec models

List of Algorithms
1 Preparing Positive/Negative Documents from Facebook Page Posts
2 Checking a post is either categorizable or not

Chapter 1
Introduction

This chapter presents an overview of the introductory aspects of our research work in sentiment analysis. It includes the current problem statement, the motivation for working on this topic, the aim and objectives of our work, and the contributions made by this research. The organization-of-the-thesis section gives a brief outline of the remaining chapters.

1.1 Motivation

In recent years, various social media platforms, e.g. Facebook, YouTube and Twitter, have played a vital role in day-to-day life due to their ease of access, portability and affordability [6, 7]. According to Statista, around 2.46 billion people were actively using social media worldwide as of 2017, and this number is expected to reach 3.02 billion in 2021, with Facebook remaining the most popular platform as of April 2018 [8]. Another survey, conducted in September 2018 by StatCounter, says that 89.04% of social media users in Bangladesh interact using Facebook [9]. A very large amount of data has accumulated on the Internet as a result of this enormous engagement with social media platforms, which makes a significant contribution to sentiment analysis (SA) [6]. To be specific, analyzing the reactions of users, accumulated from social media contents and posts, allows categorizing them into several labels, i.e. sad, angry, love.
Sentiment analysis, also known as opinion mining or mood/emotion analysis, is a well-known part of natural language processing (NLP). The year 2001, or thereabouts, can be marked as the beginning of research awareness in the field of SA and opinion mining [10]. Research papers mentioning "sentiment analysis" focus specifically on the application of classifying text according to its polarity: positive (good), negative (bad) or neutral. Nowadays, however, SA refers broadly to the computational treatment of public opinion or reviews in textual format, processing natural language data, computational linguistics and biometrics to systematically extract, identify, quantify and study affective states along with subjective information [11]. In addition, recent advances in machine learning research, particularly deep learning based methods such as the recurrent neural network (RNN), provide opportunities to infer decisions by training a model for SA. Moreover, the latest key technique, "doc2vec", developed by Google Inc. [12], in which a document is represented by a vector, can be an emerging tactic for classifying emotions or opinions from social media reactions and posts. Although a lot of research work has been conducted in the area of SA, it is mainly based on social media posts written in English; these areas are yet to be explored for social media posts in the Bengali language.

1.2 Aim and Objectives

The main goal of this experiment is to create a data corpus and transform it into a suitable document embedding model of numeric vectors, and then to analyze different machine learning techniques in order to evaluate the performance and accuracy of the classifiers in the context of Bengali sentiment analysis. The objectives of our study are:
• Create a standard Bengali sentiment classification corpus.
• Categorize documents according to selected human sentiments.
• Construct a document embedding model of numeric vectors to work with machine learning algorithms.
• Analyze the performance of deep learning and traditional machine learning approaches for Bengali sentiment analysis.

1.3 Contribution

Our work follows a systematic literature review process with standard steps for searching, screening, raw data extraction, model generation, experimenting with different ML classifiers and reporting. Initially, we searched for relevant papers, research reports, journals and presentations broadly concerned with sentiment analysis or opinion mining, in IEEE Xplore, the ACM Portal, Springer Link, Science Direct and the Google search engine. A systematic search strategy was applied to achieve consistently good search results, with search keywords and phrases selected according to our research interest.

This research aims to analyze public sentiments composed in Bengali on any topic and then categorize them into three classes, i.e. positive, negative and neutral sentiment. For this we consider the Facebook post reactions Love, Wow, Sad, Angry and Haha, which represent different states of emotion. Here, Love and Wow reactions are considered positive sentiment, whilst Sad and Angry reactions are considered negative sentiment (a sketch of this labeling rule follows below). Facebook added this reactions feature to allow users to react to a post alongside Like. We have employed different machine learning methods, i.e. Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), K-Neighbors, Linear Discriminant Analysis (LDA), Gaussian Naive Bayes (GaussianNB), Sequential Model (SM), Long Short-term Memory (LSTM) and Bidirectional Long Short-term Memory (BLSTM), to build classification models that can classify the sentiments from users' reactions to posts published in Bengali.
Our work can be divided into the following phases:

• Corpus construction phase: collecting the corpus, filtering it, and labeling the raw corpus according to sentiment score.
• Model generation phase: choosing and constructing a suitable data model of numeric vectors to work with ML classifiers.
• Experiment phase: applying different ML classifiers to the selected data models to classify sentiment into classes. We briefly discuss the performance and resulting accuracy of the employed ML classifiers in this step.

1.4 Organization of the Thesis

The rest of this experimental thesis is organized as follows:

• Chapter 2 presents the literature review and background study of the machine learning tools and technology used in this experiment.
• Chapter 3 describes our proposed workflow for this experiment.
• Chapter 4 presents the details of our experimental work and discusses the results and performance of the employed document embedding systems and machine learning classifiers.
• Chapter 5 presents the conclusion, limitations, and future work of this thesis.

Chapter 2
Background Materials

2.1 Literature Review

Previously published research related to sentiment analysis or opinion mining is discussed in this section.

2.1.1 Non-Bengali Languages

Many studies measure the overall polarity of a document or sentence to determine whether it is a positive or negative review [13, 14, 15]. Turney et al. used a simple unsupervised learning algorithm that finds the average semantic orientation of the phrases in a review containing adjectives or adverbs [13]. In [14], Dave et al. trained a classifier using a self-tagged corpus of reviews from web sites. Pang et al. applied a machine learning approach to text categorization to identify the subjective portions of a document [15]. Phrase-level sentiment analysis is discussed in [16], where contextual sentiment polarity is identified for a large subset of sentiment expressions; in that work the authors explain that the contextual polarity of a phrase may differ from the polarities of the individual words appearing in it. Some popular approaches to sentiment analysis, namely subjective lexicons, N-gram modeling, and machine learning, are discussed in [17]. Using a deep learning model, Ouyang et al. proposed a "word2vec + Convolutional Neural Network (CNN)" framework [18] for classifying the sentiment of movie reviews into five labels: positive, somewhat positive, negative, somewhat negative, and neutral. They achieved 45.4% accuracy.

2.1.2 Bengali Language

Although this ground has been explored extensively for other languages, very few experiments have been conducted for Bengali in recent years. Chowdhury et al. worked on sentiment analysis of Bengali microblog posts using SVM and Maximum Entropy (MaxEnt) classification techniques [19]. They collected 1,300 tweets using the Twitter API and split the dataset into 1,000 tweets for training and 300 for testing, identifying the overall polarity of a sentence as either negative or positive; their best accuracy was 93%, for SVM using unigrams with emoticons as features. Das et al. developed a phrase-level polarity classification system using SVM [20]. They constructed a Bengali news corpus containing 3,435 distinct word forms; the system categorizes an opinion phrase as either positive or negative. Their evaluated results have a precision of 70.04% and a recall of 63.02%.
Amin et al. used the word2vec model for vector representation of Bengali words [21]. They achieved 75.5% accuracy by combining word2vec word co-occurrence scores with word sentiment polarity scores. They collected 16,000 single-line and multi-line Bengali comments from blog posts and tagged them as positive or negative through a survey. Hassan et al. used the deep recurrent model Long Short-term Memory (LSTM), with two loss functions, binary cross-entropy and categorical cross-entropy, for Bengali sentiment analysis [22]. They used 10,000 Bengali and Romanized Bengali text samples divided into three categories: positive, negative, and ambiguous. They achieved 70% accuracy on the Bengali dataset, and 55% when using the combined Bengali and Romanized Bengali dataset.

2.2 Natural Language Processing

Natural Language Processing (NLP) is a branch of artificial intelligence (AI) that deals with the interaction between computers and humans through natural language. The main purpose of NLP is to read, understand, and make sense of human languages and represent them in a valuable manner; it uses machine learning techniques to derive meaning from human language. A few useful applications derived from NLP are:

• Search engines, spell checkers, keyword search, question answering systems
• Speech recognition applications and intelligent personal assistants
• Chat bots for customer support, device control, and ordering goods
• Recommendation systems based on human behavior
• Human sentiment analysis
• Financial risk or fraud detection
• Market prediction based on information retrieved from websites, such as products, prices, locations, and dates
• Spam detection and data filtering applications

2.3 Sentiment Analysis

A popular topic of Natural Language Processing (NLP) is Sentiment Analysis (SA), also recognized as opinion mining. Sentiment analysis identifies and extracts information such as opinion, subjectivity, and polarity from textual data, where polarity can be defined as a unit of measurement for sentiment or emotion. Using sentiment analysis, unstructured data can be extracted and transformed into structured information about public opinion on news, products, services, brands, politics, or any topic people can express opinions about. This information can be valuable for commercial applications such as:

• Market analysis
• Product reviews and feedback
• Movie reviews
• Public relations
• Customer service

2.3.1 Different Levels of Sentiment Analysis

Sentiment analysis can be applied at different levels, e.g., the document, sentence, and entity levels. Each level has its own characteristics regarding opinion mining.

2.3.1.1 Document level

Document-level opinion categorization refers to the sentiment classification of a full document. For example, given an elaborate movie review, a sentiment analysis system identifies whether the review expresses an overall positive or negative sentiment about that movie. This level of analysis assumes that each document expresses opinion on a single entity; it does not evaluate or compare sentiments at the entity level.

2.3.1.2 Sentence level

The task of sentence-level sentiment analysis is to determine whether each sentence expresses a single unit of opinion: positive, negative, or neutral, where neutral generally means no opinion or sentiment at all. Sentence-level classification is very closely related to subjectivity classification, which distinguishes opinionated sentences from sentences that express factual information. Note, however, that subjectivity is not equivalent to opinion, as many objective sentences can still express opinions.

2.3.1.3 Entity level

Document- and sentence-level sentiment analyses cannot identify what exactly a person liked and did not like. Entity-level sentiment analysis looks directly at the opinion itself instead of at language constructs such as documents, paragraphs, sentences, clauses, or phrases. The main idea is that an opinion consists of a sentiment (emotion) and a target (of that sentiment); an opinion without an identified target is of limited use. Recognizing the importance of opinion targets also helps us understand the sentiment classification problem better. For example, consider "I enjoyed the food, though the restaurant environment was not that good." The sentence clearly contains a positive opinion, but we cannot say that the statement is entirely positive: it expresses a positive opinion about the food but a negative opinion about the environment of that restaurant. The purpose of entity-level sentiment analysis is therefore to discover sentiments on entities.
2.4 Corpus Construction

A well-constructed data corpus has a great impact on machine learning approaches, resulting in better performance and higher accuracy. In our experiment we constructed a raw corpus using social media as our primary data source. The steps of corpus construction are discussed below.

2.4.1 Scripting

Scripting refers to the process by which raw data is collected from different websites. We used the Python language to write a script that periodically collected the necessary data from online pages.

2.4.2 Preprocessing

Preprocessing is an important part of corpus construction. It includes filtering the data to reduce noise in the data set, which helps build a solid corpus containing only relevant data. The filtering rules are defined per research requirement; we applied the following rules:

• Removing hyperlinks
• Removing special characters
• Keeping only Bengali phonetic characters
• Checking for duplicate data entries

2.4.3 Data Set Labeling

We applied supervised machine learning approaches, which require labeled data. Labeling a data set refers to the process of mapping each unit of data to some predefined class. This mapping can be one-to-one or one-to-many: one-to-one maps a single unit of data to exactly one class, whereas one-to-many allows a unit of data to belong to multiple classes.
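As an illustration, the filtering rules above can be expressed in a few lines of Python. This is a minimal sketch, not the actual script used in this work: the helper names and regular expressions are our assumptions, relying on the fact that Bengali script occupies the Unicode block U+0980-U+09FF.

import re

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+")
# Everything outside the Bengali Unicode block and whitespace is treated
# as a special or non-Bengali character and removed.
NON_BENGALI_PATTERN = re.compile(r"[^\u0980-\u09FF\s]")

def clean_post(text):
    """Apply the hyperlink, special-character, and Bengali-only rules."""
    text = URL_PATTERN.sub(" ", text)
    text = NON_BENGALI_PATTERN.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(posts):
    """Drop exact duplicate posts while preserving their order."""
    seen = set()
    return [p for p in posts if not (p in seen or seen.add(p))]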
2.5 Data Model Construction

A key question in natural language processing is how texts can be efficiently converted into numeric vectors so that they can be fed into different machine learning techniques for training and classification. As our corpus contains raw textual data, we had to prepare a suitable data model to work with machine learning algorithms. We therefore looked into word embedding systems, which can represent textual data as numeric vectors.

2.5.1 Word Embedding Techniques

A word embedding system represents words as vectors in a predefined vector space, learned by exploiting large amounts of text. This learning technique represents words having the same meaning with similar vectors. All the words are mapped to vectors whose values are learned by a neural network.

2.5.1.1 Word2Vec

One of the most recent key techniques for word embedding is word2vec, developed by Google [23]. word2vec comes in two variants: Continuous Bag of Words (CBOW) and Skip-gram.

• Skip-gram: Skip-gram predicts a window of words given a single word. Consider the sentence "He is a very good boy" and a window size of six. If we take the word "good" as input, Skip-gram should predict "he", "is", "a", "very", and "boy".
• Continuous Bag of Words (CBOW): Conversely, CBOW predicts a word given its surrounding words. The center word vector is generated from the sum of the context word vectors.

Figure 2.1: Architecture of CBOW and Skip-gram [1]

Both methods use artificial neural networks for their learning algorithm. Initially each word is assigned a random N-dimensional vector; during training, using the CBOW or Skip-gram method, the algorithm learns the optimal vector for each word. word2vec takes a text corpus as input and produces a set of vectors as output, namely feature vectors for the words contained in that corpus. word2vec itself is not a deep neural network: it converts text into a numerical form that deep neural networks can understand and process further. It groups the vectors of similar words together, so that similarity can be detected mathematically. Given enough data, usage, and context, word2vec can produce highly accurate estimates of a word's meaning and its similarity or association with other words based on past appearances. These predictions can be used to cluster documents and classify them by topic or label, and such clusters form the basis of sentiment analysis, search engines, document classification, and other diverse fields of scientific research.

2.5.1.2 Sentence2Vec

sentence2vec [24] is a newer sentence embedding model that performs unsupervised learning of sentence embeddings using compositional n-gram features. It presents an efficient unsupervised objective for training distributed representations of sentences, and is an extension of word2vec (CBOW) to sentences. A sentence embedding is defined as the average of the source word embeddings of its constituent words. The model is augmented by learning source embeddings not only for unigrams but also for n-grams of words present in each sentence, and averaging the n-gram embeddings along with the word embeddings.

2.5.1.3 Doc2Vec

The goal of doc2vec [25] is to create a numeric vector representation of a document, regardless of its length. The technique is an adaptation of word2vec. Training doc2vec requires a set of documents: a word vector W is prepared for each word and a document vector D for each document. The doc2vec model by itself is an unsupervised method. doc2vec also comes in two approaches for building the vector model: Paragraph Vector Distributed Memory (PV-DM) and Paragraph Vector Distributed Bag of Words (PV-DBOW).

• PV-DM: PV-DM predicts the center word from the set of context words in a given document together with a document id. The PV-DM approach is reported in [12] to perform consistently better than the PV-DBOW approach below.

Figure 2.2: PV-DM [2]

• PV-DBOW: PV-DBOW determines the context probability for a given paragraph or document, but ignores the context words in the input document.

Figure 2.3: PV-DBOW [2]

Tags can be assigned to each document in doc2vec, and we can easily obtain their representations as vectors. Figure 2.4 shows the doc2vec representation with a tag vector.

Figure 2.4: doc2vec model with tag vector [3]
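To make these embedding models concrete, the following is a minimal sketch using the gensim library; the toy corpus and parameter values are ours, and the parameter names follow the gensim 3.x API, which this work appears to use.

from gensim.models import Word2Vec

# A toy corpus: each document is a list of tokens.
sentences = [["he", "is", "a", "very", "good", "boy"],
             ["she", "is", "a", "very", "good", "girl"]]

# sg=1 selects Skip-gram; sg=0 (the default) selects CBOW.
model = Word2Vec(sentences, size=100, window=5, min_count=1, sg=1, iter=60)

vector = model.wv["good"]                # 100-dimensional vector for one word
similar = model.wv.most_similar("good")  # nearest words by cosine similarity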
2.6 Types of Machine Learning Algorithms

Machine learning algorithms can gain information from data and improve their outcomes with experience, without human mediation. The main categories of machine learning techniques are described below.

2.6.1 Supervised Machine Learning

A supervised learning algorithm operates under direct supervision: a set of labeled data is fed into the system along with strict rules for operating on it. After analyzing the training data set, the supervised learning algorithm produces an inferred function that can be used for mapping new inputs.

2.6.2 Unsupervised Machine Learning

In unsupervised machine learning, the system is trained with unlabeled data; it becomes able to classify new inputs after it learns patterns from the data. This is particularly useful when we do not know what to look for in a data set. Two main methods employed in unsupervised learning are principal component analysis and cluster analysis.

2.6.3 Semi-supervised Machine Learning

Semi-supervised learning uses unlabeled data for training, usually mixing a small amount of labeled data with a large set of unlabeled data. This approach falls between supervised learning (trained with labeled data) and unsupervised learning (trained with unlabeled data).

2.6.4 Reinforcement Machine Learning

Reinforcement learning is a type of machine learning and hence a branch of artificial intelligence. In an interactive fashion, a reinforcement technique continuously learns from the environment; the system learns from its experience of the environment until it has explored the full range of possible states.

2.7 Machine Learning Tools for Classification

We used supervised machine learning techniques in this experiment. Our employed machine learning classifiers can be divided into deep learning based ML classifiers and regular ML classifiers.

2.7.1 Regular Machine Learning Classifiers

Regular machine learning techniques use algorithms to process data, learn from it, and then make decisions based on what has been learned. We experimented with the regular machine learning classifiers LR, LDA, SVM, K-Neighbors, DT, and GaussianNB.

2.7.1.1 Logistic Regression (LR)

Logistic Regression [26], known in statistics as a direct probability model, was developed by the statistician D. R. Cox in 1958. Like all regression analyses, it is a predictive analysis: a binary response can be determined by the binary logistic model from one or more predictor features, which makes LR a probabilistic classification model in the field of machine learning. As an optimization problem, binary-class L2-penalized LR minimizes the cost function

\min_{w,c} \frac{1}{2} w^T w + C \sum_{i=1}^{n} \log\big(\exp(-y_i (X_i^T w + c)) + 1\big)   (2.1)

2.7.1.2 Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis (LDA) is also known as normal discriminant analysis or discriminant function analysis. For supervised classification problems, LDA is commonly used to reduce dimensionality. It is employed to model differences between groups, i.e., to separate two or more classes, and is the technique most often used to project features from a higher-dimensional space into a lower-dimensional one.

The LDA equations can be derived from a simple probabilistic model. For each class k, the conditional data distribution is P(X | y = k). Using the Bayes formula, we obtain the prediction

P(y = k \mid X) = \frac{P(X \mid y = k) P(y = k)}{P(X)} = \frac{P(X \mid y = k) P(y = k)}{\sum_l P(X \mid y = l) P(y = l)}   (2.2)

and we choose the class k that maximizes this conditional probability. For linear and quadratic discriminant analysis, P(X | y) is modeled as a multivariate Gaussian distribution with density

P(X \mid y = k) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\Big(-\frac{1}{2} (X - \mu_k)^t \Sigma_k^{-1} (X - \mu_k)\Big)   (2.3)

where d is the number of features. To use this model as a classifier, we need to estimate the class priors P(y = k), the class means \mu_k, and the covariance matrices from the training data. For LDA, the Gaussians of all classes are assumed to share the same covariance matrix: \Sigma_k = \Sigma for all k. This leads to linear decision surfaces, which can be seen by comparing the log-probability ratios \log[P(y = k \mid X) / P(y = l \mid X)]:

\log \frac{P(y = k \mid X)}{P(y = l \mid X)} = \log \frac{P(X \mid y = k) P(y = k)}{P(X \mid y = l) P(y = l)} = 0
\Leftrightarrow \; (\mu_k - \mu_l)^t \Sigma^{-1} X = \frac{1}{2} \big(\mu_k^t \Sigma^{-1} \mu_k - \mu_l^t \Sigma^{-1} \mu_l\big) - \log \frac{P(y = k)}{P(y = l)}   (2.4)
2.7.1.3 Support Vector Machine (SVM)

The Support Vector Machine (SVM) [27] is a supervised learning technique that can be applied to any classification or regression task. SVM is a nonlinear extension of the generalized portrait algorithm developed by Vladimir Vapnik; the algorithm is grounded in statistical learning theory and the Vapnik-Chervonenkis dimension developed by Vladimir Vapnik and Alexey Chervonenkis.

SVM constructs a hyperplane, or a set of hyperplanes, in a high-dimensional space, which can then be applied to classification problems. A good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class, since in general the larger the margin, the lower the generalization error of the classifier. Given training vectors x_i \in R^p, i = 1, ..., n, in two classes, and a vector y \in \{1, -1\}^n, SVM solves

\min_{w,b,\zeta} \frac{1}{2} w^T w + C \sum_{i=1}^{n} \zeta_i   (2.5)

subject to y_i (w^T \phi(x_i) + b) \geq 1 - \zeta_i and \zeta_i \geq 0, i = 1, ..., n. Its dual is

\min_{\alpha} \frac{1}{2} \alpha^T Q \alpha - e^T \alpha   (2.6)

subject to y^T \alpha = 0 and 0 \leq \alpha_i \leq C, i = 1, ..., n, where e is the vector of all ones, C > 0 is the upper bound, and Q is an n by n positive semi-definite matrix with Q_{ij} \equiv K(x_i, x_j) = \phi(x_i)^T \phi(x_j) the kernel. The function \phi implicitly maps the training vectors into a higher-dimensional space. The decision is made using

\mathrm{sgn}\Big(\sum_{i=1}^{n} y_i \alpha_i K(x_i, x) + \rho\Big)   (2.7)

2.7.1.4 K-Nearest Neighbors

K-Nearest Neighbors (KNN) is one of the most basic yet essential classification algorithms in machine learning. It follows the supervised learning paradigm and has been applied mostly in data mining, pattern recognition, and intrusion detection. It is widely applicable in real-life scenarios because it is non-parametric, meaning that it makes no underlying assumptions about the distribution of the data. KNN works by calculating distances between data points, for which the simple Euclidean distance formula can be used:

d(p, q) = d(q, p) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \cdots + (q_n - p_n)^2} = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}   (2.8)

2.7.1.5 Decision Tree (DT)

The Decision Tree is a non-parametric supervised learning technique, mostly used for data classification and regression. Its goal is to build a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. The deeper the decision tree, the more complex the decision rules and the more fitted the model.

Given training vectors x_i \in R^n, i = 1, ..., l, and a label vector y \in R^l, a decision tree recursively partitions the space such that samples with the same labels are grouped together. Let the data at node m be represented by Q. For each candidate split \theta = (j, t_m), consisting of a feature j and a threshold t_m, partition the data into the subsets

Q_{left}(\theta) = \{(x, y) \mid x_j \leq t_m\}, \qquad Q_{right}(\theta) = Q \setminus Q_{left}(\theta)   (2.9)

Using an impurity function H(), whose choice depends on the task being performed (classification or regression), the impurity at m is

G(Q, \theta) = \frac{n_{left}}{N_m} H(Q_{left}(\theta)) + \frac{n_{right}}{N_m} H(Q_{right}(\theta))   (2.10)

and the split is selected to minimize the impurity:

\theta^* = \mathrm{argmin}_\theta \, G(Q, \theta)   (2.11)

The algorithm recurses on the subsets Q_{left}(\theta^*) and Q_{right}(\theta^*) until the maximum allowable depth is reached, N_m < min_{samples}, or N_m = 1.
2.7.1.6 Gaussian Naive Bayes (GaussianNB)

Gaussian Naive Bayes (GaussianNB) is a special branch of Naive Bayes, mostly used for features with continuous values; all features are assumed to follow a normal distribution. The following relation comes from Bayes' theorem, for a class variable y and a dependent feature vector x_1 through x_n:

P(y \mid x_1, \ldots, x_n) = \frac{P(y) P(x_1, \ldots, x_n \mid y)}{P(x_1, \ldots, x_n)}   (2.12)

Using the naive conditional independence assumption

P(x_i \mid y, x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) = P(x_i \mid y)   (2.13)

this relationship simplifies for all i to

P(y \mid x_1, \ldots, x_n) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}{P(x_1, \ldots, x_n)}   (2.14)

Since P(x_1, \ldots, x_n) is constant given the input, the following classification rule can be used:

P(y \mid x_1, \ldots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y), \qquad \hat{y} = \mathrm{arg\,max}_y \, P(y) \prod_{i=1}^{n} P(x_i \mid y)   (2.15)

In GaussianNB the likelihood of the features is assumed to be Gaussian:

P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}} \exp\Big(-\frac{(x_i - \mu_y)^2}{2\sigma_y^2}\Big)   (2.16)

The parameters \sigma_y and \mu_y are estimated by maximum likelihood.

2.7.2 Deep Learning Classifiers

Deep learning, also known as deep neural networks or deep structured learning, is one of the most popular branches of machine learning in artificial intelligence, based on neural networks. We experimented with the deep learning based classifiers LSTM, BLSTM, and SM.

2.7.2.1 Long Short-term Memory (LSTM)

Long short-term memory networks [28], usually called "LSTMs", are a special kind of recurrent neural network (RNN) architecture. They can remember values over arbitrary intervals. LSTM is designed to avoid the long-term dependency problem of RNNs: it addresses the vanishing/exploding gradient problem and thereby allows the learning of long-term dependencies. Figure 2.5 shows the design of an LSTM cell [4].

Figure 2.5: LSTM block containing input, output and forget gates [4]

2.7.2.2 Bidirectional Long Short-term Memory (BLSTM)

Bidirectional LSTM is an extension of traditional LSTM that can significantly improve model performance on sequence classification problems. BLSTM trains two LSTMs on the input sequence instead of one: the first recurrent layer in the network is duplicated to create two layers side by side, then the input sequence is provided as-is to the first layer and a reversed copy of the input sequence to the second. This gives the network additional context and results in faster and fuller learning on the problem; bidirectional networks tend to be more effective than unidirectional ones on sequence classification problems. Figure 2.6 shows the BLSTM classifier design [5].

Figure 2.6: BLSTM classifier design [5]

2.7.2.3 Sequential Model (SM)

The deep learning Python library Keras [29] focuses on the creation of models as a sequence of layers. The Sequential class provides a very simple model that is literally a linear stack of layers. Using the Sequential class constructor we can easily define all of the layers the model requires, after which the model is ready to use. A Sequential model requires prior knowledge of its input shape, so this information is supplied to the first layer of the model. The learning process is configured with the compile method before training; input data and labels are represented as NumPy arrays, and the fit function is normally used to train the model.
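As a concrete illustration, the six regular classifiers of Section 2.7.1 can be instantiated and compared with scikit-learn in a few lines. This sketch uses synthetic data of our own in place of the document vectors introduced later:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for 100-dimensional document vectors with 3 classes.
rng = np.random.RandomState(0)
X, y = rng.rand(300, 100), rng.randint(0, 3, 300)
X_train, X_test, y_train, y_test = X[:240], X[240:], y[:240], y[240:]

classifiers = {
    "LR": LogisticRegression(),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "K-Neighbors": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(),
    "GaussianNB": GaussianNB(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))  # mean accuracy on the held-out split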
2.8 Performance Evaluation

To evaluate the overall performance of our employed machine learning classifiers, we used k-fold cross validation with evaluation scores such as accuracy, F1-score, precision, and recall. For multi-class classification, we applied the macro average method to calculate precision, recall, and F1-score. Accuracy is used mostly when all classes are equally important, while the F1-score gives a better measure of incorrectly classified cases than the accuracy metric; precision and recall are needed to calculate the F1-score. Using the confusion matrix, the average performance of a model can be determined.

2.8.1 Confusion Matrix

The confusion matrix (CM) [30] records the actual and predicted classifications made by a classifier; the performance of an ML classifier is generally measured using the numeric information in this matrix. The confusion matrix of a two-class classifier is shown in Table 2.1.

Table 2.1: Confusion Matrix for Binary Class Classifier

                        Predicted
                        Negative   Positive
Actual    Negative      tn         fp
          Positive      fn         tp

The entries of the confusion matrix in Table 2.1 have the following meaning:

• tn is the number of negative instances correctly predicted as negative
• fp is the number of negative instances wrongly predicted as positive
• fn is the number of positive instances wrongly predicted as negative
• tp is the number of positive instances correctly predicted as positive

2.8.2 Precision

Precision (P) [31] is the ratio of correctly identified positive cases, calculated as

P = \frac{tp}{tp + fp}   (2.17)

2.8.3 Recall

Recall (R) [31] is the ratio of positive observations that were correctly identified:

R = \frac{tp}{tp + fn}   (2.18)

2.8.4 F1-Score

The F1-score (F1), also known as the balanced F-score or the traditional F-measure [31], is the harmonic mean of precision and recall:

F1 = \frac{2 P R}{P + R}   (2.19)

The best possible F1 score is 1 and the worst is 0.

2.8.5 Accuracy

Accuracy (A) is the ratio of correctly identified observations to the total number of observations, and is therefore the most intuitive performance measure:

A = \frac{tp + tn}{tp + tn + fp + fn}   (2.20)

2.8.6 Macro Average for Precision, Recall and F1-score

The macro average method averages the independently calculated precision and recall values of each class; the F1-score is then determined as the harmonic mean of the macro averaged precision and recall scores. The macro average is suitable when the data set is balanced across classes. Consider classes A, B, and C with precision values P_a, P_b, P_c and recall values R_a, R_b, R_c. The macro average precision (P) is

P = \frac{P_a + P_b + P_c}{3}   (2.21)

and the macro average recall (R) is

R = \frac{R_a + R_b + R_c}{3}   (2.22)
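The metrics above can be computed directly with scikit-learn; the following is a small sketch with toy labels of our own, where average="macro" implements the per-class averaging of eq-2.21 and eq-2.22:

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy ground-truth and predicted labels for a three-class problem.
y_true = ["pos", "neg", "neu", "pos", "neg", "neu", "pos", "neg"]
y_pred = ["pos", "neg", "neu", "neg", "neg", "pos", "pos", "neu"]

accuracy = accuracy_score(y_true, y_pred)            # eq-2.20
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")                 # eq-2.17 to 2.19, macro averaged
print(accuracy, precision, recall, f1)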
2.8.7 k-Fold Cross Validation

Cross validation [32] is a well-known approach to evaluating the performance of an ML classifier model. It is also known as a re-sampling procedure for a model with limited data. In this approach, a portion of the data is kept aside and not used during training; that sample is later used to test the ML classifier.

The k-fold cross validation procedure has a single parameter, k, which is the total number of groups the given data set is to be split into. When a specific value is chosen for k, it may replace k in the name of the procedure; for example, k = 10 gives 10-fold cross validation. The procedure works as follows: one of the k groups is held out as the test data set, and the remaining (k-1) groups are used as the training data set. After the training step, an evaluation score is retained for the ML classifier on the test data set. The procedure then continues by shifting the test group, and the final performance of the ML classifier is evaluated from the scores of all the steps.

We can observe the k-fold procedure with an example. Consider a data set of 6 observations split into 3 groups, i.e., k = 3, which we refer to as 3-fold cross validation.

Data-set = [1, 2, 3, 4, 5, 6]

Splitting this data set into 3 groups:

Group1 = [1, 3]
Group2 = [4, 5]
Group3 = [6, 2]

Using 3-fold cross validation, we have 3 data sets with which to train and test an ML classifier:

Model1: train on Group1 + Group2, test on Group3
Model2: train on Group2 + Group3, test on Group1
Model3: train on Group3 + Group1, test on Group2

The evaluation scores (accuracy, F1, precision, recall) are retained for each model, and those scores are then used to analyze the classifier's performance on the given data set.
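The same procedure is available as scikit-learn's KFold splitter; a minimal sketch follows (with shuffling enabled, the exact groups will differ from the hand-made example above):

import numpy as np
from sklearn.model_selection import KFold

data = np.array([1, 2, 3, 4, 5, 6])
kfold = KFold(n_splits=3, shuffle=True)

for train_idx, test_idx in kfold.split(data):
    # Two groups train the classifier, the held-out group tests it.
    print("train:", data[train_idx], "test:", data[test_idx])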
2.9 Summary

We began this chapter with a discussion of related work on sentiment analysis, covering both Bengali and non-Bengali studies. We then explained the different levels of sentiment analysis, the importance of a well-structured corpus for ML applications, and the preparation of data models to work with ML classifiers. After that we focused on the different types of machine learning and on our employed machine learning classifiers. Finally, we closed the chapter by explaining cross validation, the performance metrics, and their importance in evaluating machine learning classifiers.

Chapter 3
Proposed Method

In this chapter we present the proposed architecture of this research work. It covers the data collection plan, data filtering based on our needs, data labeling, model generation, and the training and testing approaches for the selected ML classifiers.

3.1 Overview of proposed system

Since we aimed to work on sentiment analysis, we had to narrow our research interest within this field to be specific about what we wanted to do, namely Bengali sentiment analysis. We first looked at related work carried out in this area over the past few years and then planned our own work. For sentiment classification, we decided to use supervised machine learning techniques, which require labeled data. Machine learning techniques for classification problems require a good collection of data, needed both to train and to test the performance of ML classifiers. Our first challenge was therefore to find a data source providing Bengali textual data; considering these needs, Facebook was a good candidate for our primary data source. Figure 3.1 presents an overview diagram of our research work. Our proposed approach can be divided into the sub-parts data collection, filtering, labeling, data model generation, training the ML classifiers, and testing and evaluating the performance of the ML classifiers.

3.2 Corpus Creation

Our corpus creation plan can be divided into three steps, data collection, data filtering, and data labeling, described below.

Figure 3.1: Proposed architecture of our research work

3.2.1 Data Collection

Our primary data source was Facebook, from which we collected textual data together with user reaction counts. This was done with the Facebook Graph API, driven by a Python script. A set of neutral Bengali sentences was also collected manually for further experiments.

3.2.2 Data Filtering

Data filtering is required to reduce noise in the data and to filter anything else on demand. We reduced the noise in our collected data set by filtering hyperlinks and special characters and by checking for duplicate data. As we aimed at Bengali sentiment analysis, we filtered out all characters except Bengali phonetics.

3.2.3 Data Labeling

For sentiment classification we considered positive, negative, and neutral polarity. Each document in our data set carries user reaction counts. We developed Algorithm 1 to prepare labeled data (positive and negative) using the reaction counts, and Algorithm 2 to check whether a document is polarized at all. With these we constructed our labeled corpus.

3.3 Data Model Selection

To work with ML classifiers, we needed to select a word embedding system that represents textual data as numeric vectors. We explored the latest word embedding technologies and found word2vec, sentence2vec, and doc2vec useful and interesting. From our textual data set we prepared a doc2vec model and a TF-IDF averaged document vector model based on word2vec for the subsequent steps.

3.4 Choosing Machine Learning Classifiers

We selected the Python based deep learning library Keras [29] and its Sequential Model (SM) API, and then extended the experiment by adding an LSTM cell and a Bidirectional LSTM layer to the SM. BLSTM was chosen for its well-known performance on sequence classification problems. Among the traditional ML classifiers, we chose to train and test the performance of LR, LDA, SVM, K-Neighbors, DT, and GaussianNB, implemented with the Python machine learning library scikit-learn [33].

3.5 Result and Performance Evaluation

To evaluate each ML classifier's performance, we used k-fold cross validation with the performance evaluation scores accuracy and F1-score. Using the accuracy metric alone does not give a reliable picture when the classes contain imbalanced numbers of samples; our final data set, however, contains an equal number of documents for positive, negative, and neutral sentiment, so using the accuracy metric along with the F1-score provided good insight for the result and performance analysis in our experiment.

3.6 Summary

We have discussed our research plan and presented it step by step; a diagram of the overall workflow is shown in Figure 3.1. This chapter described the approaches used in each step of this research work to achieve the best outcome.

Chapter 4
Experimental Analysis

4.1 Experiments

In this chapter we discuss our experimental setup. Section 4.1.1 describes our corpus collection, filtering process, and data set labeling; Section 4.1.2 presents how we constructed the TF-IDF averaged word2vec and doc2vec models from the labeled data set; and Section 4.1.3 describes the parameters used to train and test our models with the different machine learning algorithms.
4.1.1 Corpus Construction

The aim of this study is to analyze public sentiment on any topic in Bengali text and to categorize it by sentiment polarity; we consider positive, negative, and neutral sentiment in this work. To construct a corpus for Bengali sentiment analysis, different sources were considered, among which Facebook post data appeared the most promising for SA, as it represents the most natural form of language. On Facebook posts, people react with the reactions "Like", "Love", "Wow", "Sad", "Angry", and "Haha", each of which represents a different state of emotion, and our aim is to map these emotions to a positive, negative, or neutral class. Users react with "Like" more than with the other reactions because it is the easiest to perform, although it does not represent a specific sentiment polarity that could be classified as positive or negative [34]. The correlation between "Like" and the other reactions can be summarized as:

• a strongly positive correlation with "Love" and "Wow";
• a weakly positive correlation with "Sad" and "Angry".

Although the "Like" reaction is the most common, we treated it as low-effort user data and ignored it when classifying the sentiment polarity of a post. Furthermore, we observed that people use the "Haha" reaction on funny or sarcastic posts more than any other reaction; therefore a post's sentiment cannot be polarized as positive or negative on the basis of "Haha".

We used the Facebook Graph API [35], driven by our own Python script, to collect data regularly from some popular Bengali public Facebook pages. We collected 6,244 Facebook posts, which were then pre-processed to validate them as proper text data. The pre-processing stage filters out hyperlinks, special characters, duplicate posts, and non-Bengali phonetics; this filtering shrank our data set to 4,317 posts. We stored the data in a database with the following columns: page type, page post text, and the reaction counts for "like", "love", "wow", "sad", "angry", and "haha". Figure 4.1 demonstrates the overall flow of data collection and corpus preparation from Facebook posts.

Figure 4.1: Flow of data collection and corpus preparation from Facebook post.
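The collection step described above could look roughly like the following sketch. This is not the script used in this work: the API version, the field syntax for the reaction summaries, and the placeholders are our assumptions, and a valid access token is required.

import requests

ACCESS_TOKEN = "..."  # a valid Graph API access token (placeholder)
PAGE_ID = "..."       # id of a public Bengali Facebook page (placeholder)

# One aliased summary field per reaction type, e.g. reactions.type(LOVE)....as(love)
fields = ",".join(
    "reactions.type({}).limit(0).summary(total_count).as({})".format(r, r.lower())
    for r in ["LIKE", "LOVE", "WOW", "SAD", "ANGRY", "HAHA"])

resp = requests.get(
    "https://graph.facebook.com/v3.2/{}/posts".format(PAGE_ID),
    params={"fields": "message," + fields, "access_token": ACCESS_TOKEN})

for post in resp.json().get("data", []):
    text = post.get("message", "")
    love = post["love"]["summary"]["total_count"]  # likewise wow, sad, angry, haha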
To prepare positive and negative post documents from this database, we had to collapse the multiple reactions into either positive or negative. Here the "Love" and "Wow" reactions represent positive polarity, while the "Sad" and "Angry" reactions represent negative polarity: the combined count of "Love" and "Wow" is the total number of positive reactions, and the combined count of "Sad" and "Angry" is the total number of negative reactions. By comparing the total numbers of positive and negative reactions of a post, we categorized it accordingly. This process is summarized in Algorithm 1. A post's sentiment is not categorized if:

• the total number of "Haha" reactions is greater than the number of positive or negative reactions; or
• the total numbers of positive and negative reactions are equal, or both are zero.

The procedure for determining whether a post can be categorized is shown in Algorithm 2. After this procedure we were left with 3,193 posts, with the following majority reaction counts:

• Love: 1,162
• Wow: 529
• Sad: 1,007
• Angry: 495

So we finally had 1,691 posts with positive polarity and 1,502 posts with negative polarity. To keep the polarities balanced, we stored 1,500 posts per sentiment polarity (positive and negative).

We manually constructed a set of neutral documents containing 3,500 Bengali sentences. Socian Ltd. [36] provided a public corpus of 4,000 Bengali sentences labeled by sentiment polarity, either positive or negative, with an equal distribution of the labels; they collected this corpus from different social media platforms, newspaper sites, and blogs. We included this data set in our prepared corpus. In this way we finally assembled a corpus of 10,500 posts, with 3,500 documents for each sentiment polarity: positive, negative, and neutral.

4.1.2 Model Generation

In this section we describe the procedure used to generate numeric vector models from our textual data set. For this purpose we chose to work with TF-IDF averaged word2vec and doc2vec models.

Algorithm 1 Preparing Positive/Negative Documents from Facebook Page Posts
1: procedure ParsePosts(posts)
2:   for each post do
3:     positive ← count(Love) + count(Wow)
4:     negative ← count(Sad) + count(Angry)
5:     if Categorizable() = false then
6:       skip to the next post
7:     else if positive > negative then
8:       save post text into positive.txt
9:     else
10:      save post text into negative.txt
11:    end if
12:  end for
13: end procedure

Algorithm 2 Checking whether a post is categorizable or not
1: procedure Categorizable()
2:   if count(Haha) > positive or negative then
3:     return false
4:   else if positive = negative then
5:     return false
6:   else if positive = negative = 0 then
7:     return false
8:   else
9:     return true
10:  end if
11: end procedure
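A direct Python rendering of Algorithms 1 and 2 might look as follows; the dictionary keys are our assumption, and we read "greater than positive or negative" as greater than either count:

def categorizable(positive, negative, haha):
    """Algorithm 2: decide whether a post has a clear sentiment polarity."""
    if haha > positive or haha > negative:  # sarcasm signal dominates
        return False
    if positive == negative:                # covers the tie and the both-zero cases
        return False
    return True

def label_post(reactions):
    """Algorithm 1: map raw reaction counts to a polarity label, or None."""
    positive = reactions["love"] + reactions["wow"]
    negative = reactions["sad"] + reactions["angry"]
    if not categorizable(positive, negative, reactions["haha"]):
        return None
    return "positive" if positive > negative else "negative"

print(label_post({"love": 12, "wow": 3, "sad": 2, "angry": 1, "haha": 4}))  # positive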
4.1.2.1 TF-IDF Averaged Word2vec Model

A word2vec model provides numeric vectors for the words in a document: each word is represented by a vector, and similar words have similar vector values. We first trained a gensim word2vec [23] model on our prepared corpus, with the following parameters:

• size: 100; dimensionality of the word vectors
• window: 25; maximum distance between the focus word and a predicted word within a sentence
• min_count: 1; words with a lower frequency are ignored
• workers: 20; worker threads used for training
• alpha: 0.03; initial learning rate
• min_alpha: 0.02; minimum learning rate as training progresses
• training iterations: 60

After training, the word2vec model contains a vocabulary of 23,574 unique words; we used min_count = 1 to keep all the words in the vocabulary. We then needed a way to obtain a document vector from the word vectors. The following approaches were considered:

• Average of the word2vec vectors: a simple approach that represents a document by the plain average of all its word vectors.
• TF-IDF weighted average of the word2vec vectors: a better approach in which each word vector is first multiplied by the word's TF-IDF score, and the average of the weighted vectors represents the document.

We chose to work with the TF-IDF approach, and hence refer to the resulting document vector model as the TF-IDF averaged word2vec model. TF-IDF stands for "Term Frequency - Inverse Document Frequency". TF is the frequency of a word within a document, i.e., the ratio of the number of appearances of the word in a document to the total number of words in that document. IDF is used to weight up rare words across all documents in a corpus. For a term t in a document d from a document set, the TF-IDF score is

TFIDF(t, d) = TF(t, d) \cdot IDF(t)   (4.1)

where the IDF is calculated as

IDF(t) = \log\frac{n}{DF(t)} + 1   (4.2)

Here n is the number of documents in the data set and DF(t) is the document frequency of t, i.e., the number of documents in the set that contain the term t. The effect of adding "1" to the IDF is that terms with zero IDF, i.e., terms that occur in every document of the training set, are not entirely ignored.

Using TF-IDF weights, the document vector of a document D with word vectors W_1, W_2, ..., W_n is

D_{vec} = \frac{W_1 \cdot TFIDF(W_1, D) + W_2 \cdot TFIDF(W_2, D) + \ldots + W_n \cdot TFIDF(W_n, D)}{n}   (4.3)

Some example sentences from our data set are shown below with their corresponding TF-IDF averaged document vectors from the word2vec model.

• Positive Bengali sentence:
ি েভর উ াবন যুি খােত এেনেছ দা ণ পিরবতন
Corresponding TF-IDF averaged document vector (100 dimensions):
[ 0.12677471 0.19230783 0.49904703 ........ -0.28776385 0.31691263 0.02206575]

• Negative Bengali sentence:
সবাই এখন মুখুশ ধারী আসেল কও মানবতার জন কাজ কের না
Corresponding TF-IDF averaged document vector (100 dimensions):
[ 0.01814755 0.11038729 -0.71053634 ........ -0.4587077 -0.05324249 1.46053586]

• Neutral Bengali sentence:
আিম কেলজ থেক াতক পাশ করার পের বািড়েত িফের যাই এবং িতন বছর আমার িপতামাতার সে বসবাস কির
Corresponding TF-IDF averaged document vector (100 dimensions):
[ 0.47712728 1.72919988 0.46817737 ........ 0.01164649 0.28329555 -0.46544328]
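Eq-4.3 can be implemented in a few lines on top of a trained gensim word2vec model. This is a sketch under the assumption that a mapping from each token to its TF-IDF score (built, for example, with scikit-learn's TfidfVectorizer) is already available:

import numpy as np

def document_vector(tokens, w2v_model, tfidf_scores):
    """TF-IDF weighted average of the word vectors in one document (eq-4.3)."""
    weighted = [w2v_model.wv[t] * tfidf_scores[t]
                for t in tokens if t in w2v_model.wv and t in tfidf_scores]
    if not weighted:
        return np.zeros(w2v_model.vector_size)  # no known words in this document
    return np.mean(weighted, axis=0)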
4.1.2.2 Doc2vec Model

The goal of doc2vec [25] is to create a numerical representation of any document: each document or sentence is represented by a vector, and similar documents have similar vector values. We used our corpus to train the doc2vec model, feeding all the labeled sentences from the corpus into it to build its vocabulary. Each labeled sentence consists of a list of Bengali words plus a label, "Positive", "Negative", or "Neutral", according to its sentiment polarity. A labeled sentence used to train doc2vec looks like:

[['word1', 'word2', 'word3', ..., 'last word'], ['label']]

For each document we used the polarity label with a unique identifier while training the doc2vec model, so that we could later look up any document's numeric vector representation; for the unique identifier we used the document index. A positive labeled sentence is thus represented as:

[['word1', 'word2', 'word3', ..., 'last word'], ['POS_unique_index']]

a negative labeled sentence as:

[['word1', 'word2', 'word3', ..., 'last word'], ['NEG_unique_index']]

and a neutral labeled sentence as:

[['word1', 'word2', 'word3', ..., 'last word'], ['NEU_unique_index']]

Parameters used to train the gensim doc2vec model:

• vector_size: 100; dimension of the feature vectors
• dbow_words: 1; also train word vectors
• dm: 0; training algorithm PV-DBOW
• epochs: 60; training epochs over the data set
• window: 25; maximum distance between the focus word and a predicted word within a document
• min_count: 2; words with a lower frequency are ignored
• workers: 20; worker threads used for training
• alpha: 0.03; initial learning rate
• min_alpha: 0.02; minimum learning rate as training progresses

After training, the doc2vec model contains 10,500 document vectors and a vocabulary of 10,170 unique words; using min_count = 2 eliminates unimportant words during training. Below are some example sentences from our data set and their corresponding document vectors from doc2vec.

• Positive Bengali sentence:
িতিন লখক িহেসেব পুেরা দেশ িবখ াত হেয় ওেঠন
Corresponding document vector (100 dimensions):
[ 0.13282606 -0.19305475 0.3983899 ........ -0.09714048 -0.32621175 -0.2620505 ]

• Negative Bengali sentence:
রািহ া সমস া বাংলােদেশর িনরাপ ার জন ভয়াবহ মিক
Corresponding document vector (100 dimensions):
[-0.4323732 0.54408896 0.46205854 ........ -0.7783456 0.6751657 -1.1729606 ]

• Neutral Bengali sentence:
আপিন িক িনি ত যআপিন আপনার চাকির ছেড় িদেত চান
Corresponding document vector (100 dimensions):
[ 0.12949668 -0.12175082 -0.27730727 ........ -0.5717205 0.4326126 -0.91951835 ]
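The training setup above corresponds to roughly the following gensim sketch; the labeled_corpus list of (token list, tag) pairs is assumed to have been built from our labeled data as shown earlier:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each entry: a token list plus one tag such as "POS_0", "NEG_17", "NEU_42".
tagged_docs = [TaggedDocument(words=tokens, tags=[tag])
               for tokens, tag in labeled_corpus]  # labeled_corpus is assumed

model = Doc2Vec(vector_size=100, dbow_words=1, dm=0, epochs=60, window=25,
                min_count=2, workers=20, alpha=0.03, min_alpha=0.02)
model.build_vocab(tagged_docs)
model.train(tagged_docs, total_examples=model.corpus_count, epochs=model.epochs)

stored = model.docvecs["POS_0"]                    # vector of a training document
inferred = model.infer_vector(["some", "tokens"])  # vector for unseen text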
4.1.3 Classifier Design

For sentiment classification with our prepared numeric vector models, we used the machine learning approaches LR, SVM, DT, K-Neighbors, LDA, GaussianNB, SM, LSTM, and BLSTM. The Keras [29] API was used to train and test the SM, LSTM, and BLSTM deep learning classifiers; the other classifiers, LR, LDA, SVM, K-Neighbors, DT, and GaussianNB, were implemented with the scikit-learn [33] API. The chosen parameters for each classifier are described below.

• LSTM classifier:
  – Input: constructed from three layers -
    ∗ First layer: LSTM cell with 64 hidden nodes and activation "relu".
    ∗ Middle layer: Dropout with rate 0.25.
    ∗ Final layer: 3-unit Dense layer with activation function "softmax" and kernel_initializer="glorot_uniform".
  – Compilation: optimizer="adam", loss="categorical_crossentropy", and metrics=["accuracy"]

• BLSTM classifier:
  – Input: constructed from three layers -
    ∗ First layer: Bidirectional layer wrapping an LSTM cell with 64 hidden nodes and activation "relu".
    ∗ Middle layer: Dropout with rate 0.25.
    ∗ Final layer: 3-unit Dense layer with activation function "softmax" and kernel_initializer="glorot_uniform".
  – Compilation: optimizer="adam", loss="categorical_crossentropy", and metrics=["accuracy"]

• SM classifier:
  – Input: constructed from three layers -
    ∗ First layer: Dense layer with batch_size=64, input_dim=100, and activation "relu".
    ∗ Middle layer: Dropout with rate 0.25.
    ∗ Final layer: 3-unit Dense layer with activation function "sigmoid".
  – Compilation: optimizer="rmsprop", loss="categorical_crossentropy", and metrics=["accuracy"]

• LR classifier:
  – penalty="l2"
  – Tolerance for the stopping criterion, tol=0.0001
  – Maximum iterations, max_iter=100
  – Inverse of the regularization strength, C=1.0

• LDA classifier:
  – Solver, solver="svd"; singular value decomposition (svd) is generally recommended for data sets with a large number of features.
  – Rank estimation threshold in the SVD solver, tol=0.0001
  – Shrinkage parameter, shrinkage=None

• SVM classifier:
  – Penalty parameter, C=1.0
  – Tolerance for the stopping criterion, tol=0.0001
  – Kernel type, kernel="rbf"
  – Size of the kernel cache (in MB), cache_size=200
  – Maximum iterations, max_iter=1000

• K-Neighbors classifier:
  – Number of neighbors, n_neighbors=5
  – Weight function for prediction, weights="uniform"
  – Algorithm for computing the nearest neighbors, algorithm="auto"; the most appropriate algorithm is determined automatically based on the values passed to the fit method.
  – Distance metric used for the tree, metric="minkowski"
  – Minkowski metric power parameter, p=2; p=2 is exactly equivalent to using the Euclidean distance.

• DT classifier:
  – Split quality measuring function, criterion="gini"; the supported Gini impurity criterion.
  – Strategy used to choose the split at each node, splitter="best"
  – random_state=None

• GaussianNB classifier:
  – Prior probabilities of the classes, priors=None
  – var_smoothing=1e-09; for calculation stability, a portion of the largest variance of all features is added to the variances.
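As an illustration, the BLSTM configuration above translates to roughly the following Keras sketch. One detail is our assumption rather than a statement of this thesis: the 100-dimensional document vectors are reshaped to (samples, 1, 100) so that the LSTM receives a single 100-feature timestep.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Bidirectional, Dense, Dropout

model = Sequential()
model.add(Bidirectional(LSTM(64, activation="relu"), input_shape=(1, 100)))
model.add(Dropout(0.25))
model.add(Dense(3, activation="softmax", kernel_initializer="glorot_uniform"))
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# X: (n, 100) document vectors; y_onehot: (n, 3) one-hot polarity labels.
X = np.random.rand(32, 100)
y_onehot = np.eye(3)[np.random.randint(0, 3, 32)]
model.fit(X.reshape(-1, 1, 100), y_onehot, batch_size=64, epochs=5)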
4.1.4 Summary

In this section we explained the different components of our experimental setup. We discussed choosing the social medium Facebook as the primary data source for constructing the corpus, and the generation of numeric document vector models using TF-IDF averaged word2vec and doc2vec. The parameter choices for the employed machine learning classifiers were also covered.

4.2 Result and Analysis

To evaluate the effectiveness of our employed ML classifiers, we applied the k-fold cross validation technique and retained the performance evaluation scores accuracy, F1-score, precision, and recall from each cross validation step. To train and test the ML classifiers we used the document vectors obtained from doc2vec and the TF-IDF averaged document vectors from word2vec. In this work we use the term performance to refer to the computational effectiveness of the classification.

4.2.1 k-Fold Cross Validation

We applied 10-fold cross validation to assess the performance of the selected ML classifiers with our prepared doc2vec and word2vec vector models. Both vector models contain 10,500 document vectors representing the full data set. We followed the steps below to apply k-fold cross validation to the ML classifiers for both vector models:

• We loaded the document vectors into a Python numpy [37] array, named the data array. This array has 10,500 items, each a vector of dimension 100. The corresponding label of each vector is loaded into another numpy array, named the label array.
• We applied the same shuffled index to both the data and label arrays using numpy random permutation [38], so that the samples are randomly distributed over the data set.
• Using the sklearn cross validation score API [39], we supplied the ML classifier, the data and label arrays, k-fold = 10, and the performance evaluation scoring parameter we wished to retain. The API then provides the evaluation scores of the 10-fold cross validation, which we stored for our result analysis.

Using the above steps we retained the performance evaluation scores for our employed ML classifiers. The following subsections present the 10-fold cross validation results achieved for the word2vec and doc2vec models; all results are calculated in percentages.
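The three steps above condense to a few lines of Python; the following is a minimal sketch with synthetic stand-ins for the data and label arrays:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-ins for the 10,500 x 100 data array and its label array.
data = np.random.rand(10500, 100)
labels = np.random.choice(["positive", "negative", "neutral"], size=10500)

perm = np.random.permutation(len(data))  # same shuffled index for both arrays
data, labels = data[perm], labels[perm]

scores = cross_val_score(SVC(kernel="rbf"), data, labels,
                         cv=10, scoring="accuracy")  # one score per fold
print(scores, scores.mean())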
4.2.1.1 10-Fold Cross Validation - TF-IDF Averaged Word2vec

Table 4.1 presents the 10-fold accuracy scores, with a mean column, for the TF-IDF averaged document vectors from the word2vec model.

Table 4.1: 10-fold accuracy scores for TF-IDF averaged document vectors (Word2vec)

Classifier    K1     K2     K3     K4     K5     K6     K7     K8     K9     K10    Mean
BLSTM         77.81  78.76  76.95  76.95  73.71  79.43  78.19  76.57  76.48  78.57  77.34
LSTM          76.29  78.48  76.38  74.38  73.14  77.24  78.95  77.71  76.19  78.38  76.71
SM            75.9   78.95  74.1   72.38  73.81  76.29  77.62  75.33  75.33  76.67  75.64
SVM           74.19  75.43  70.76  71.81  72.29  72.67  76.38  73.24  71.71  72.48  73.1
LR            71.62  73.24  71.9   69.24  70.19  71.43  73.33  72.29  70.95  73.33  71.75
LDA           71.05  73.14  72.57  69.33  69.71  71.33  72.67  72.48  70.76  73.62  71.67
K-Neighbors   68.38  69.24  67.52  68.1   67.52  70.29  70.67  67.71  67.9   68.95  68.63
GaussianNB    58.38  62.19  64.95  59.24  61.62  63.33  63.33  62.1   60.1   60.57  61.58
DT            59.52  57.52  60.1   56     55.43  58.57  60.48  56.95  57.62  56.76  57.9

Table 4.2 presents the 10-fold mean values of all performance evaluation metrics (accuracy, F1-score, precision, recall) for the TF-IDF averaged document vectors from the word2vec model. From this table we can observe that BLSTM achieved the highest accuracy, 77.34%; on the other hand, K-Neighbors, GaussianNB, and DT yielded the lowest accuracies, all below 70%.

Table 4.2: 10-fold mean performance scores for TF-IDF averaged document vectors (Word2vec)

Classifier    Accuracy  F1-Score  Precision  Recall
BLSTM         77.34     77.19     77.05      77.02
LSTM          76.71     76.19     77         76.51
SM            75.64     74.93     75.14      74.85
SVM           73.1      72.99     74.13      73.1
LR            71.75     71.79     71.93      71.75
LDA           71.67     71.8      72.19      71.67
K-Neighbors   68.63     68.48     69.09      68.63
GaussianNB    61.58     61.42     64.07      61.58
DT            57.9      57.76     57.42      57.51

4.2.1.2 10-Fold Cross Validation - Doc2vec

Table 4.3 presents the 10-fold accuracy scores, with a mean column, for the doc2vec document vectors.

Table 4.3: 10-fold accuracy scores for doc2vec document vectors

Classifier    K1     K2     K3     K4     K5     K6     K7     K8     K9     K10    Mean
BLSTM         74.57  77.05  73.71  74     76.67  74.38  74.29  77.05  78.19  75.71  75.56
LSTM          73.14  75.9   74.48  74.76  75.43  73.62  74.57  76.48  77.14  74.19  74.97
SM            73.14  75.24  74.38  72     73.9   71.14  72.38  74.76  76.1   72.48  73.55
SVM           72.38  72.48  72.1   71.52  71.81  72.29  71.24  73.9   75.43  73.52  72.67
LDA           70.38  71.05  71.14  70.48  73.05  69.9   70     72.1   71.52  70     70.96
LR            70.1   70.67  70.95  70.57  72.86  69.33  69.71  73.05  71.62  69.71  70.86
GaussianNB    64.57  64.1   66.19  65.05  65.9   66.1   65.33  66.57  66.29  63.9   65.4
K-Neighbors   58.29  57.62  58.19  58.57  58.57  58.67  57.62  59.9   58.76  57.9   58.41
DT            53.62  49.71  51.43  49.24  50     48.38  51.33  51.52  52.76  52.19  51.02

Table 4.4 presents the 10-fold mean values of all performance evaluation metrics (accuracy, F1-score, precision, recall) for the doc2vec document vectors. Here BLSTM achieved the highest accuracy, 75.56%, and DT the lowest, 51.02%.

Table 4.4: 10-fold mean performance scores for doc2vec document vectors

Classifier    Accuracy  F1-Score  Precision  Recall
BLSTM         75.56     75.77     75.42      75.74
LSTM          74.97     74.66     74.61      75.19
SM            73.55     73.61     73.58      73.7
SVM           72.67     72.44     72.76      72.67
LDA           70.96     70.77     70.79      70.96
LR            70.86     70.7      70.7       70.86
GaussianNB    65.4      64.9      65.5       65.4
K-Neighbors   58.41     56.19     64.59      58.41
DT            51.02     51.05     50.8       50.9

4.2.2 Doc2vec vs TF-IDF Averaged Word2vec

The document vectorization technique of doc2vec is an adaptation of word2vec. doc2vec first builds a vocabulary by extracting the unique words from the provided document data set, so words are unique across all documents. For generating the vector model, doc2vec offers two approaches, PV-DM and PV-DBOW; we built our doc2vec model using PV-DBOW. During training, PV-DBOW considers neither the ordering of the words in a document nor the term frequency of a word: in simple terms, it determines a context probability for a given paragraph or document by sampling lists of words from it. Since term ordering and term rarity are not taken into account when creating a document vector with doc2vec, common words appear more often while the words carrying more information about the document are less frequent, and the resulting document vector is therefore less informative for topic classification.

TF-IDF vectorization overcomes these drawbacks: it considers the term frequency of a word and balances it against the word's inverse document frequency. Words that are common across all documents thus receive low scores, while rare words representing the topic of a document receive higher scores and have more impact on the output document vector.

According to [40], the performance of doc2vec is not remarkable for short documents; the doc2vec model is better suited to very large corpora, whereas TF-IDF is preferable for short text fragments and small or medium sized corpora. As our corpus is small and consists mostly of short documents, TF-IDF appears to be the most suitable solution, and our result comparison likewise indicates that TF-IDF averaged word2vec provides higher classification accuracy than doc2vec.

Table 4.5 presents a comparison of the 10-fold mean accuracy scores achieved with the TF-IDF averaged document vectors from word2vec and with the document vectors from doc2vec.
|
<s>and document vectors from doc2vec.From this table our observation is, almost all ML classifiers performed slightly betterwith TF-IDF averaged document vectors than doc2vec document vectors. K-Neighborsand DT perform much better with TF-IDF averaged word2vec. Only GaussianNB hasachieved better result with doc2vec model.Table 4.5: Comparison of 10-fold mean accuracy scores gained for TF-IDF averagedword2vec and doc2vec modelsClassifier Word2vec Accuracy Doc2vec AccuracyBLSTM 77.34 75.56LSTM 76.71 74.97SM 75.64 73.55SVM 73.1 72.67LR 71.75 70.86LDA 71.67 70.96K-Neighbors 68.63 58.41GaussianNB 61.58 65.4DT 57.9 51.024.2.3 DiscussionIn our study, we have applied 10-fold cross validation with most common machine learn-ing performance matrices i.e. accuracy, precision, recall, F1 score for the evaluation ofengaged ML classifiers. Obtained 10-fold mean performance scores for word2vec is rep-resented in TABLE 4.2 and doc2vec is represented in TABLE 4.4. Results are sorteddecreasingly based on the classification accuracy achieved by the employed classifiers.According to the data available in TABLE 4.2 for TF-IDF averaged document vectorsusing word2vec, BLSTM has the best performance as it has gained an accuracy of 77.34%whilst DT has attained lowest accuracy which is 57.9% for the corpus we have built inthis study. And in TABLE 4.4 for doc2vec document vectors, BLSTM has achievedhighest accuracy of 75.56% whilst DT performs very poor with an accuracy of 51.02%.Classifiers result accuracy comparison for TF-IDF averaged word2vec and doc2vec is rep-resented in TABLE 4.5. This table shows that almost all ML classifiers perform slightlybetter with document vectors constructed using TF-IDF score from word2vec model.Only GaussianNB has achieved better result with doc2vec document vectors comparingits result with TF-IDF averaged word2vec.The word2vec algorithm makes distributed semantic representation of words. This ideacan be extended for sentences and documents. Instead of learning feature representationsfor words, system can learn it for sentences or documents. sentence2vec representsmathematical average of all the word vector representations in a sentence. doc2vecextends the idea of sentence2vec or rather word2vec because sentences can also beconsidered as documents. For our experiment we required document vectors as ourcorpus contains documents as a single unit of labeled data and we aimed to classify it.doc2vec model gives document vector for each documents we provided while trainingthe model. word2vec model only provides word vectors from a document. To makedocument vector using word2vec model, we applied TF-IDF averaged document vectorwhich is mostly used in document classification and data analysis problems using wordembedding technologies.We observed that performance of deep learning approaches are better than regular MLclassifiers using document vectors obtained from both word2vec and doc2vec model.BLSTM, LSTM and SM classifier are deep learning based approaches we used in thisexperiment. BLSTM used Sequential model with bidirectional LSTM cell which in-creases performance of classifier. In sequence classification problem, using the inputsequence in first layer and a reverse copy in the second layer provide more context tothe classifier network. 
Table 4.5 compares the 10-fold mean accuracy scores obtained from TF-IDF averaged document vectors built with word2vec against the document vectors produced by doc2vec. From this table we observe that almost all ML classifiers performed slightly better with TF-IDF averaged document vectors than with doc2vec document vectors; K-Neighbors and DT performed much better with TF-IDF averaged word2vec, and only GaussianNB achieved a better result with the doc2vec model.

Table 4.5: Comparison of 10-fold mean accuracy scores for the TF-IDF averaged word2vec and doc2vec models

Classifier    Word2vec Accuracy  Doc2vec Accuracy
BLSTM         77.34              75.56
LSTM          76.71              74.97
SM            75.64              73.55
SVM           73.1               72.67
LR            71.75              70.86
LDA           71.67              70.96
K-Neighbors   68.63              58.41
GaussianNB    61.58              65.4
DT            57.9               51.02

4.2.3 Discussion

In our study, we applied 10-fold cross validation with the most common machine learning performance metrics, i.e. accuracy, precision, recall, and F1 score, to evaluate the engaged ML classifiers. The 10-fold mean performance scores are presented in Table 4.2 for word2vec and in Table 4.4 for doc2vec, sorted in decreasing order of the classification accuracy achieved by the employed classifiers. According to Table 4.2, for TF-IDF averaged document vectors using word2vec, BLSTM performed best with an accuracy of 77.34%, while DT attained the lowest accuracy, 57.9%, on the corpus built in this study. In Table 4.4, for doc2vec document vectors, BLSTM again achieved the highest accuracy, 75.56%, while DT performed very poorly with an accuracy of 51.02%. The accuracy comparison between TF-IDF averaged word2vec and doc2vec in Table 4.5 shows that almost all ML classifiers perform slightly better with document vectors constructed using TF-IDF scores from the word2vec model; only GaussianNB achieved a better result with doc2vec document vectors.

The word2vec algorithm builds distributed semantic representations of words, and the idea extends naturally to sentences and documents: instead of learning feature representations for words, a system can learn them for sentences or documents. Sentence2vec represents a sentence as the mathematical average of the vector representations of its words, and doc2vec extends the idea of sentence2vec (or rather word2vec), since sentences can also be considered documents. Our experiment required document vectors, because our corpus treats each document as a single unit of labeled data that we aimed to classify. A doc2vec model yields a document vector for each document provided during training, whereas a word2vec model only provides word vectors; to build document vectors from the word2vec model, we applied TF-IDF averaged document vectors, an approach widely used in document classification and data analysis with word embedding technologies.

We observed that the deep learning approaches outperform the regular ML classifiers with document vectors obtained from both the word2vec and the doc2vec models. BLSTM, LSTM, and SM are the deep learning based approaches used in this experiment. BLSTM uses a Sequential model with a bidirectional LSTM cell, which increases classifier performance: in a sequence classification problem, feeding the input sequence to the first layer and a reversed copy to a second layer provides more context to the classifier network, which improves the learning process.
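As a rough illustration, here is a minimal sketch, assuming Keras (which this thesis cites [29]), of a Sequential model with a bidirectional LSTM layer of the kind just described; the layer sizes and the placeholder data are illustrative, not our exact configuration.

```python
# A minimal sketch of a Sequential model with a bidirectional LSTM cell.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

timesteps, features = 1, 100   # one document vector treated as a sequence
model = Sequential([
    Bidirectional(LSTM(64), input_shape=(timesteps, features)),
    Dense(1, activation="sigmoid"),  # binary sentiment polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(32, timesteps, features)  # placeholder document vectors
y = np.random.randint(0, 2, size=(32,))      # placeholder polarity labels
model.fit(X, y, epochs=2, verbose=0)
```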
Among the other traditional machine learning approaches, SVM, LR, and LDA perform better than K-Neighbors, DT, and GaussianNB with both the doc2vec and the word2vec models. The Naive Bayes (NB) classifier works fine with numerical and textual data, but it has a major limitation: when features are highly correlated, it performs very poorly. It also fails to consider word occurrence frequency in the feature vector in text classification problems. In our experiment, GaussianNB achieved an accuracy of 61.58% with the TF-IDF averaged document vectors from word2vec, but it achieved a better result, 65.4%, with doc2vec. The Nearest Neighbor classifier is known to be effective and non-parametric in nature, but it takes a very long time to classify. SVM offers the advantage that it tends to be fairly robust to overfitting and can scale up to considerable dimensionality. SVM achieved good results among the traditional ML classifiers with both the doc2vec and word2vec models; with TF-IDF averaged document vectors, it reached 73.1% accuracy, which is quite good.

With suitable pre-processing, K-Neighbors can achieve very good results, and its performance scales up well with the size of the dataset, which is not the case for SVM. SVM uses more parameters than the LR and DT classifiers and can achieve the highest classification precision most of the time, but it is very time consuming, since those extra parameters require more computation time. LDA is popular for multi-class classification because it provides low-dimensional views of the data; it should be applied when the training sample is small, to avoid high-variance problems. Compared to SVM and LDA, LR is computationally efficient.

Deep learning classifiers outperform traditional machine learning as the scale of the data increases, but with a small dataset, deep learning algorithms do not perform very well, because they need a large amount of data to learn from in the context of classification. Deep learning experiments are also better suited to high-end machines than traditional machine learning approaches are.

4.2.4 Summary

In this section we presented our experimental result analysis using 10-fold cross validation. The performance evaluation parameters - accuracy, F1-score, precision, and recall - were retained from each k-fold validation step and displayed in tabular form for the relevant vector models. We compared the doc2vec and TF-IDF averaged word2vec models, and analyzed the performance of the different ML classifiers with document vectors obtained from each.
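As a concrete companion to the evaluation just summarized, here is a minimal sketch using scikit-learn's cross_val_score, which this thesis cites [39]; the SVC classifier choice and the placeholder data are illustrative assumptions.

```python
# A minimal sketch of 10-fold cross validation over the four metrics used
# above (macro-averaged precision, recall, and F1, plus accuracy).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(200, 100)        # placeholder document vectors
y = np.random.randint(0, 2, 200)    # placeholder polarity labels

for metric in ["accuracy", "precision_macro", "recall_macro", "f1_macro"]:
    scores = cross_val_score(SVC(), X, y, cv=10, scoring=metric)
    print(metric, scores.round(4), "mean:", scores.mean().round(4))
```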
Chapter 5

Conclusion and Future Work

5.1 Conclusion

Word embedding technologies perform better with large datasets in the context of natural language processing. One of the main goals of this research was to identify a relevant data source and retrieve categorical data from it that can be applied to supervised machine learning and document embedding problems. Social media platforms are a great source of data if proper filtering is applied to extract the valuable information from them. In our experiment, the classification accuracies of the different classifiers show that word embedding technologies have real potential when implemented properly.

5.2 Limitations

Classifying human sentiment has many limitations. First we discuss some limitations that depend mainly on human interaction and belief, which also vary with time and place.

• Person's perspective: Since we are talking about opinions, it is human nature to hold different perspectives on any subject. It is difficult to mine and categorize a large data sample when attempting to analyze the opinion or sentiment in it.

• Time and place: An opinion may carry a different meaning and sentiment depending on time and place. A demand of one country's people may not have any positive impact on other countries.

• Group and organizational impact: Religion and politics also influence human sentiment. Depending on people's beliefs, groups, or organizations, their sentiment on a topic can vary.

Other limitations are related to data collection, filtering, and system design.

• Data source availability: The availability of Bengali corpora for sentiment analysis is low. For this study we did not find any standard classified Bengali text corpus, which is why we had to create our own.

• Noisy data: We created our corpus by parsing social media content, which contains a lot of noisy data. We removed the noise, sometimes manually and sometimes programmatically, based on predefined filtering rules.

• Bengali-phonetics-based filtering: In this experiment we only worked with texts containing Bengali phonetics, which filtered out Romanized Bengali texts. But people often use Romanized text to write their thoughts.

5.3 Future Work

Although our corpus is currently built around sentiment polarity, a multi-class model could certainly be prepared given enough time and larger volumes of data.

• Identify different human emotions: A text can represent very specific emotional states such as happiness, sadness, fear, anger, and surprise. In the future, we can work on this multi-class classification instead of only identifying sentiment polarity.

• Scoring multiple emotions: Representing a document by its percentage of each emotion would be an excellent improvement.

• Romanized Bengali text classification: We pre-processed the dataset to keep only text containing Bengali phonetics, which filtered out Romanized Bengali texts. This narrowed down our dataset and also the scope to work with Bengali sentences written in Latin letters (Romanized Bengali text).

• Work with emoticons: Filtering special characters removed all emoticons from the textual posts, but emoticons play a vital role in expressing sentiment. We intend to work with emoticons in our next research on sentiment analysis.

Bibliography

[1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[2] Andrew M Dai, Christopher Olah, and Quoc V Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015.
[3] A gentle introduction to Doc2Vec. https://medium.com/wisio/a-gentle-introduction-to-doc2vec-db3e8c0cce5e, 2020. (Visited on 02/03/2020).
[4] Wikipedia - LSTM. https://en.wikipedia.org/wiki/Long_short-term_memory, 2018. (Visited on 10/27/2018).
[5] Deep Dive into Bidirectional LSTM. https://www.i2tutorials.com/technology/deep-dive-into-bidirectional-lstm/, 2020. (Visited on 02/03/2020).
[6] Erik Cambria. Affective computing and sentiment analysis. IEEE Intelligent Systems, 31(2):102–107, 2016.
[7] Rui Gaspar, Cláudia Pedro, Panos Panagiotopoulos, and Beate Seibt. Beyond positive or negative: Qualitative sentiment analysis of social media reactions to unexpected stressful events. Computers in Human Behavior, 56:179–191, 2016.
[8] Social Media Statistics & Facts. https://www.statista.com/topics/1164/social-networks/, 2018. (Visited on 10/27/2018).
[9] Social Media Stats Bangladesh. http://gs.statcounter.com/social-media-stats/all/bangladesh, 2018. (Visited on 10/27/2018).
[10] Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2(1-2):1–135, 2008.
[11] Sentiment analysis. https://en.wikipedia.org/wiki/Sentiment_analysis, 2018. (Visited on 10/27/2018).
[12] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196, 2014.
[13] Peter D. Turney. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL '02, pages 417–424, 2002. ISSN 0738467X.
[14] Kushal Dave, Steve Lawrence, and David M Pennock. Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews. In Proceedings of the 12th International Conference on World Wide Web (WWW '03), pages 519–528, 2003.
[15] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79–86, 2002.
[16] T. Wilson, J. Wiebe, and P. Hoffmann. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 347–354, 2005.
[17] Amandeep Kaur and Vishal Gupta. A Survey on Sentiment Analysis and Opinion Mining Techniques. Journal of Emerging Technologies in Web Intelligence, 5(4):367–371, 2013.
[18] Xi Ouyang, Pan Zhou, Cheng Hua Li, and Lijun Liu. Sentiment analysis using convolutional neural network. In 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, pages 2359–2364, 2015.
[19] Shaika Chowdhury and Wasifa Chowdhury. Performing sentiment analysis in Bangla microblog posts. In 2014 International Conference on Informatics, Electronics and Vision (ICIEV), 2014.
[20] Amitava Das and Sivaji Bandyopadhyay. Opinion-polarity identification in Bengali. In International Conference on Computer Processing of Oriental Languages, pages 169–182, 2010.
[21] Md Al-Amin, Md Saiful Islam, and Shapan Das Uzzal. Sentiment analysis of Bengali comments with Word2Vec and sentiment information of words. In ECCE 2017 - International Conference on Electrical, Computer and Communication Engineering, pages 186–190, 2017.
[22] Asif Hassan, Mohammad Rashedul Amin, Abul Kalam Al Azad, and Nabeel Mohammed. Sentiment analysis on Bangla and Romanized Bangla text using deep recurrent models. In IWCI 2016 - 2016 International Workshop on Computational Intelligence, pages 51–56, 2017.
[23] Word2Vec. https://code.google.com/archive/p/word2vec/, 2018. (Visited on 10/27/2018).
[24] Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. Unsupervised learning of sentence embeddings using compositional n-gram features. arXiv preprint arXiv:1703.02507, 2017.
[25] Doc2vec paragraph embeddings. https://radimrehurek.com/gensim/models/doc2vec.html, 2018. (Visited on 10/27/2018).
[26] Alexander Genkin, David D Lewis, and David Madigan. Large-scale Bayesian logistic regression for text categorization. Technometrics, 49(3):291–304, 2007.
[27] Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. In European Conference on Machine Learning, pages 137–142. Springer, 1998.
[28] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[29] Keras: The Python Deep Learning library. https://keras.io, 2018. (Visited on 10/27/2018).
[30] Foster Provost and Ron Kohavi. On applied research in machine learning. In Machine Learning, pages 127–132, 1998.
[31] David Martin Ward Powers. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. International Journal of Machine Learning Technology, 2(1):37–63, 2011.
[32] Ron Kohavi et al. A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI, volume 14, pages 1137–1145. Montreal, Canada, 1995.
[33] Scikit-learn - Machine Learning in Python. https://scikit-learn.org, 2018. (Visited on 10/27/2018).
[34] Facebook Reactions. http://minimaxir.com/2016/06/interactive-reactions/, 2018. (Visited on 10/27/2018).
[35] Facebook Graph API. https://developers.facebook.com/docs/graph-api/, 2018. (Visited on 10/27/2018).
[36] Socian Bangla Sentiment Dataset. https://github.com/socianltd/socian-bangla-sentiment-dataset-labeled/, 2018. (Visited on 10/27/2018).
[37] NumPy. https://numpy.org/, 2020. (Visited on 02/03/2020).
[38] NumPy Random Permutation. https://numpy.org/devdocs/reference/random/generated/numpy.random.permutation.html, 2020. (Visited on 02/03/2020).
[39] Sklearn Cross Validation Score. https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html, 2020. (Visited on 02/03/2020).
[40] Cedric De Boom, Steven Van Canneyt, Steven Bohez, Thomas Demeester, and Bart Dhoedt. Learning semantic similarity for very short texts. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pages 1229–1234. IEEE, 2015.
Appendix A

My Publications

1. Hoque, M. T., Rifat-Ut-Tauwab, M., Kabir, M. F., Sarker, F., Huda, M. N., and Abdullah-Al-Mamun, K. (2016, May). Automated Bangla sign language translation system: Prospects, limitations and applications. In 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV) (pp. 856-862). IEEE.

2. Hoque, M. T., Islam, A., Ahmed, E., Mamun, K. A., and Huda, M. N. (2019, February). Analyzing Performance of Different Machine Learning Approaches With Doc2vec for Classifying Sentiment of Bengali Natural Language. In 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE) (pp. 1-5). IEEE.

Contents

List of Figures
List of Tables
1 Introduction
  1.1 Motivation
  1.2 Aim and Objectives
  1.3 Contribution
  1.4 Organization of the Thesis
2 Background Materials
  2.1 Literature Review
    2.1.1 Non-Bengali Languages
    2.1.2 Bengali Language
  2.2 Natural Language Processing
  2.3 Sentiment Analysis
    2.3.1 Different Levels of Sentiment Analysis
      2.3.1.1 Document level
      2.3.1.2 Sentence level
      2.3.1.3 Entity level
  2.4 Corpus Construction
    2.4.1 Scripting
    2.4.2 Preprocessing
    2.4.3 Data Set Labeling
  2.5 Data Model Construction
    2.5.1 Word Embedding Techniques
      2.5.1.1 Word2Vec
      2.5.1.2 Sentence2Vec
      2.5.1.3 Doc2Vec
  2.6 Types of Machine Learning Algorithms
    2.6.1 Supervised Machine Learning
    2.6.2 Unsupervised Machine Learning
    2.6.3 Semi-supervised Machine Learning
    2.6.4 Reinforcement Machine Learning
  2.7 Machine Learning Tools for Classification
    2.7.1 Regular Machine Learning Classifiers
      2.7.1.1 Logistic Regression (LR)
      2.7.1.2 Linear Discriminant Analysis (LDA)
      2.7.1.3 Support Vector Machine (SVM)
      2.7.1.4 K-Nearest Neighbors
      2.7.1.5 Decision Tree (DT)
      2.7.1.6 Gaussian Naive Bayes (GaussianNB)
    2.7.2 Deep Learning Classifiers
      2.7.2.1 Long Short-term Memory (LSTM)
      2.7.2.2 Bidirectional Long Short-term Memory (BLSTM)
      2.7.2.3 Sequential Model (SM)
  2.8 Performance Evaluation
    2.8.1 Confusion Matrix
    2.8.2 Precision
    2.8.3 Recall
    2.8.4 F1-Score
    2.8.5 Accuracy
    2.8.6 Macro Average for Precision, Recall and F1-score
    2.8.7 k-Fold Cross Validation
  2.9 Summary
3 Proposed Method
  3.1 Overview of proposed system
  3.2 Corpus Creation
    3.2.1 Data Collection
    3.2.2 Data Filtering
    3.2.3 Data Labeling
  3.3 Data Model Selection
  3.4 Choosing Machine Learning Classifiers
  3.5 Result and Performance Evaluation
  3.6 Summary
4 Experimental Analysis
  4.1 Experiments
    4.1.1 Corpus Construction
    4.1.2 Model Generation
      4.1.2.1 TF-IDF Averaged Word2vec Model
      4.1.2.2 Doc2vec Model
    4.1.3 Classifier Design
    4.1.4 Summary
  4.2 Result and Analysis
    4.2.1 k-Fold Cross Validation
      4.2.1.1 10-Fold Cross Validation - TF-IDF Averaged Word2vec
      4.2.1.2 10-Fold Cross Validation - Doc2vec
    4.2.2 Doc2vec vs TF-IDF Averaged Word2vec
    4.2.3 Discussion
    4.2.4 Summary
5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Limitations
  5.3 Future Work
A My Publications
Topic Modelling and Sentiment Analysis with the Bangla Language: A Deep Learning Approach Combined with the Latent Dirichlet Allocation

Mustakim Al Helal

A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, University of Regina, Regina, Saskatchewan, September 2018.

UNIVERSITY OF REGINA - FACULTY OF GRADUATE STUDIES AND RESEARCH - SUPERVISORY AND EXAMINING COMMITTEE

Mustakim Al Helal, candidate for the degree of Master of Science in Computer Science, has presented a thesis titled, Topic Modelling and Sentiment Analysis with the Bangla Language: A Deep Learning Approach Combined with the Latent Dirichlet Allocation, in an oral examination held on August 28, 2018. The following committee members have found the thesis acceptable in form and content, and that the candidate demonstrated satisfactory knowledge of the subject material.

External Examiner: Dr. Yllias Chali, University of Lethbridge (via Skype)
Supervisor: Dr. Malek Mouhoub, Department of Computer Science
Committee Member: Dr. Samira Sadaoui, Department of Computer Science
Committee Member: Dr. David Gerhard, Department of Computer Science
Chair of Defense: Dr. Maria Velez-Caicedo, Department of Geology

Abstract

In this thesis, topic modelling and sentiment analysis for the Bangla language are researched, with two contributions lining up together: we propose different models for both the topic modelling and the sentiment analysis tasks. Much research exists on both problems, but it does not address the Bangla language. Topic modelling is a powerful technique for the unsupervised analysis of large document collections. Various efficient topic modelling techniques are available for English, one of the most spoken languages in the whole world, but not for other spoken languages. Bangla, the seventh most spoken native language in the world by population, needs automation in different aspects. This thesis deals with finding the core topics of a Bangla news corpus and classifying news with a similarity measure, which is one of the contributions; it is the first tool for Bangla topic modelling. The document models are built using LDA (Latent Dirichlet Allocation) with bigrams. Over recent years, people in Bangladesh have become heavily involved in social media with Bangla text. Through this involvement, people post their opinions about products and businesses across different social sites, Facebook being the most heavily weighted. We collected data from Bangla Facebook comments and applied a state-of-the-art algorithm to extract the sentiments, which is the other contribution. Our proposed system demonstrates efficient sentiment analysis, and we performed a comparative analysis against the existing sentiment analysis system for Bangla. Extracting sentiments from the Bengali language is not straightforward, however, because of its complex grammatical structure. A deep learning based method was applied to train the model and understand the underlying sentiment. The main idea is confined to word-level and character-level encoding, in order to see the differences in model performance. We therefore explore different algorithms and techniques for topic modelling and sentiment analysis for the Bangla language.
Acknowledgements

My first debt of gratitude goes to my supervisor Dr. Malek Mouhoub, who encouraged me to follow this path and provided me with constant support during my M.Sc. studies. I would like to express my sincere thanks for his valuable guidance, financial assistance, and constant encouragement. His enthusiasm, patience, and diverse knowledge helped and enlightened me on many occasions. The freedom of thinking I received from him helped me overcome the difficulties in my research. I feel proud to be his research student and cannot imagine a better supervisor.

I acknowledge the Faculty of Graduate Studies and Research for providing me with financial means, in the form of scholarships, which contributed towards my tuition fees. I thank the UR international office for helping me engage in different community work.

I also thank Dr. Samira Sadaoui and Dr. David Gerhard, my thesis committee members, who read my thesis and provided invaluable suggestions and useful comments for improvement.

I would like to take this opportunity to thank every member of the Department of Computer Science who has helped me throughout my studies.

Last but not least, I would like to extend my deepest gratitude to my beloved parents for their unconditional love and support throughout my entire life. It would not have been possible for me to come to Canada and achieve a prestigious scholarship without my parents' never-ending support. I dedicate my hard-earned M.Sc. degree to my beloved parents.

POST DEFENCE ACKNOWLEDGEMENT

My thanks go to Dr. Yllias Chali of the University of Lethbridge for being the external examiner for my M.Sc. thesis and for providing me with his invaluable comments and suggestions.

Contents

Abstract
Acknowledgements
List of Figures
List of Tables
Abbreviations
1 Introduction
  1.1 Problem Statement and Motivations
  1.2 Proposed Solution and Contributions
  1.3 Thesis Organization
2 Literature Review and Background
  2.1 Literature Study
  2.2 Background Knowledge
    2.2.1 Topic Modelling
    2.2.2 Latent Dirichlet Allocation
    2.2.3 Latent Semantic Indexing
    2.2.4 Hierarchical Dirichlet Process
    2.2.5 Singular Value Decomposition
    2.2.6 Evaluation of Topics
    2.2.7 Recurrent Neural Network
    2.2.8 Long Short Term Memory
    2.2.9 Gated Recurrent Unit
      2.2.9.1 Update Gate
      2.2.9.2 Reset Gate
      2.2.9.3 Current Memory Content
      2.2.9.4 Final Memory at Current Time-Stamp
    2.2.10 Evaluation of Sentiment Analysis Model
3 Sentiment Analysis
  3.1 Data Collection and Preprocessing
  3.2 Character Encoding
  3.3 Methodology
  3.4 Proposed Model
    3.4.1 Baseline Model
    3.4.2 Character Level Model
  3.5 Experimentation
  3.6 Results and Discussion
4 Topic Modelling
  4.1 The Corpus
  4.2 The Crawler
  4.3 Preprocessing and Cleaning
    4.3.1 Tokenization
    4.3.2 Stop Words
    4.3.3 Bag of Words Model
    4.3.4 Bigram
    4.3.5 Removing Rare and Common Words
  4.4 Proposed Model
  4.5 Algorithm
  4.6 Experimentation
    4.6.1 Topic Extraction
    4.6.2 Similarity Measure
    4.6.3 Performance Comparison with Other Topic Model Algorithms
  4.7 Methodology for Classifying News Category
5 Conclusion and Discussion
Bibliography

List of Figures

2.1 A typical RNN [31]
2.2 A typical LSTM [32]
2.3 A recurrent neural network with a gated recurrent unit [32]
2.4 The GRU unit [32]
2.5 Diagram showing the sigmoid activation for merge [32]
2.6 GRU reset function [32]
2.7 Diagram showing GRU tanh function [32]
2.8 Diagram showing GRU function [32]
3.1 The dataset for the sentiment analysis work
3.2 Characters
3.3 Character encoding
3.4 Word level model architecture
3.5 Character level model architecture
3.6 Training and testing loss
3.7 Training and testing accuracy
3.8 Comparison of the two models
4.1 The news corpus: CSV file
4.2 Bangla Independent Vowels
4.3 Bangla Dependent Vowels
4.4 Bangla Consonants
4.5 Bangla words
4.6 Proposed model for topic extraction
4.7 Coherence based number of topics
4.8 Coherence based number of topics (t=10)
4.9 Coherence based number of topics (t=20)
4.10 Similarity Dissimilarity of Cosine average
4.11 Model performance comparison
4.12 Document topic distribution for movie news
4.13 Document topic distribution for Trump news

List of Tables

2.1 List of some example words after POS tagging with positive and negative polarity [1]
2.2 Comparison of F-measure for both the classifiers with different features [1]
2.3 Comparison of characteristics of topic modelling methods [2]
2.4 Different topic evolution models and their main characteristics [2]
2.5 Top five topic probabilities for article Light [3]
2.6 An example document-term matrix
3.1 Data Statistics
3.2 Example confusion matrix
4.1 Stop words
4.2 Document-Term matrix
4.3 Term-Topic matrix
4.4 Extracted Topics from the optimized LDA
4.5 Showing cosine similarity score between different models

Abbreviations

NLP   Natural Language Processing
LDA   Latent Dirichlet Allocation
LSI   Latent Semantic Indexing
HDP   Hierarchical Dirichlet Process
SVD   Singular Value Decomposition
RNN   Recurrent Neural Network
LSTM  Long Short Term Memory
GRU   Gated Recurrent Unit
Chapter 1

Introduction

In this chapter, the motivation and the problem to be solved are discussed, along with some general discussion to get started with the basic ideas of Natural Language Processing (NLP). The solution to the discussed problem is defined, and the main contributions of this thesis are presented. The organization of the thesis is discussed in Section 1.3.

1.1 Problem Statement and Motivations

Natural Language Processing (NLP) has been a demanding field of research within the Artificial Intelligence domain for quite a long time. Recently, NLP has become one of the top trending research fields because of the large amount of available text data. Data science and machine learning are collectively working as the next big thing in Computer Science. Moreover, the World Wide Web (WWW) has become significant over the last 25 years, which is fueling the data science and machine learning sectors. This presence has made Bangla texts available for classification and other classic natural language processing tasks. However, there has so far been no attempt at topic modelling for the Bangla language in terms of the Latent Dirichlet Allocation (LDA), a powerful topic modelling algorithm. This served as one motivation for this thesis.

Another motivation for this work was social media such as Facebook. Bangla is the native language of Bangladesh, and Dhaka, the capital of Bangladesh, ranks as the city with the second highest number of Facebook users in the world [4]. Facebook adopted the Bangla language, and Bangla speakers have become more comfortable using Bangla for Facebook comments and statuses since the Bangla typing application Avro was first introduced in 2003. The goal of this thesis was to build a sentiment analysis model for Bangla with a different approach that can beat the existing results. Some systems have performed sentiment analysis in Bangla, but no research has analysed sentiments with a Recurrent Neural Network (RNN).

Topic modelling and sentiment analysis for Bangla are both new concepts as far as the NLP field is concerned. With the topic modelling approach, classification of a large document collection can be achieved. However, applying topic modelling algorithms to the Bangla language is a challenge, since it has a completely different grammatical structure compared to English, and data pre-processing is different for Bangla. This research deals with finding the core topics in a Bangla news corpus consisting of 7,134 news articles from renowned online news portals of Bangladesh. A method is proposed for the classification of the news. Perplexity [5] is discussed, the coherence of different topics is explored, and a comparison between LDA, LSA, HDP, and LSI is discussed in the subsequent sections of this thesis.

The second contribution of the thesis is sentiment analysis. A collection of Bangla comments from Facebook was used as the dataset for this research. If someone wants to know which comments are positive, which are negative, and which are neutral, there has been no tool to identify this for the Bangla language. Previous research in English used a Naive Bayes classifier or lexical resources; the problem is that its performance was limited to boolean models and was not tested with multinomial features [6]. An approach is proposed in this thesis that successfully achieves better accuracy than the existing sentiment analysis model for Bangla.
Sentiment analysis in Bangla has a wide range of future possibilities with deep learning. Deep learning methods are being applied successfully to natural language processing problems and are achieving acceptable accuracy. A Recurrent Neural Network, a variation of the Neural Network (NN), is used for processing sequential data. The classification of sentiments is a step-by-step process, discussed in detail in the respective chapters.

Long Short Term Memory (LSTM) is used in the proposed method; it is a popular and successful technique for handling long-term dependency problems. A variant of LSTM known as the Gated Recurrent Unit (GRU) is also used in this research.

Research in NLP with the Bangla language is a necessity after the recent flourishing of the language on the world wide web. However, there is still no research on preprocessing news article datasets for topic modelling in Bangla, which is a further motivation for this research. A comparison of the LDA, HDP, and LSI algorithms is also a new step for Bangla, which will benefit current and future researchers in understanding the performance of these popular topic modelling algorithms for the Bangla language.

1.2 Proposed Solution and Contributions

This thesis provides several insights regarding topic modelling and sentiment analysis for the Bangla language. Firstly, an analysis of the topic modelling techniques is provided with graphical illustrations, tables, and comparisons, and a deep explanation of the recurrent neural network is also given.

Secondly, a novel idea is proposed for sentiment analysis of the Bangla language. Both word-level and character-level representations are produced for training the model, and LSTM and GRU are used after the character-level encoding. Character-level representation has never before been used for sentiment analysis in Bangla.

Thirdly, the major drawback of the word-level representation for deep neural network models is addressed in this research: the model errs on any word in the test set that was not seen in training. The character-level encoding therefore plays a vital role in understanding the sequence of the sentence and hence predicting the underlying sentiment.

Fourthly, an optimized model with the right hyperparameters for the sentiment analysis is proposed. In the proposed model, an embedding layer of 67 units and an output layer with three units are used, after establishing through trial and error that this is the optimal configuration for this particular task (see the sketch after this list).

Fifth, a model with the LDA algorithm is developed to train on the Bangla language and predict the news class from the Bangla news corpus. The LSA, LSI, and HDP algorithms are also compared. Topic coherence is reported with graphical illustrations, and individual news topics are collected to make them meaningful for a human reader.

Sixth, finding related news groups with a similarity measure is implemented. This is a classification task with the LDA algorithm: the cosine similarity values for the news are computed, and coherence-based topic optimization is employed to find the right number of topics for the corpus.

Finally, different experiments and analyses demonstrate the correctness and efficiency of the proposed models.
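As a rough illustration of the fourth contribution, here is a minimal sketch, assuming Keras; reading "67 units" as the embedding width is an assumption, and the alphabet size, GRU width, and all other hyperparameters are illustrative, not the thesis's exact configuration.

```python
# A minimal sketch (not the thesis's exact model): character-level input,
# a 67-unit embedding layer, a recurrent layer, and a three-unit softmax
# output for positive/negative/ambiguous. All sizes are assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dense

alphabet_size = 100   # hypothetical number of distinct characters
model = Sequential([
    Embedding(input_dim=alphabet_size, output_dim=67),  # 67-unit embedding
    GRU(128),                                           # illustrative width
    Dense(3, activation="softmax"),                     # three output units
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```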
1.3 Thesis Organization

The rest of the thesis is organized as follows. Chapter 2 discusses related work and explains the background knowledge necessary for understanding the rest of the thesis. Basic concepts of the Recurrent Neural Network (RNN) are introduced, the LSTM algorithm is discussed in detail, various concepts of topic modelling are explained, and text processing requirements are also discussed, with examples for the reader's easy understanding.

Chapter 3 introduces the first part of the contribution. The basics of sentiment are discussed, the RNN algorithm is illustrated with its mathematical explanation, and how LSTM works on a sequential text stream is described. The proposed model is presented with graphical representations, and the model architecture is explained step by step. The experiments for the proposed model are described, followed by the results with graphical illustrations. Two different approaches to sentiment extraction are explained, and the results section focuses on their comparison.

Chapter 4 covers the second contribution of the thesis: a topic-modelling-based algorithm that works with the Bangla language. The most famous algorithm for topic modelling, Latent Dirichlet Allocation (LDA), is discussed, and data collection and preprocessing for Bangla are explained. The Bigram data model is discussed first. The chapter then presents the various topic modelling experiments on the Bangla news corpus: topic extraction for both individual news items and the complete corpus, a comparison of LSI, HDP, and a tweaked version of LDA to check which performs better in terms of topic coherence, and finally the topic-wise categorization of the news.

Chapter 5 then discusses the conclusions and future work on how these models can be further enhanced for more research with the Bangla language, and proposes future applications of these models across different fields.

Chapter 2

Literature Review and Background

2.1 Literature Study

Due to its complex grammatical structure and the scarcity of resources, Bangla has not been well researched in the field of Natural Language Processing (NLP); most Sentiment Analysis (SA) work has been carried out on the English language. For this research, related works dealing with SA were studied, along with many papers on topic modelling for the second contribution towards Bangla.

The most relevant research was carried out on romanized Bangla in [7]. The researchers collected a dataset from different social media sources and applied a deep neural network approach; more precisely, the LSTM algorithm was used to train the model, achieving an accuracy of 78% with categorical cross entropy. Two types of loss functions were examined to check the model's performance: besides categorical cross entropy, binary cross entropy was also used. A consolidated dataset was generated from different social media and preprocessed to be readily usable for SA research with a deep neural network. The experimental model is based on an RNN, or more specifically LSTM. Any deep neural network uses multiple layers as its basic architecture: each layer receives its inputs from the previous layer and, after performing its respective job, passes its outputs on to the next layer.
Accordingly, the model in that research has several layers. The first layer is the embedding layer, where the word-to-vector representation is done. The second layer is the LSTM layer with a dimension of 128 units. Then comes a fully connected third layer with different activations for classification.

In the experiment, the last layer has 1, 2, or 3 neurons, depending on the classification task being attempted. When only positive and negative sentiments were classified, the final fully connected layer was configured with 1 or 2 neurons. However, the dataset has another type of entity, annotated as ambiguous (neither positive nor negative); to classify those entities, 3 neurons were used. Binary cross entropy was used for classifying only positive and negative sentiment, while categorical cross entropy was employed in the case involving 3 neurons. This model scored 78% accuracy with the categorical cross entropy loss and ambiguous entities removed; it scored much lower, only 55%, with categorical cross entropy, modified text, and ambiguous entities converted to a second class. The paper did not address character-level encoding of the dataset. However, Hassan et al. [7] addressed the practical reality of romanized Bangla, i.e. writing Bangla with English letters, which is an important fact about how people in Bangladesh write on different social media.

Other SA work was done on Bangla data collected from micro-blog posts [1]. The main goal of that research was to obtain a dataset from Bangla micro-blogs and train a model on the opinionated data to identify the polarity as positive or negative. The dataset consists of a collection of Twitter posts in Bangla: 1300 instances were collected and 1000 instances were picked for training the model. Some English text alongside the Bangla texts was also included in the dataset. Moreover, all hashtags were kept after pre-processing, as they can potentially carry meaningful sentiment. Along with the general pre-processing done for any NLP research, Parts of Speech (POS) tagging was also performed on the dataset; POS tagging is basically used as a strong indicator of the subjectivity of the respective sentence. Initially, the researchers in [1] manually developed a list of Bangla words grouped into nouns, pronouns, adjectives, verbs, etc. Each word was then translated to English in order to obtain a score indicating its polarity as positive or negative.

Table 2.1: Example words after POS tagging with positive and negative polarity [1]

POS    Positive-Emotion Words     Negative-Emotion Words
NOUN   Love, Happiness            Sorrow, Trouble
VERB   Enjoy, Attract             Damage, Hate
ADJ    Beautiful, Enthusiastic    Bad, Useless
ADV    Well                       Unfortunately

One problem that might need to be addressed with POS tagging is that when a Bangla word is translated to its English counterpart, the meaning might change. A given Bangla word in a sentence might have a different context, so even a positive word may sarcastically carry a negative sentiment, which is a very challenging thing to train for as far as NLP is concerned.
In the research in [1], a semi-supervised technique was applied due to the unavailability of a large amount of labeled data. A self-training bootstrapping method was used: first, a small chunk of the dataset was trained on the basis of the frequency of positive and negative words; after the model acquired knowledge from this little data, another chunk of the data was applied to gain insight into the polarity, and this process was continued until the whole dataset was labelled. Once the dataset was ready, feature extraction began. Bigrams were generated, and stemming, the process of converting each word into its root, was performed with a rule-based technique. Emoticons in the Twitter posts were also considered an important part; an emoticon polarity dictionary developed by Leebecker et al. was used to unwrap the meaning of the emoticons. Bangla and English lexicons were used to identify polarity.

Two state-of-the-art classifiers, SVM and MaxEnt, were used in [1], and their results were compared on different feature sets. A comparison of F-measures with the different features is shown in Table 2.2. The results show that SVM outperforms MaxEnt; the highest accuracy obtained was as much as 93%, with SVM using unigrams and emoticons as features [1]. So the result was satisfactory given the small amount of data and the scarce resources for the Bangla language itself. However, neutral statements, a.k.a. ambiguous sentences, were not considered in that research; they are addressed in this thesis, whose proposed model handles neutral statements that convey neither positive nor negative sentiment.

Table 2.2: Comparison of F-measure for both classifiers with different features [1]
(the first pair of columns corresponds to SVM, the second to MaxEnt)

Features                               Pos.   Neg.   Pos.   Neg.
unigram                                0.65   0.68   0.67   0.69
unigram+stemming                       0.69   0.70   0.67   0.70
bigram                                 0.69   0.42   0.034  0.67
unigram+bigram                         0.65   0.69   0.67   0.70
unigram+negation                       0.66   0.68   0.67   0.69
emoticon                               0.89   0.87   0.74   0.83
unigram+emoticon                       0.93   0.93   0.83   0.85
unigram+negation+emoticon              0.92   0.92   0.83   0.85
lexicon(Bangla)                        0.71   0.34   0.71   0.38
lexicon(English+Bangla)                0.53   0.73   0.57   0.74
unigram+lexicon(English+Bangla)        0.71   0.71   0.67   0.69
unigram+lexicon(English+Bangla)+POS    0.71   0.47   0.71   0.46
all                                    0.89   0.85   0.83   0.85

A hybrid method was proposed in [8] to identify sentiments. The authors first determined whether or not a sentence is subjective, designed a model from a mixture of various POS features collected from phrase-level similarity, and used a syntactic model to perform the sentiment analysis. By doing so, they achieved an overall recall rate of 63.02% using SVM on news data. The research uses a dependency graph to identify related nodes in a sentence, where each node represents a word; this helps determine intra-chunk polarity relationships. Chunk-level information was used as a feature in the proposed method, since it helps reduce the ambiguity of polarity. SentiWordNet [9] was used as an important feature for identifying words carrying potential sentimental meaning: words collected from SentiWordNet were given a score according to their corresponding polarity. After feature selection and processing, SVM was applied to train the classifier, reaching an accuracy of 70.04% and a recall of 63.02%. Although this research includes efficient features in its model, it does not come up with any significant result in terms of accuracy; one reason could be the size of the dataset, as a relatively small dataset was used.
Similar research addressed data from social media [10]. Two different approaches were discussed there for identifying the polarity of a Facebook post: the first is a Naive Bayes classifier, and the second involves lexical resources. An observation made on the results is that the lexical resource works better for domain-specific sentiment analysis.

Character-level Convolutional Neural Networks (CNN) were explored in [11]. Several large-scale datasets were used by Xiang Zhang et al. to demonstrate the efficiency of CNNs in achieving state-of-the-art results, and different models such as TF-IDF and Bag of Words were compared in terms of performance. In that research, a one-dimensional temporal CNN was used, with max pooling as a key module for training deeper models. Characters were used in encoded form: one-hot encoding was performed over an input alphabet of size m. In total, 70 characters were used, consisting of 26 English letters, 10 digits, 33 other characters, and the new-line character. The input feature length was set to 1024, the same as in our contribution; logically, 1024 characters can cover most of the texts of interest for this kind of corpus. An interesting aspect of the model is its data augmentation ability, since it has often been observed that data augmentation makes a model capable of controlling the generalization error, also known as the out-of-sample error.

A comparison between different models was carried out. First, the Bag of Words model was built, with the bag containing only the 50,000 most frequent words from the training set. Bag of n-grams and its TF-IDF variant were then used; for the latter, the 500,000 most frequent words were selected from the training subset of each dataset. Finally, a Bag of Means model was used, in which K-means clustering was applied for classification, considering all words appearing more than 5 times during training.

Deep learning methods were another section of the paper. Two different types of deep learning models were used and compared: a word-based CNN and an LSTM, which is the same pairing as in this thesis, so that research has a concrete connection with the work done here. For the word-based CNN, both pre-trained word2vec embeddings and a lookup-table technique were used; in both cases the embedding size was chosen as 300 to make a fair comparison. The technique followed by Zhang et al. in [11] for the LSTM model is different: they combine a couple of LSTM models to generate a feature vector and then perform multinomial logistic regression on this vector. A vanilla architecture, a variation of LSTM, was also used, along with gradient clipping. The most important conclusion from this research is that a character-level CNN can be efficient without the need for words, which strongly indicates that language can also be treated as a signal of any other kind. However, how well these deep learning models work depends on many parameters, such as the data size, the number of layers, and the choice of alphabet.
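To make the encoding scheme concrete, here is a minimal sketch of one-hot character encoding in the style described for [11]; the alphabet string and the one_hot_encode helper are illustrative assumptions, not the exact alphabet used in that paper or in this thesis.

```python
# A minimal sketch of one-hot character encoding over a fixed alphabet,
# padding/truncating every text to a fixed feature length.
import numpy as np

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}\n"
char_to_idx = {c: i for i, c in enumerate(alphabet)}
max_len = 1024  # feature length, as in [11]

def one_hot_encode(text):
    """Return a (max_len, len(alphabet)) one-hot matrix for `text`."""
    mat = np.zeros((max_len, len(alphabet)), dtype=np.float32)
    for pos, ch in enumerate(text.lower()[:max_len]):
        idx = char_to_idx.get(ch)
        if idx is not None:   # characters outside the alphabet stay all-zero
            mat[pos, idx] = 1.0
    return mat

x = one_hot_encode("Thumbs up!")
print(x.shape)  # (max_len, alphabet size)
```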
Some other works were also studied for this research, including neural network research topics, representation learning by back-propagating errors, and other fundamental dependency learning with the LSTM algorithm, all of which helped carry this research forward. In [12], a learning procedure known as back propagation was proposed, specific to learning in neural networks. Back propagation repeatedly adjusts the weights of the connections in the network to minimize the difference between the actual output vector and the desired output vector. Through this adjustment, the role of the hidden units in the neural network becomes much clearer, and they come to represent important features of the task domain. This gives an understanding of how the weights of the CNN model should be adjusted and relates this thesis to the exploration of neural network architectures. In [13], research was carried out on the underlying concepts of the LSTM algorithm itself, which also contributed to the idea of using this classic algorithm for the sentiment analysis task in this thesis. LSTM is a novel, efficient gradient-based algorithm that works well where values must be remembered over an arbitrary time interval. The LSTM architecture consists of gated units and neurons, briefly described in [13]; different datasets were explored to learn how many runs LSTM takes to learn languages appropriately, and learning rates were compared. Counterpart research by Yoshua Bengio et al. [14] showed experimentally that long-term dependency is difficult to learn with a gradient descent approach, and that the algorithm becomes inefficient as the duration for which memory must be held increases. In [15], further LSTM experiments were done with gated recurrent neural networks. Dropout techniques were studied in [16], which sheds light on techniques for avoiding overfitting; the subsequent sections discuss more on dropout and on Adam [17] for optimization. More research is cited for the sentiment analysis work as needed in Chapter 3.

For the second part of the thesis, topic modelling, a number of research papers and related works were studied. Most of the work has been done in English, thanks to the availability of structured datasets; no paper reports topic modelling applied to the Bangla language, specifically developing models with the LDA algorithm, and that is where this thesis focuses. Many experiments have nevertheless been conducted over the years to extract topics from English corpora through unsupervised learning with the classic LDA algorithm. The first paper, [2], is a survey on topic modelling that discusses two different aspects: the classic topic modelling algorithms, such as LDA and Latent Semantic Analysis (LSA), and topic trends. The first aspect is experimented with in this thesis, and the LDA algorithm is the base of the proposed model. The reason LDA has gained popularity in the NLP community is that it can capture both the document and the word perspectives of a corpus.
LDA mimics the way a document is written: given the topics, it generates a document that best fits those topics, and it can hence understand the correlation between documents and topics. The survey discusses each topic modelling algorithm and critically analyses its performance. LDA perceives the data as a mixture of topics, each topic containing some words, where each word within a topic has a probability of belonging to that topic. LSA, on the other hand, tries to extract semantic content from the vectorized representation of a document; another goal of LSA is to measure the similarity of documents and then pick the most relevant word that matches each document. These things can apparently be done with LDA as well, but the inner structure of LSA differs from that of LDA; LDA collects the related texts and divides by the number of documents. The next section of this chapter briefly discusses how exactly these models work.

The discussion of topic evolution models in [2] sheds light on their pros and cons. It is important to understand topic trends over time, and there are several different techniques for finding topic evolution; word co-occurrence and time are taken into consideration to understand the topic trend. Another class of models is the dynamic topic models, which generate topic distributions at different epochs. Most of the time a sequential corpus is used to understand the topic trends: the assumption is that the topic distributions change at each time slice, and that the same topic can have different probabilities over different time slots depending on the dataset. The paper compares different topic modelling and topic evolution models in terms of their characteristics as well as their pros and cons.

Table 2.3: Comparison of characteristics of topic modelling methods [2]

Latent Semantic Analysis (LSA): 1) can pick up synonyms within a topic; 2) lacks a robust statistical background.
Probabilistic Latent Semantic Analysis (PLSA): 1) generates each word from a single topic, even though various words in one document may be generated from different topics; 2) handles polysemy.
Latent Dirichlet Allocation (LDA): 1) stop words need to be removed manually; 2) LDA is found to be unable to represent relationships among topics.
Correlated Topic Model (CTM): 1) uses the logistic normal distribution to create relations among topics; 2) allows the occurrence of words in other topics and topic words.

CTM and PLSA are not used in the model proposed in this thesis, but [2] is a good survey for understanding the baseline models and their characteristics; different topic evolution methods were also studied in that paper.

Blei et al. made the most fundamental contribution on LDA in [18]. That paper explains specifically how LDA works, with empirical experiments demonstrating that the LDA algorithm can provide state-of-the-art results across topic modelling, text classification, and collaborative filtering. LDA assumes that there are K underlying latent topics from which the whole corpus can be generated, where each topic consists of the relevant words that belong together, each with a probability value.
<s>is generated by taking a mixtureof topics and then the topics consist of word distributions. In the experimentstwo different corpora were tested to explore how the LDA can generate topicsand their corresponding words. Perplexity was calculated of the model to test theperformance in general. Perplexity is an estimate on how well a probability modelTable 2.4: Different topic evolution models and their main characteristics [2]Models Main characteristics ofmodelsModeling topic evolution bycontinuous-time model1) “Topics over time: A nonmarkov continuous timemodel of topic trends”Modeling topic evolution bydiscretizing time1)“Dynamic Topic Models”2)“Multiscale topictomography”3)“A non parametricapproach to pairwise dy-namic topic correlationdetection”Modeling topic evolution byusing citation relationshipas well as discretizing time1)“Detecting topic evolu-tion in scientific literature:How can citations help”2)“The web of topics: Dis-covering the topology oftopic evolution in a corpus”works for an unseen data. The lower the perplexity the better the prediction is.For both of the dataset it was seen in the experiment that the LDA perplexityreduces with number of topics in [18]. So, the accuracy results are validated.Among the other topic modelling algorithms LDA seems to work better and givingreasonably good results. So in this research a comparison of different models andtheir topic coherence will be discussed in chapter 4. Results for text classificationand collaborative filtering were also combined and LDA worked better for theseapplications as well. So, the conclusion is that LDA being a generative model canbe used as a module to different applications and extend it in different directions.An insightful study of LDA has been done in [3]. The main goal of this paperwas to classify the documents with similarity and understand the topics achievedfrom the LDA. So, two different types of datasets were used. The authors used theWikipedia and Twitter datasets. Data pre-processing was applied. Tokenization,followed by stop word removal was done. Afterwards, a dictionary was made beforefinally training the model with the LDA. After the data was pre-processed, thetwo different models were individually trained for the two datasets. Topics wereachieved for both the datasets.Some experiments were carried out with these topics to understand how well theyrelate to the data. A document was chosen randomly and the topic distributionfor that data was illustrated. Some relevant topics with high probabilities wereachieved. Topic-word distribution for those topics were then experimented whichshowed the relevance of the words with the documents. This is a very straightfor-ward and effective research on the LDA to understand the fundamental concepts.Apart from the topic distribution, document similarity measure was also achievedusing Jensen-Shannon divergence to calculate the distance. However, in this the-sis the cosine similarity is used to perform the similarity and classification taskssince it is a simplified technique in terms of the sparse matrix distance calculationand also it demonstrated effective results. The following table shows how a topicdistribution is made for a particular document.Table 2.5: Top five topic probabilities for article Light [3]Topics Topic47 Topic45 Topic16 Topic24 Topic30Probabilities 0.05692600 0.45920304 0.28462998 0.01518027 0.01328273In Table 2.5, the probability values are calculated with high precision in order todistinguish the small fractional changes. 
An insightful study of LDA was carried out in [3]. The main goal of that paper was to classify documents by similarity and to interpret the topics obtained from LDA. Two different types of datasets were used: the authors worked with Wikipedia and Twitter data. Data pre-processing was applied: tokenization followed by stop-word removal, after which a dictionary was built before finally training the model with LDA. Once the data was pre-processed, two models were trained individually, one per dataset, and topics were obtained for both.

Some experiments were carried out with these topics to understand how well they relate to the data. A document was chosen at random and its topic distribution was illustrated; several relevant topics with high probabilities were obtained. The topic-word distributions of those topics were then examined, which showed the relevance of the words to the documents. This is a very straightforward and effective study of LDA for understanding the fundamental concepts. Apart from the topic distribution, a document similarity measure was also obtained, using the Jensen-Shannon divergence to calculate the distance. In this thesis, however, cosine similarity is used for the similarity and classification tasks, since it is simpler in terms of distance calculation over sparse matrices and it demonstrated effective results. The following table shows the topic distribution obtained for a particular document.

Table 2.5: Top five topic probabilities for the article "Light" [3]

Topics         Topic47      Topic45      Topic16      Topic24      Topic30
Probabilities  0.05692600   0.45920304   0.28462998   0.01518027   0.01328273

In Table 2.5, the probability values are reported with high precision in order to distinguish small fractional changes.
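To make the two distance measures concrete, here is a small sketch comparing the Jensen-Shannon divergence used in [3] with the cosine similarity adopted in this thesis; the two topic distributions are made-up examples.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon, cosine

# Made-up topic distributions for two documents (each sums to 1).
p = np.array([0.057, 0.459, 0.285, 0.015, 0.013, 0.171])
q = np.array([0.100, 0.400, 0.300, 0.050, 0.050, 0.100])

# SciPy returns the square root of the JS divergence, a proper
# metric in [0, 1] (0 means identical distributions).
js_distance = jensenshannon(p, q)

# Cosine similarity = 1 - cosine distance (1 means identical direction).
cos_similarity = 1.0 - cosine(p, q)

print(f"Jensen-Shannon distance: {js_distance:.4f}")
print(f"Cosine similarity:       {cos_similarity:.4f}")
```

For high-dimensional sparse document vectors, cosine similarity only touches the non-zero entries, which is why it is the cheaper choice here.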
Similar research was done by Blei et al. from the application perspective of LDA; Blei proposed the LDA algorithm for topic modelling in 2003. In [19], an exploration of the underlying semantic structure of a dataset from the JSTOR archive of scientific journals was carried out, and a methodology was proposed to facilitate efficient browsing of the electronic dataset. Dynamic LDA was also applied to this dataset to observe the topic trends over a period of time. In the present research, the corresponding effort is to understand the topics of Bangladeshi newspapers.

A survey of LDA and its variants was performed in [20], which also discussed the classification task of LDA. Both [19] and [20] covered dynamic LDA for obtaining topic trends over time, which is important for news analytics. In Bangla, no previous effort has been made to understand topic trends. The dataset developed for this research is organized chronologically: there are over 7000 news articles ordered from the oldest to the most recent, which will support studying topic trends as an extension of this work in the future.

Among the few studies on Bangla text summarization, a novel approach was proposed in [21]. Although no LDA-based methods were discussed in [21], it is still a very insightful piece of research on the Bangla language. A collection of news articles was used as the dataset. In essence, the paper proposed a scoring method for capturing the semantic meaning of a document, with two different approaches: a sentence-level scoring method and a word-level scoring method. Before any scoring was applied to the dataset, general preprocessing was performed: tokenization, stemming, stop-word removal, and so on. The word-level scoring combined word frequency, numeric value identification, repeated-word distance, and cue-word counts. In the word frequency phase, word occurrences are counted across the whole collection, which helps generate the Bag of Words model. Numeric values are considered in this research because they can effectively carry the semantic meaning of a text article, so they were included in the vector space representation. Repeated-word distance was also calculated: a word occurring multiple times in an article within a short distance signals the importance of that word, which contributes to summary generation, and it further helps in identifying cue words, which may give a clear indication of the summary. For sentence-level scoring, the techniques implemented included sentence length, sentence position, and uniform sentences. Finally, words and sentences were clustered in three different ways [21]: sentence ranking based on similarity, final gist analysis, and aggregate similarity were combined to cluster different types of news together. The performance of the proposed method was evaluated against a human-clustered group: the system-generated news summaries were much faster to produce and almost as relevant as the human-generated ones.
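As an illustration of the kind of word-level scoring described in [21], the sketch below computes a word frequency score with a simple repeated-word proximity bonus for one tokenized article; the combination rule is invented for this example and is not the exact formula of [21].

```python
from collections import Counter, defaultdict

def word_scores(tokens):
    """Toy word-level scores: raw frequency plus a repeated-word
    proximity bonus (closer repetitions score higher)."""
    freq = Counter(tokens)

    # Record the positions of each word to measure repetition distance.
    positions = defaultdict(list)
    for i, tok in enumerate(tokens):
        positions[tok].append(i)

    scores = {}
    for word, count in freq.items():
        pos = positions[word]
        if len(pos) > 1:
            gaps = [b - a for a, b in zip(pos, pos[1:])]
            avg_gap = sum(gaps) / len(gaps)
            proximity = 1.0 / avg_gap    # smaller gaps -> larger bonus
        else:
            proximity = 0.0
        # Invented combination: frequency plus proximity bonus.
        scores[word] = count + proximity
    return scores

tokens = ["dhaka", "traffic", "jam", "traffic", "accident", "traffic", "road"]
for word, score in sorted(word_scores(tokens).items(), key=lambda x: -x[1]):
    print(f"{word:10s} {score:.2f}")
```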
2.2 Background Knowledge

This section discusses the background knowledge required for the reader to understand the rest of the thesis. It briefly covers the algorithms, concepts, and tools used in the experiments.

2.2.1 Topic Modelling

In the age of digitization, over the last decade an enormous amount of textual data has been, and still is being, generated extremely rapidly on the world wide web (WWW). As soon as data science was fueled by big data, the importance of analyzing textual data emerged: from understanding customers' reviews across different business fields to understanding the latent meaning of a corpus (a collection of text data). Blei proposed a novel approach to discovering topics, which eventually enabled document classification and sentiment analysis and opened up many analysis perspectives for textual data [18]. Topic modelling is, at its core, the task of understanding the topics in a corpus. It is a probabilistic method for exploring topics that would be impossible, or in some cases very time consuming, for the naked human eye. As the name suggests, topic modelling is a technique by which topics are generated from a corpus automatically, providing insight into the hidden patterns of the dataset. It is not a rule-based or regular-expression-based technique in which generic rules determine the importance of words; rather, it is a statistical, probabilistic approach.

An example of topic modelling is as follows. Assume there is a large amount of data discussing different subjects such as health, food, education, and traffic. A good topic model would group the related words together, each with a probability representing its chance of belonging to that topic, and should generate topics such as these:

Topic 1: body, breathing, burning, acne, headache, hygiene
Topic 2: eat, digest, delicious, frozen, taste, spice
Topic 3: engineering, science, degree, university, job, money
Topic 4: accidents, rush, weather, speed, ticket, car

This is how topic modelling reveals the hidden semantic structure of the data: topics are generated automatically, without anyone having to read a large collection of text. This helps in classification, sentiment analysis, understanding trends in a business, and so on. Topic modelling has a strong impact on data science and Natural Language Processing. Half of the contribution of this thesis concerns topic modelling for the Bangla language, which has never been explored due to the scarcity of datasets as well as other resources for Bangla. Another important application area of topic modelling is the newspaper industry. Automatic classification plays a vital role in the online news world: a huge number of news articles are generated each day, and understanding the topics and trends of news is important for analysis and prediction. The world-renowned news agency The New York Times uses topic modelling to serve its customers' needs and to suggest news genres according to each customer's profile. Bangla being the 6th most spoken language in the world, such automation is a necessity. A topic model can also work the way Netflix does in terms of suggestions: it can suggest articles, news, or even books for digital libraries.
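To make this concrete, a minimal sketch using the gensim library (one common choice of toolkit, assumed here for illustration) recovers topics from a tiny invented corpus:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Tiny invented corpus: each inner list is one tokenized document.
texts = [
    ["body", "breathing", "acne", "headache", "hygiene"],
    ["eat", "digest", "delicious", "taste", "spice"],
    ["engineering", "science", "degree", "university", "job"],
    ["accidents", "rush", "speed", "ticket", "car"],
]

dictionary = Dictionary(texts)                     # word <-> id mapping
corpus = [dictionary.doc2bow(t) for t in texts]    # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=4, passes=20, random_state=1)

# Each topic is printed as a weighted mix of its most probable words.
for topic_id, words in lda.print_topics(num_topics=4, num_words=3):
    print(topic_id, words)
```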
2.2.2 Latent Dirichlet Allocation

There are many different approaches to obtaining topics from a set of text data: topics can be generated from a Term Frequency-Inverse Document Frequency (TF-IDF) model, from non-negative matrix factorization, and so on. However, Latent Dirichlet Allocation (LDA) is the most popular technique for topic modelling, and this approach has never been explored for the Bangla language. LDA is essentially a probabilistic algorithm that generates topics from a Bag of Words model. Before LDA itself is discussed, a general overview of the notions of topic and term is given as follows.

A topic is a collection of words, each with a different probability of occurring in a document about that topic. If there are multiple documents, a topic initially consists of all the words in the collection; after the model is trained, the topics consist of the words that are highly relevant.

A term is a word in a topic. Each term belongs to a topic, and the LDA algorithm generates a topic-term matrix.

LDA can be viewed as a matrix factorization technique [22]. First, it generates a document-term matrix. In the matrix below, assume that there are two documents, d1 and d2, and two terms, t1 and t2:

Table 2.6: An example document-term matrix

      t1   t2
d1     0    1
d2     1    0

In this matrix, the rows represent the documents and the columns represent the terms t1 and t2; a 0 or 1 indicates whether the term occurs in the document. LDA then generates two lower-level matrices, a document-topic matrix and a topic-term matrix, which are discussed in detail in chapter 3. Next, a list of all the unique words across the documents is made. The algorithm iterates through each word and adjusts the topic-term matrix with a new assignment: a new topic K is assigned to the word W with a probability P [22]. How this probability is calculated is discussed in the related section. The algorithm keeps iterating until the probabilities no longer change considerably, that is, until further iterations produce only very small fractional changes. At this point the algorithm converges and the topics are expected to have been extracted.
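To visualize the matrices involved, the following sketch builds the document-term matrix with scikit-learn and inspects the shapes of the two lower-level matrices produced by LDA; the corpus and the number of topics are invented, and scikit-learn's variational LDA stands in for the iterative update loop described above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["university degree job", "car speed ticket", "degree science job"]

# Document-term matrix: rows = documents, columns = terms (counts).
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())
print(dtm.toarray())

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(dtm)   # document-topic matrix (3 x 2)
topic_term = lda.components_         # topic-term matrix (2 x vocab size)

print(doc_topic.shape, topic_term.shape)
```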
2.2.3 Latent Semantic Indexing

Latent Semantic Indexing (LSI), also known as Latent Semantic Analysis (LSA), is a widely used topic modelling algorithm apart from LDA. It tries to understand the underlying semantic meaning of words in a document. Topic modelling is not a straightforward process for a machine, since a concept is rarely carried by a single word. It would be much easier to model topics if each word mapped to a single concept; in reality, many words can address the same meaning, and people use different words in the same context. Like English, Bangla has many words to describe a single context. LSA has efficient yet simple techniques for relating words. For example, if the word "bank" is used with "mortgage", "loan", or "credit", it denotes a financial institution; but if "bank" is used with "fish" or "tide", it denotes a river bank.

LSA is an alternative to LDA that comes with the goal of finding a relevant document by searching words. Finding documents by just searching words is complicated, since words can have multidimensional contexts, which is not simple for a machine to learn. However, LSA uses a simple idea to build a concept space by keeping track of both the words and the documents in which they occur.
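A minimal sketch of this idea, assuming scikit-learn: LSA is commonly implemented as a truncated SVD of a TF-IDF weighted word-document matrix, which projects documents into a low-dimensional concept space; the documents and the number of concepts here are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "bank approved the mortgage loan and credit line",
    "loan interest at the bank rose with new credit rules",
    "fish swam near the river bank at low tide",
]

# Word-document co-occurrence, TF-IDF weighted.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

# Truncated SVD collapses correlated words (loan/credit/mortgage)
# into shared "concept" dimensions.
svd = TruncatedSVD(n_components=2, random_state=0)
concepts = svd.fit_transform(X)      # documents in concept space

# The two finance documents end up close; the river one stands apart.
print(cosine_similarity(concepts))
```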