The proposed system has been evaluated with tests such as fluency and adequacy tests. To ensure the quality of the output, the Bilingual Evaluation Understudy (BLEU) score has been calculated. Some Bangla phrases generated with their respective UNL phrases by the proposed Bangla DeConverter are shown in Table 3. Our proposed system achieved a BLEU score of 0.76. Since no other Bangla DeConverter has been proposed yet, we compared our system with a Punjabi DeConverter [8] to test the efficiency of our work. A comparison of the results, shown in Figure 10, was conducted based on the BLEU score, the fluency score, and the percentage of grammatically correct sentences.

Figure 10. Result comparison with the Punjabi DeConverter [8]. The chart reports:

Metric                            Punjabi [8]   Proposed Bangla DeConverter
Fluency score (out of 4.00)       3.61          3.63
Grammatically correct sentences   89%           90%
BLEU score                        0.72          0.76

We have evaluated the proposed Bangla DeConverter against only one other DeConverter (the Punjabi DeConverter) because no DeConverter has been proposed for the Bangla language yet. The sentence structures of the Bangla and Punjabi languages are very similar. Unlike English, Bangla is a free word order language known for its rich semantic and morphological features, similar to the Punjabi language. The English language is patterned as Subject, Verb, Object (SVO), while both the Bangla and Punjabi languages follow a Subject, Object, Verb (SOV) pattern. Therefore, we have evaluated the proposed DeConverter against the Punjabi DeConverter.

It is well known that a rule-based machine translation system provides good accuracy on written and plainly structured documents such as simple articles, weather reports, etc., but it cannot work efficiently on real-world documents. The main reason is that a human language does not follow a fixed set of rules. Human languages are full of regional variations, special cases, and new rules. New rules continuously evolve and old rules continually change in most, if not all, languages. Therefore, even a slight improvement may play an important role.
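To make the evaluation metric concrete, the following is a minimal Python sketch of how a corpus-level BLEU score such as the 0.76 reported above can be computed with NLTK; the tokenized reference and candidate sentences are illustrative placeholders, not the paper's actual 300-expression test set.

```python
# Minimal sketch of a corpus-level BLEU computation with NLTK.
# The tokenized sentences below are illustrative placeholders,
# not the paper's actual test set.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each entry holds the list of acceptable reference translations
# (here just one per sentence), already tokenized.
references = [
    [["tara", "ekti", "garite", "kore", "office", "jaye"]],
    [["ami", "singapore", "hoye", "australia", "jai"]],
]
# The DeConverter outputs to be scored, tokenized the same way.
candidates = [
    ["tara", "garite", "kore", "office", "jaye"],
    ["ami", "singapore", "hoye", "australia", "jai"],
]

# Smoothing avoids zero scores when some higher-order n-gram is missing.
bleu = corpus_bleu(references, candidates,
                   smoothing_function=SmoothingFunction().method1)
print(f"Corpus BLEU: {bleu:.2f}")
```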
Table 3. Bangla sentences produced by the proposed Bangla DeCo with their corresponding input UNL expressions.

Sentence No. 1
Input UNL expression:
{unl}
agt(spend(icl>pass>do,com>time).@entry.@past,i(icl>person))
pos(holiday(icl>leisure>thing,equ>vacation),i(icl>person))
obj(spend(icl>pass>do,com>time).@entry.@past,holiday(icl>leisure>thing,equ>vacation))
plc(holiday(icl>leisure>thing,equ>vacation),paris(iof>national_capital>thing))
{/unl}
Output: আমি ছুটির দিন প্যারিসে কাটিয়েছি (Ami chhutir din parishe katiechhi; "I spent my holiday in Paris.")

Sentence No. 2
Input UNL expression:
{unl}
agt(perform_an_action(icl>do).@entry.@present,we(icl>group).@pl)
pos(work(icl>activity>abstract_thing),we(icl>group).@pl)
obj(perform_an_action(icl>do).@entry.@present,work(icl>activity>abstract_thing))
man(perform_an_action(icl>do).@entry.@present,perfectly(icl>how,equ>absolutely))
{/unl}
Output: আমরা আমাদের কাজ সঠিকভাবে করি (Amra amader kaj shothikvabe kori; "We do our work perfectly.")

Sentence No. 3
Input UNL expression:
{unl}
aoj(city(icl>administrative_district).@entry.@present,tokyo(iof>national_capital>thing))
man(beautiful(icl>adj,ant>ugly),very(icl>how,equ>extremely))
mod(city(icl>administrative_district).@entry.@indef.@present,beautiful(icl>adj,ant>ugly))
{/unl}
Output: টোকিও একটি সুন্দর শহর (Tokyo ekti shundor shohor; "Tokyo is a very beautiful city.")

Sentence No. 4
Input UNL expression:
{unl}
aoj(have(icl>be,equ>possess,obj>thing,aoj>thing).@entry.@present,i(icl>person))
obj(have(icl>be,equ>possess).@entry.@present,tomorrow(icl>time,ant>yesterday))
aoj(meet(icl>join>be,cao>thing,aoj>thing).@progress,tomorrow(icl>time,ant>yesterday))
{/unl}
Output: আগামীকাল আমার একটি মিটিং আছে (Agamikal amar ekti meeting achhe; "I have a meeting tomorrow.")

Sentence No. 5
Input UNL expression:
{unl}
agt(go(icl>move>do,plt>place,plf>place,agt>thing).@entry.@present,they(icl>group).@pl)
plt(go(icl>move>do,plt>place).@entry.@present,office(icl>organization,icl>place))
met(go(icl>move>do,plt>place).@entry.@present,car(icl>motor_vehicle>thing))
{/unl}
Output: তারা একটি গাড়িতে করে অফিস যায় (Tara ekti garite kore office jaye; "They go to office by a car.")

Sentence No. 6
Input UNL expression:
{unl}
aoj(admit(icl>give_access>be,plt>place).@entry.@past,he(icl>person))
plc(admit(icl>give_access>be,plt>place).@entry.@past,hospital(icl>medical_institution>))
rsn(admit(icl>give_access>be,plt>place).@entry.@past,illness(icl>ill_health>thing))
{/unl}
Output: সে অসুস্থ হওয়ায় হাসপাতালে ভর্তি হয়েছে (Se oshusto howaye haspatale vorti hoyeche; "He was admitted to a hospital due to illness.")

Sentence No. 7
Input UNL expression:
{unl}
agt(go(icl>move>do,plt>place,plf>place,agt>thing).@entry.@present,i(icl>person))
plt(go(icl>move>do,plt>place).@entry.@present,australia(iof>country>thing))
via(australia(iof>country>thing),singapore(iof>island>thing))
{/unl}
Output: আমি সিঙ্গাপুর হয়ে অস্ট্রেলিয়া যাই (Ami Singapore hoye Australia jai; "I go to Australia via Singapore.")

8. Conclusions

This research paper has proposed a Bangla DeConverter. Syntactic linearization is a significant part of the proposed system for the extraction of quality Bangla language texts. Syntactic linearization of simple and compound sentences with scope-nodes and matrix-based priority of relations has been discussed in this paper. The proposed Bangla DeCo system has been tested on 300 UNL expressions. The system attained a fluency score of 3.63 on a four-point scale and a BLEU score of 0.76. The proposed Bangla DeCo can successfully convert a UNL expression into the corresponding Bangla text. Researchers of other native languages can explore our system to develop DeCos for their respective native languages. Currently, our system can convert simple Bangla sentences accurately, but for complex and compound sentences the system sometimes does not provide efficient results. In our future work, we will address those issues and make our system more accurate.

Author Contributions: The authors' contributions are as follows: Conceptualization, M.N.Y.A. and M.L.R.; Methodology, M.N.Y.A. and M.L.R.; Validation, M.N.Y.A. and G.S.; Formal analysis, M.N.Y.A. and M.L.R.; Resources, M.N.Y.A.; Data curation, M.L.R.; Writing (original draft preparation), M.N.Y.A. and M.L.R.; Writing (review and editing), G.S.; Visualization, G.S.; Supervision, M.N.Y.A.; Project administration, M.N.Y.A.; Funding acquisition, M.N.Y.A. and G.S.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Uchida, H.; Zhu, M.; Senta, T.C.D. Universal Networking Language; UNDL Foundation: Int. Environ. House, 2005; 6.
2. EnConverter Specification, Version 3.3; UNL Center/UNDL Foundation: Tokyo, Japan, 2002.
3. Boguslavsky, I.; Frid, N.; Iomdin, L.; Kreidlin, L.; Sagalova, I.; Sizov, V. Creating a Universal Networking Language module within an advanced NLP system. In Proceedings of the 18th Conference on Computational Linguistics, Saarbrücken, Germany, 31 July–4 August 2000; Volume 1, pp. 83–89.
4. DeConverter Specification, Version 2.7; UNL Center/UNDL Foundation: Tokyo, Japan, 2002.
5. Martins, R.T.; Hasegawa, R.; Rino, L.H.M.; Oliveira Junior, O.N.D.; Nunes, M.D.G.V. Specification of the UNL-Portuguese enconverter-deconverter prototype, 1997. Available online: https://bdpi.usp.br/item/000951455 (accessed on 21 October 2019).
6. Dave, S.; Parikh, J.; Bhattacharyya, P. Interlingua-based English–Hindi Machine Translation and Language Divergence. Comput. Transl. 2001, 16, 251–304.
7. Kumar, P.; Sharma, R.K. Punjabi DeConverter for generating Punjabi from Universal Networking Language. J. Zhejiang Univ. Sci. C 2013, 14, 179–196. doi:10.1631/jzus.C1200061.
8. Blanc, E. About and around the French Enconverter and the French Deconverter. Univers. Netw. Lang. Adv. Theory Appl. 2005, 12, 157–166.
9. Shi, X.; Chen, Y. A UNL Deconverter for Chinese; UNL Book: Instituto Politécnico Nacional, Mexico, 2005.
10. Daoud, D.M. Arabic generation in the framework of the Universal Networking Language. Univers. Netw. Lang. Adv. Theory Appl. 2005, 12, 195–209.
11. Keshari, B.;
Bista, K. UNL Nepali DeConverter. In Proceedings of the 3rd International Conference on CALIBER, Cochin University of Science and Technology, Kochi, India, 2–4 February 2005; pp. 70–76.
12. Singh, S.; Dalal, M.; Vachhani, V.; Bhattacharyya, P.; Damani, O.P. Hindi generation from Interlingua (UNL). In Proceedings of the Machine Translation Summit XI, Copenhagen, Denmark, 10–14 September 2007.
13. Nalawade, A. Natural Language Generation from Universal Networking Language. Master's Thesis, Indian Institute of Technology, Bombay/Mumbai, India, 2007.
14. Vachhani, V. UNL to Hindi DeConverter. Bachelor's Thesis, Dharamsinh Desai Institute of Technology, Nadiad, India, 2006.
15. Dey, K.; Bhattacharyya, P. Universal Networking Language based analysis and generation of Bangla case structure constructs. Univers. Netw. Lang. Adv. Theory Appl. 2006, 12, 215–229.
16. Vora, A. Generation of Hindi sentences from Universal Networking Language. Bachelor's Thesis, Dharamsinh Desai Institute of Technology, Nadiad, India, 2002.
17. Hrushikesh, B. Towards Marathi Sentence Generation from Universal Networking Language. Master's Thesis, Indian Institute of Technology, Bombay/Mumbai, India, 2002.
18. Ru: Russian and English Language Server. Available online: http://www.unl.ru (accessed on 18 August 2019).

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Emotion Detection from Bangla Text Corpus Using Naïve Bayes Classifier

4th International Conference on Electrical Information and Communication Technology (EICT), 20–22 December 2019, Khulna, Bangladesh

Sara Azmin, Department of Computer Science & Engineering, Premier University, Chittagong, Bangladesh, azmin.sara17@gmail.com
Kingshuk Dhar, Department of Computer Science & Engineering, Premier University, Chittagong, Bangladesh, kingshuk2006@yahoo.com

Abstract: Emotions are an important part of everyday human interaction. Emotions can be expressed by means of written text, verbal speech, or facial expressions. In recent years, the practice of expressing emotion in social media or blogs has increased rapidly. People write about their feelings and opinions on political or global issues. All these social activities have made it essential to gather and analyze human emotion from text. Although the field of emotion detection has been explored extensively for the English language, the investigation of this domain for the Bangla language is still in its infancy. Our paper aims at detecting multi-class emotions from Bangla text using a Multinomial Naïve Bayes (NB) classifier along with various features such as a stemmer, a parts-of-speech (POS) tagger, n-grams, and term frequency-inverse document frequency (tf-idf). Our final model was able to classify the text into three emotion classes (happy, sad, and angry) with an overall accuracy of 78.6%.

Keywords: Emotion Detection, Machine Learning, Natural Language Processing, Bangla Text Processing, Naïve Bayes.

I. INTRODUCTION

Human emotion has always been a core interest of study in psychology, as emotions are an important element in understanding human nature. In psychology, emotion has been defined by many professors and specialists. Professor of Psychology David G. Myers says that human emotion involves "...physiological arousal, expressive behaviors, and conscious experience." [1]. Nowadays the internet has made it easier to connect to people in any part of the world. The recent growing usage of social media has caught the attention of computer science researchers, especially in the study of human-computer interaction. Social media platforms like Facebook and Twitter have created huge opportunities for their users to convey their feelings, opinions, feedback, and emotions through text. This has made it possible to analyze the emotions of people living in any part of the world on serious issues or crises. It is also beneficial in the case of product reviews and market analysis.

Emotion detection from text is one of the core applications of artificial intelligence (AI) and Natural Language Processing (NLP). It is an important area of study for improving the interaction between humans and machines. Although this topic has been widely studied for the English language, it is still a less explored area for the Bangla language. Bangla is the 7th most spoken language in the world, with nearly 228 million native speakers. Bangla is the official language of Bangladesh, and a major part of the Indian population also uses this language. At present, the number of internet subscribers in Bangladesh has reached a total of 91.421 million (http://www.btrc.gov.bd/content/internet-subscribers-bangladesh-january-2019). The current number of people in Bangladesh using Facebook is
around 33,996,000 (https://napoleoncat.com/stats/facebook-users-in-bangladesh/2019/09). Nowadays, a large number of people use Bangla to write on social media. Given this rapid growth of Bangla users, it is quite important to focus on the study of emotion detection in the Bangla language.

Most of the recent works in Bangla focus on binary sentiment analysis. A lot of work has been done where positive is tagged as the happiness emotion and negative as the sadness emotion, but this is not sufficient to analyze a text. According to Ekman [2], happiness, fear, anger, sadness, surprise, and disgust are the six basic human emotions. In social media, people do not just share their happy and sad feelings. They comment on different posts with anger, they write about their fear of any uncertainty, and so on. Therefore, Bangla being one of the most widely used languages, the need for understanding the meaning of anything written in it should be taken into account. This can be utilized in many areas such as market analysis, predicting public reaction, and so on.

In this paper, we have worked with a Bangla text corpus that contains comments from Facebook users on different topics. The proposed method preprocesses the corpus to simplify the classification process. It then classifies the data into three emotion classes, namely happy, sad, and angry, using a Multinomial Naïve Bayes classifier.

II. RELATED WORKS

Not much work has been done on emotion detection in Bangla; rather, most papers have focused on sentiment analysis or binary sentiment polarity. A sentiment detection approach using machine learning techniques has been proposed in [3]; the authors also analyzed some features but did not actually use them in their research. They focused on binary classification, using tf-idf to find the most informative words, and achieved 83% accuracy with this approach.

A sentiment polarity detection on Bangla tweets using word and character n-grams with Naïve Bayes has been proposed in [4]. The authors also looked at the SentiWordNet feature, which is a lexical resource for sentiment polarity analysis. They classified the tweets using Multinomial NB. Using 1000 training and 500 test tweets, they achieved 48.5% accuracy.

In another paper [5], a lexicon-based backtracking approach was employed over 301 test sentences for binary emotion classification. The authors first classified the sentiment of the data and then the emotion. The dataset was mainly collected from Facebook statuses, news headlines, textbooks, and direct speech. They claimed an accuracy of 77.16% with this approach.

A good study on emotion tagging has been done in [6]. The authors manually annotated sentence-level text from a web-based Bengali blog corpus and observed the classification results on the corpus they annotated. With 1200 training instances, the Conditional Random Fields (CRF) classifier gave them an average accuracy score of 58.7%, and a Support Vector Machine (SVM) managed to reach 70.4%.

A computational approach for analyzing and tracking emotions is studied in [7]. This paper
focuses on the identification of emotional expressions at the word, phrase, sentence, and document level, along with the emotion holders and events. Emotions have been tracked on the basis of subject or event. The authors observed a micro F-score of 0.63 on 200 test sentences, collected from Bangla news and blogs, for sentential emotion tagging.

In a case study for Bengali [8], the authors developed a blog-based emotion analysis system. The blog posts were collected from a Bengali web blog archive. They considered 1100 sentences and used a morphological analyzer for identifying lexical keywords. The average evaluation results they obtained for precision, recall, and F1-score were 0.59, 0.64, and 0.62, respectively, and for the morphology-based system, they achieved an F1-score of 0.65.

A multilabel sentiment and emotion detection approach has been proposed in [9]. The authors considered a dataset containing Bangla, English, and Romanized Bangla comments on different YouTube videos. They proposed a deep learning-based approach that classifies Bangla comments with three-class and five-class sentiment labels, and they built models to extract emotions as well. Results showed 65.97% and 54.24% accuracy for the three-class and five-class sentiment labels, respectively. They also extracted emotions with an accuracy of 59.23%.

In one of the most recent works [10] on fine-grained Bangla emotion classification, the authors compared the results of five different classical machine learning techniques. They introduced a dataset with six different emotion categories and showed that a non-linear SVM with an RBF kernel achieves an average accuracy of 52.98% and an overall macro F1-score of 0.33. One remarkable contribution of that paper is the exploration of manifold preprocessing and feature selection techniques. We are using the same dataset produced by them, but only considering three classes because of the highly imbalanced nature of this dataset.

III. METHODOLOGY

Given a set of comments and emotion labels (e.g., happy, sad, angry) for each of those comments, our main objective was to classify the comments with the appropriate emotion label using a supervised machine learning algorithm, the Naïve Bayes classifier. We went through a number of pre-processing steps before classifying the data to remove any unnecessary information. Variants of feature selection techniques were also examined to improve the classification performance. Each of these techniques is important, since each can affect the overall result of the supervised approach that we took. In this section, we provide an overview of our approach. Figure 1 shows the detailed architecture of the proposed method.

Our system works in two phases: a training phase and a test phase. At the very beginning, the dataset was divided into training and test data. The training set is processed with several pre-processing steps followed by various feature selection techniques before being fed to the classifier. The test data goes through this same process afterward, and the classifier predicts a probable emotion label for each document. The predefined emotion labels are then compared with the predicted ones to evaluate the efficiency of the system. The proposed method can be divided into four sequential phases:
1. Dataset Preparation
2. Pre-processing
3. Feature Selection and Extraction
4. Classification

Fig 1: Architecture of the proposed method.

A. Dataset Preparation

The dataset we used for this work contains a large number of user comments from different Facebook groups and some public posts of popular bloggers. The comments were collected based on different socio-political issues. We adopted the corpus developed by the authors of [10]. The dataset was annotated by the authors themselves based on the presence of words and phrases corresponding to emotional content, as well as the overall emotion lying in each comment. Among the six categories, we only considered the 4200 comments of three emotion classes: happy, sad, and angry. This is because the dataset was highly imbalanced, and the number of annotations for the other classes was quite small compared to the above three classes; this in turn affected their reported performance (52.98%), since most of the misclassifications were due to training on the imbalanced dataset. We split the above dataset so that 3780 of the comments were used as training data and 420 comments as test data. Table I provides the class distribution of the selected dataset.

TABLE I. CLASS DISTRIBUTION IN THE DATASET

Label   Training Set   Test Set
Happy   1582           230
Sad     1062           104
Angry   1136           86
Total   3780           420

B. Pre-processing

The comments in the dataset contained a lot of useless and duplicate data such as stop words, punctuation, digits, and symbols. To simplify the later steps, we processed and cleaned the data as follows (a minimal sketch of these steps appears after this list):

a) Text Segmentation: First, any extra dots, commas, hyphens, and other symbols were deleted. Punctuation and digits were removed too. Then the sentences were tokenized into separate words. We used the native Python string split function to tokenize the words of the sentences.

b) Handling Emoticons: At this point, we removed all the emoticons from the comments, since we aimed to consider text data only. A string of punctuation marks and emoticons was defined to strip out every single one of them from the data.

c) Stop Word Removal: We removed the stop words from each comment, since stop words are the most frequently repeated words. We filtered stop words (such as হয়, অথচ, অথবা, এবং) and removed them. We considered the Bangla stop word list from an open source project (https://github.com/stopwords-iso/stopwords-bn).

d) Stemming: A word may appear in different forms. We stemmed each token to its root word to make the data easier to process. Stemming Bangla words is quite difficult, since Bangla has a huge number of inflected words. Here, we made lists of prefixes, suffixes, and noun, verb, and article inflections, and then trimmed the left or right part of the word to extract the stem. For example, the three words করেছেন, করবেন, and করছে would be converted into the root word "কর".
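As a rough illustration of the pre-processing pipeline just described, here is a minimal Python sketch; the STOP_WORDS and SUFFIXES lists are tiny illustrative subsets rather than the full stop word list and inflection tables used in the paper, and the regular-expression cleanup stands in for the exact symbol and emoticon handling.

```python
# Minimal sketch of the pre-processing steps above: strip punctuation,
# digits, and emoticons; tokenize with split(); remove stop words;
# and stem by trimming suffixes. STOP_WORDS and SUFFIXES are tiny
# illustrative subsets, not the full lists used in the paper.
import re

STOP_WORDS = {"হয়", "অথচ", "অথবা", "এবং"}   # illustrative subset
SUFFIXES = ["েছেন", "বেন", "ছে"]             # illustrative subset

def clean_and_tokenize(text):
    # Keep only Bangla letters (U+0980-U+09FF) and whitespace; this drops
    # punctuation, digits, and emoticons in a single pass.
    text = re.sub(r"[^\u0980-\u09FF\s]", " ", text)
    return text.split()  # the native Python split() tokenizer

def stem(word):
    # Trim the longest matching suffix to approximate the root word.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

def preprocess(comment):
    return [stem(w) for w in clean_and_tokenize(comment) if w not in STOP_WORDS]

# করেছেন and করবেন both reduce to কর; এবং is filtered as a stop word.
print(preprocess("তারা করেছেন এবং করবেন :-)"))
```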
C. Feature Selection and Extraction

Feature selection and extraction is the most important step in detecting emotion, because it affects the overall result of the work. A good feature selection results in a good prediction, so selecting features properly to enhance the classification is very important. After the completion of the pre-processing phase, we applied several features to evaluate our processed data. We combined different techniques to observe the best possible result.

a) POS Tagging: A POS tagger labels a word based on its grammatical category, assigning each word in a document to its corresponding part of speech. Different approaches can be used for POS tagging. We used a supervised POS tagger based on a Hidden Markov Model (HMM). The POS tagger we used can tag words with 32 tags of parts-of-speech and their subclasses (https://github.com/shaoncsecu/Bangla_POS_Tagger). It is the same POS tagger used in [10], which has a claimed accuracy of 75% on the POS-tagged dataset used there. In our experiment, we considered only three versions of tagging for the purpose of comparison: one with only JJ ("adjective"), another with five tags (JJ, CX, VM, NP, and AMN), and finally one with all tags. The reason behind choosing these specific tags is that most emotion-related words fall under these parts-of-speech categories. We also looked at other combinations, but they did not change the results much.

b) Word n-grams: The n-grams feature is considered very useful for classifying texts. Basically, an n-gram is a sequence of n subsequent words or characters; in our work, we used word n-grams. We observed the performance of unigrams, bigrams, and trigrams to find the best model, and combined the n-grams feature with other features as well to get the overall picture. Bigrams provided a relatively better result than unigrams and trigrams in our work, so we used bigrams for further evaluation. For our experimentation with features, we used scikit-learn [11]. Figure 2 shows an example of a unigram, bigram, and trigram in Bangla text.

Fig 2: Example of n-grams as a feature.

c) Tf-idf Vectorizer: Tf-idf simply corresponds to term frequency times inverse document frequency. One good input representation for NLP tasks is the tf-idf vectorizer, which weights the occurrences of a word in a document instead of taking only raw counts. In our work, we combined both tf-idf and n-grams from scikit-learn [11]. The term frequency counts how many times a particular word appears in a given document, whereas the inverse document frequency accounts for all the documents that contain that word. The formulas to calculate tf-idf are given below:

$$\mathrm{tf}(t, d) = \frac{f_{t,d}}{\sum_{t'} f_{t',d}} \quad (1)$$

$$\mathrm{idf}(t) = \log \frac{N}{\mathrm{df}(t)} \quad (2)$$

$$\text{tf-idf}(t, d) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t) \quad (3)$$

where $f_{t,d}$ is the raw count of term $t$ in document $d$, $N$ is the total number of documents, and $\mathrm{df}(t)$ is the number of documents containing $t$.
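The bigram tf-idf representation described above can be sketched with scikit-learn's TfidfVectorizer as follows; the two toy comments are placeholders, not samples from the corpus.

```python
# Minimal sketch of the bigram tf-idf representation used by the best
# model, via scikit-learn's TfidfVectorizer. The two comments are toy
# placeholders, not samples from the actual corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["আমি খুব খুশি", "আমি খুব রাগান্বিত"]  # placeholder comments

# ngram_range=(2, 2) keeps bigrams only, matching the best configuration
# in Table II; (1, 1) and (3, 3) give the unigram and trigram variants.
vectorizer = TfidfVectorizer(ngram_range=(2, 2))
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # the bigram vocabulary
print(X.toarray())                         # tf-idf weights (docs x bigrams)
```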
D. Classification

The classifier uses the representation of the data that results from all the pre-processing and feature selection steps. For classification, we used a Multinomial Naïve Bayes classifier to predict the emotions from the text. Naïve Bayes is a probabilistic classifier that relies on the Bayesian theorem [12] to build a predictive classification model over the features. We applied it in our work using scikit-learn [11], a widely used library for text classification in Python. In our implementation, we used the Multinomial version of NB, which is provided by scikit-learn as MultinomialNB(). It uses the fit(trainDoc, trainClass) method to train the classifier; in our case, the training documents are the comments and the training classes are the corresponding emotion labels. The NB algorithm uses the following Bayes rule to calculate the class probability given a document, also termed the posterior probability. The class that has the maximum probability among all the classes is selected as the most probable class for that document. It determines the probability of each word or word n-gram with respect to the classes and uses the chain rule to produce the full probability of a document given a class:

$$P(c \mid d) = \frac{P(c) \prod_{i=1}^{n} P(w_i \mid c)}{P(d)} \quad (4)$$

A classification model based on the Multinomial Naïve Bayes classifier has been designed in this work to classify Bangla language texts into three different classes, namely happy, sad, and angry. The model was first trained with the training data, and then the test data were classified with this model. The predicted labels of the test data were then compared to the gold emotion labels to evaluate the performance of the classification model.

IV. EVALUATION AND RESULT

To evaluate the performance of our proposed method, we considered precision, recall, and F1-score for each emotion class, along with the average accuracy. We experimented with different combinations of pre-processing and features to find out which combination produces the best score. Table II shows the performance of each of these models in different combinations.

TABLE II. CLASSIFICATION RESULTS BASED ON DIFFERENT FEATURES

Feature Combination                                                        Accuracy
Emoticon removal + tf-idf + stemmer                                        0.775
Stopword and emoticon removal + stemmer + tf-idf + POS tagger              0.773
Stopword and emoticon removal + stemmer + tf-idf + unigram + POS tagger    0.776
Stopword and emoticon removal + stemmer + tf-idf + bigram + POS tagger     0.786
Stopword and emoticon removal + stemmer + tf-idf + trigram + POS tagger    0.774

Based on all the experiments above, we chose the model with the best features. Our best model uses bigram-based tf-idf with POS features and both the stopword and emoticon processors. With this combination, we were able to reach an overall accuracy of 78.6%. For this same combination, we also evaluated the result using a Support Vector Machine (SVM) for classification. The overall score for SVM was not as good as that of MNB. Table III shows the results of both classifiers for the best combination of features.

TABLE III. CLASSIFICATION RESULTS BASED ON CLASSIFIER

Classifier   Accuracy
MNB          0.786
SVM          0.716

Figure 3 shows the confusion matrix of this classification model on the test set.

Fig 3: Confusion matrix for the three classes.
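Putting the pieces together, the following minimal sketch shows the fit/predict/evaluate loop described above with MultinomialNB(); the four tiny documents stand in for the actual 3780/420 train/test split.

```python
# Minimal sketch of the train/predict/evaluate loop: fit MultinomialNB()
# on bigram tf-idf features of the training comments, then score the
# held-out test comments against their gold labels. The tiny documents
# below are placeholders for the 3780/420 split used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

train_docs = ["আমি খুব খুশি আজ", "আমি খুব রাগান্বিত আজ"]  # placeholder comments
train_labels = ["happy", "angry"]                          # gold emotion labels
test_docs = ["আমি খুব খুশি"]
test_labels = ["happy"]

vectorizer = TfidfVectorizer(ngram_range=(2, 2))
X_train = vectorizer.fit_transform(train_docs)  # learn vocabulary on training data
X_test = vectorizer.transform(test_docs)        # reuse it for the test data

clf = MultinomialNB()
clf.fit(X_train, train_labels)                  # fit(trainDoc, trainClass)
predicted = clf.predict(X_test)

print(accuracy_score(test_labels, predicted))
print(classification_report(test_labels, predicted, zero_division=0))
```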
Table IV shows the detailed evaluation scores for our best model. We can see that the model's prediction for the happy class was really good (an F1-score of 0.843), while for the sad category our model performed poorly. This is due to the fact that our training dataset was imbalanced and the sad class had the lowest number of training examples. However, this in essence represents the real-world scenario, making our model more generalized.

TABLE IV. DETAILED EVALUATION USING BEST MODEL

Emotion Class   Precision   Recall   F1-Score   Accuracy
Happy           0.764       0.968    0.843      0.786
Sad             0.843       0.519    0.643
Angry           0.828       0.616    0.707

We also compared our best model with some of the existing works mentioned in the literature review section. Table V shows the comparison with related works in terms of techniques and results. While evaluating, we have seen that the size of the dataset and the features and classifier used for classification greatly affect the overall result of the work.

TABLE V. COMPARISON WITH RELEVANT STATE-OF-THE-ART WORKS

Paper                    Approach/Algorithm                                            Result (Accuracy/F1-score)
M. Mahmudun, 2016        Tf-idf on training data; frequency of important data          83% (A)
K. Sarkar, 2018          NB                                                            48.5% (A)
T. Rabeya, 2017          Lexicon approach; backtracking technique                      77.16% (A)
S. Bandyopadhyay, 2010   Data annotation; Conditional Random Fields (CRF); SVM         CRF: 58.7%; SVM: 70.4% (A)
D. Das, 2011             Emotion, holder, and topic; sense-based affect estimation;    63.26% (F1)
                         SVM/CRF
S. Roy, 2012             Lexical word-level keyword spotting; ordering of timestamps   65.3% (F1)
N. I. Tripto, 2018       LSTM/CNN; SVM/NB                                              (LSTM) 65.97%; 54.24%; 59.23% (A)
M.A. Rahman, 2019        NB/SVM/KNN/K-means clustering/Decision tree                   (SVM) 52.98% (A)
Our best model           NB                                                            78.6% (A)

V. CONCLUSION

In this paper, we have tried to detect emotions from Bangla text. We used a dataset comprising a large number of user comments from Facebook posts. We processed the data to remove any kind of unnecessary information and noise from it and to make the classification easier. We applied a variety of features like n-grams, a POS tagger, and tf-idf to enhance the efficiency of the classifier. We used a Multinomial Naïve Bayes classifier to classify the data and compared the predicted labels with the gold labels. Our final model was able to classify the test data with an overall accuracy of 78.6%. While doing this work, we faced some problems with the dataset, since it was imbalanced. The original dataset consists of six emotion classes, but we only considered the three classes that had a relatively sufficient number of data. We believe there is some inconsistency in the annotations, which makes the model disagree more. Also, the overall evaluation would have been better if we had more data. Moreover, Bangla is a morphologically rich language, so we faced some difficulties while working with some of the features. In a future study, we would also like to handle negations for better performance.

REFERENCES

[1] D. G. Myers, "Theories of emotion," Psychol. Seventh Ed. New York, NY: Worth Publ., vol. 500, 2004.
[2] P. Ekman, "An argument for basic emotions," Cogn. Emot., vol. 6, no. 3-4, pp. 169–200, 1992.
[3] M. Mahmudun, M. T. Altaf, and S. Ismail, "Detecting Sentiment from Bangla Text using Machine Learning Technique and Feature Analysis," Int. J. Comput. Appl., vol. 975, p. 8887.
[4] K. Sarkar, "Using Character N-gram Features and Multinomial Naive Bayes for Sentiment Polarity Detection in Bengali Tweets," in 2018 Fifth International Conference on Emerging Applications of Information Technology (EAIT), 2018, pp. 1–4.
[5] T. Rabeya, S. Ferdous, H. S. Ali, and N. R. Chakraborty, "A survey on emotion detection: A lexicon based backtracking approach for detecting emotion from Bengali text," in 2017 20th International Conference of Computer and Information Technology (ICCIT), 2017, pp. 1–7.
[6] D. Das and S. Bandyopadhyay, "Labeling emotion in Bengali blog corpus: a
fine grained tagging at sentence level," in Proceedings of the Eighth Workshop on Asian Language Resources, 2010, pp. 47–55.
[7] D. Das, "Analysis and tracking of emotions in english and bengali texts: a computational approach," in Proceedings of the 20th International Conference Companion on World Wide Web, 2011, pp. 343–348.
[8] D. Das, S. Roy, and S. Bandyopadhyay, "Emotion tracking on blogs: a case study for bengali," in International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, 2012, pp. 447–456.
[9] N. I. Tripto and M. E. Ali, "Detecting Multilabel Sentiment and Emotions from Bangla YouTube Comments," in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), 2018, pp. 1–6.
[10] M. Rahman, M. Seddiqui, and others, "Comparison of Classical Machine Learning Approaches on Bangla Textual Emotion Analysis," arXiv Prepr. arXiv:1907.07826, 2019.
[11] F. Pedregosa et al., "Scikit-learn: Machine learning in Python," J. Mach. Learn. Res., vol. 12, no. Oct, pp. 2825–2830, 2011.
[12] J. Han, J. Pei, and M. Kamber, Data Mining: Concepts and Techniques. Elsevier, 2011.
<FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) /JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM 
<FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS <FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
|
<s>Aspect Extraction from Bangla Reviews using Convolutional Neural Network

2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR)

Md. Atikur Rahman, Institute of Information Technology, University of Dhaka, Dhaka, Bangladesh. Email: bsse0521@iit.du.ac.bd
Emon Kumar Dey, Institute of Information Technology, University of Dhaka, Dhaka, Bangladesh. Email: emonkd@iit.du.ac.bd

Abstract: The extensive customer reviews on the web assist customers in purchase decision-making as well as providers in business planning. Summarization of reviews is desirable, as reading all reviews is not feasible for a proper evaluation. Finding aspect categories in reviews is a sub-task of summarization known as Aspect Based Sentiment Analysis (ABSA). In this paper, we present two Bangla datasets for the ABSA task. We collected user comments on the cricket game and annotated them manually; the other dataset consists of consumer reviews of restaurants. A model to extract aspect categories based on a Convolutional Neural Network (CNN) is presented. The model shows convincing performance on the proposed datasets compared to conventional classifiers.

Keywords: Bangla Aspect Based Sentiment Analysis, Bangla Dataset of ABSA, Aspect Extraction in Bangla.

I. INTRODUCTION

People rely on human judgment more than conventional advertising. For example, customers are used to asking for recommendations and suggestions from others before important purchase decisions. Word of Mouth (WOM) has always been important for customers making such decisions. On the other hand, WOM carries great significance for providers: it has a stronger effect on new customer acquisition than traditional forms of marketing [1].

Sharing experiences has become very frequent these days with the help of the internet. Social media like Twitter and Facebook have made it easy to exchange judgments about a product, service, or brand; this extended form of word of mouth is Electronic Word of Mouth (eWOM). For example, e-commerce sites like Amazon and Alibaba host a large number of reviews of their products and services shared by consumers, and these reviews are more admissible to consumers than the various forms of marketing information [2]. On the other hand, companies are eager to mine all these activities and interactions to understand how the majority feels about a particular brand or product, which helps them develop their business strategy in this competitive world.

Our involvement with social media is growing rapidly day by day, and we share our views and opinions on every aspect of life. Therefore, it is necessary to automatically analyze all these data to produce useful information that helps both companies and consumers. Aspect based sentiment analysis is a major technique for obtaining such information.

Sentiment analysis, also known as opinion mining, is a process to determine whether an expression is favorable, unfavorable, or neutral. Sentiment analysis operates at three levels [3]: the document level, the sentence level, and the aspect level. The document level analyzes a piece of text and determines whether the text as a whole carries a positive or negative sentiment. The sentence level identifies the polarity of each sentence. These two levels of analysis do not reveal what exactly people liked and did not like.</s>
|
<s>The aspect level, known as Aspect Based Sentiment Analysis (ABSA), identifies the aspects of a given document and the sentiment expressed towards each aspect. ABSA is the most detailed version of sentiment analysis, discovering the desired information from a document.

There are two major tasks in aspect based sentiment analysis: 1) extract the particular aspects mentioned in a given review, and 2) classify the polarity towards every aspect as positive, negative, or neutral. For example, consider this restaurant review: "The place was relaxed and stylish but the food was not good." The review reveals two aspects, "ambience" and "food"; the "ambience" aspect category carries "positive" sentiment and the "food" aspect category carries "negative" sentiment. Here the aspect categories are mentioned explicitly. People can also share their opinion implicitly, as in "All the money went into the interior decoration, none of it went to the chefs.", which carries the same aspects, "ambience" and "food", without mentioning them directly.

In the NLP domain, SemEval (Semantic Evaluation) is a reputed workshop. It introduced a dataset [4] for the ABSA task in English, and later extended this work by adding more domains and several languages. To support the ABSA task, datasets in other languages such as French [5], Czech [6], and Arabic [7] have been created.

Previous works [8], [9], [10] attempt to detect aspects in the opinion mining task; most of them use Latent Dirichlet Allocation (LDA) for topic modeling. [11] presents the first deep learning approach for aspect extraction, in which a deep convolutional neural network (CNN) is applied. In the Bangla language, sentiment analysis [12], [13], [14] has been performed to identify the polarity (positive or negative) of Bangla text.

Aiming to work with ABSA in Bangla, the contributions of this paper are the following:
• We propose two Bangla datasets in the field of Aspect Based Sentiment Analysis (ABSA). These datasets were collected and annotated manually.
• We present a model to extract aspect categories that shows more convincing performance than other popular machine learning models.

One of the presented datasets is collected from Facebook on the topic of cricket. The other dataset, on restaurants, is derived from the English benchmark dataset [4] by abstract translation. The collection and annotation processes of the datasets are described in Section IV. These datasets are publicly available at https://github.com/AtikRahman/Bangla_Datasets_ABSA. Using these datasets, we present a CNN model and perform aspect category extraction, which is a sub-task of ABSA.

The structure of the paper is as follows. Section II discusses related work in the field of ABSA. We present our methodology in Section III. In Section IV, we present the experimental results and discuss the dataset collection and annotation process. Finally, the conclusion is presented in Section V.</s>
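To make the two ABSA sub-tasks concrete, the following minimal Python sketch shows how a single review and its multi-label annotation can be represented; the review text and labels here are hypothetical illustrations, not entries from the proposed datasets.

    # Hypothetical illustration of the ABSA labeling scheme described above.
    review = "The place was relaxed and stylish but the food was not good."

    # Sub-task 1: the aspect categories mentioned in the review.
    # Sub-task 2: the polarity expressed towards each aspect.
    labels = {
        "ambience": "positive",  # "relaxed and stylish"
        "food": "negative",      # "not good"
    }

    for aspect, polarity in labels.items():
        print(f"{aspect}: {polarity}")

Note that one review can carry several aspect-polarity pairs, which is why aspect extraction is later treated as a multi-label classification problem.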
|
<s>II. RELATED WORK

To improve rating predictions, [15] provides a restaurant review dataset that introduces aspect categories. They categorize a review into six aspects and an overall polarity. They did not prepare a complete ABSA dataset, since the aspect category is present but the polarity for each identified aspect category is absent. For example, a review like "Burger was appetizing but a little expensive." has two aspect categories, "food" and "price", yet they annotate the polarity of the overall review, which is "positive".

The SemEval 2014 evaluation campaign [4] extends their dataset by adding three more fields along with the aspect category. Datasets in several languages were published in the SemEval 2016 workshop [16]: English, French, Russian, Arabic, Turkish, Dutch, Spanish, and Chinese. Different domains were also introduced, such as mobile phone, restaurant, digital camera, laptop, hotel, museum, and telecommunication. In [6], an IT product review dataset containing 2200 reviews in total is created in the Czech language for the ABSA task. In the Arabic language, a dataset of book reviews is provided by [7], classifying book reviews into 14 categories and 4 types of polarity.

Common approaches to ABSA include topic modeling, where Latent Dirichlet Allocation (LDA) is the most popular method for discovering aspects. A weakly supervised topic modeling approach is proposed in [17]. It uses word co-occurrence information to capture latent topics in the corpus, and four different topic models are introduced, of which local LDA gives the highest accuracy. Sentence-LDA [18], a probabilistic generative model, assumes that all words in a single sentence are generated from one aspect, which is a limitation of that work. Recently, the common-sense knowledge base SenticNet [19] has been incorporated into LDA to improve the performance of aspect extraction [20].

Association rule mining is the major technique in [21], which utilizes the co-occurrence frequency of words. They propose both a supervised and an unsupervised method based on co-occurrence frequencies. An unsupervised method for opinion aspect extraction using double propagation (DP) is presented in [22]. Double propagation provides recommendations based on aspect similarity and aspect association; semantic similarity uses word vectors for similarity comparison, which incorporates synonymous aspects into DP.

Recently, a new dimension named "target" was integrated into ABSA in [23]. Given a target in the document, it detects the aspects and classifies the sentiment polarity. This work is extended in [24], which uses SenticNet as commonsense knowledge to improve accuracy and proposes "Sentic LSTM", an extension of LSTM that leverages SenticNet efficiently.

A convolutional neural network (CNN) is successfully used for text classification in [25], which proposes a new CNN architecture for sentence classification and provides a series of experiments with pretrained word vectors. [26] applies a similar CNN model to ABSA, performing aspect category identification, extraction of opinion target expressions, and polarity identification. [11] includes part-of-speech tags with the word vectors in the word embedding and uses a CNN to extract aspects; in their experiments, a set of linguistic patterns yields a slight improvement in accuracy. Recurrent neural networks (RNNs) have also been successfully applied to aspect identification: several important RNN architectures are experimented with in [27], initialized with popular pretrained word vectors.

In the Bangla language, many researchers perform only sentiment analysis of Bangla text. [12] identifies the overall polarity of Bangla microblog posts as either positive or negative; a semi-supervised bootstrapping approach is applied to develop a training corpus, and Support Vector Machine (SVM) and Maximum Entropy (MaxEnt) classifiers are used to determine the polarity. [13] detects polarity (positive, negative, or neutral) using contextual valence analysis, using WordNet to get the senses of each word according to its part of speech and SentiWordNet to get the prior valence (i.e., polarity) of each Bangla word.</s>
|
<s>A dataset of Bangla text is proposed in [28] for the sentiment analysis task, on which a Long Short-Term Memory (LSTM) deep recurrent model is applied. Only 850 Bangla comments are collected in [29], where a Convolutional Neural Network (CNN) is used to classify the comments as carrying either positive or negative sentiment.

III. METHODOLOGY

The architecture of our CNN model for aspect extraction is shown in Figure 1. The network consists of a single convolutional layer followed by a non-linearity, max-pooling, and finally a fully connected output layer. In the following, we provide a concise interpretation of the major components of our network: the review matrix, the convolutional layer, the pooling layer, and the output layer. We also describe regularization to prevent overfitting.

[Figure 1. The CNN architecture for aspect extraction from Bangla reviews.]

A. Review matrix

Each review is treated as a sequence of words, where each word is represented by a vector of fixed size. These vectors are initialized randomly. Let a word x_i be initialized randomly as a q-dimensional vector. If n is the maximum number of words in a review, a review (padded to length n if needed) is represented as

    x_{1:n} = x_1 ⊕ x_2 ⊕ ··· ⊕ x_n    (1)

where ⊕ is the concatenation operator. This produces a matrix for one review. For each review we build a review matrix R ∈ R^{n×q}, where row i holds the word embedding x_i of the i-th word in the review. To extract features of individual words from a given review, the neural network applies transformations to the input review matrix R using the convolution and pooling operations explained next.

B. Convolution

The convolutional operation between an input matrix R and a filter W ∈ R^{p×q} with a window of p words results in a vector C ∈ R^{n−p+1}, where each component is computed as follows:

    C_i = f(R_{i:i+p−1} ∘ W + b)    (2)

Here ∘ is element-wise multiplication, b ∈ R is a bias term, and f is a non-linear function such as the rectified linear unit (ReLU). As shown in Figure 1, an element-wise multiplication between a row slice of R and the filter matrix W is performed and then summed to a single value, which is the outcome of one component C_i. Note that the convolution filter has the same dimensionality q as the word vectors, so it captures entire word vectors of the input matrix. One feature map M can be represented as

    M = [C_1, C_2, ··· , C_{n−p+1}]    (3)

The above procedure constructs only one feature map. To build a richer representation of the input, a set of filters with varying window sizes is used, producing a stack of feature maps (Figure 1). A non-linear activation function is applied in each convolutional layer; we use ReLU in our model.

C. Pooling

The output produced by the convolutional layer proceeds to the pooling layer. This layer aggregates the information and reduces the dimensionality of the feature maps. The result of the pooling operation is

    M_pool = [M_1, M_2, ··· , M_j]    (4)

where j is the total number of filters used in the convolutional layer. The most popular reduction methods are max pooling and average pooling. Max pooling has demonstrated faster convergence and better performance compared to average pooling; it is used in our model and returns the maximum value of every feature map.

D. Output layer

The eventual features, produced by the penultimate pooling layer, are passed to a fully connected layer that generates an output for each aspect. We determine a threshold f and choose all aspects whose predicted value exceeds the threshold. We use the binary cross-entropy loss, where the target y is defined as y_i = 1 when the review has aspect i and y_i = 0 otherwise.

Aspect extraction is a multi-label classification problem: one review might carry multiple aspects. Previous work [26] uses softmax as the activation function in the output layer. With softmax, when the score for one class increases, all others decrease, since it forms a probability distribution, and a threshold is then hard to find. We instead use the sigmoid activation function in our output layer. This non-linear function computes the probability of each aspect independently, between 0 and 1, which resolves the threshold-finding complication.

E. Regularization

Deep neural networks contain multiple non-linear hidden layers that help them learn very complicated relationships between inputs and outputs. Overfitting happens when these networks learn the limited training data too well, which negatively impacts performance on test data. To mitigate the overfitting issue, we employ dropout [30] on the penultimate layer. Dropout prevents units from co-adapting too much by setting a portion of the hidden units to zero.</s>
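For concreteness, here is a minimal sketch of the architecture described in Section III, written with tensorflow.keras. The vocabulary size, review length, embedding dimension, filter window sizes, filter counts, and dropout rate below are illustrative assumptions, not the paper's reported settings.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    vocab_size = 20000   # assumed vocabulary size
    n = 100              # maximum review length (padded)
    q = 128              # word-vector dimensionality
    num_aspects = 5      # e.g., bowling, batting, team, team management, other

    inputs = layers.Input(shape=(n,), dtype="int32")
    # Randomly initialized word vectors; Eq. (1) builds the n x q review matrix.
    x = layers.Embedding(vocab_size, q)(inputs)

    # One convolution per window size p; each filter spans whole word vectors (Eqs. 2-3).
    pooled = []
    for p in (3, 4, 5):
        c = layers.Conv1D(filters=100, kernel_size=p, activation="relu")(x)
        # Max pooling keeps the largest value of each feature map (Eq. 4).
        pooled.append(layers.GlobalMaxPooling1D()(c))
    m = layers.Concatenate()(pooled)

    # Dropout on the penultimate layer to mitigate overfitting (Section III-E).
    m = layers.Dropout(0.5)(m)
    # Sigmoid output: an independent probability per aspect (multi-label, Section III-D).
    outputs = layers.Dense(num_aspects, activation="sigmoid")(m)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")

At prediction time, every aspect whose sigmoid output exceeds the chosen threshold f is selected, matching the multi-label decision rule of Section III-D.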
|
<s>IV. EXPERIMENTS

This section is divided into two parts: data collection and result discussion. Data collection covers the process of collecting and annotating our proposed datasets; the results of our model on these datasets are then presented and discussed.

A. Data Collection

We created two datasets from two different domains, named the cricket dataset and the restaurant dataset. Nowadays cricket is the most popular game in Bangladesh, and people share their opinions in Bangla on cricket more than on other issues, so we chose to collect opinions from the cricket domain. On the other hand, the English restaurant dataset is the benchmark dataset used by almost all researchers in the field of aspect based sentiment analysis. The collection and annotation processes of the datasets are presented in the following.

1) Cricket Dataset: We collected Bangla user comments manually on the topic of cricket from two popular Facebook pages (https://www.facebook.com/BBCBengaliService and https://www.facebook.com/DailyProthomAlo). People usually comment in Bangla under cricket-related posts, and only Bangla comments were collected from those posts. However, English comments and Bangla sentences written in the English alphabet were also found, and some comments contain only emoticons; these kinds of comments were not considered for our dataset. We applied Zipf's law [31] to the cricket dataset; Figure 2 shows that our cricket dataset follows Zipf's law.

[Figure 2. Word frequency of the Bangla cricket dataset under Zipf's law.]
[Figure 3. A portion of the cricket dataset.]

After completing the collection process, the dataset was annotated individually by the authors and a group of undergraduate students from the IIT at the University of Dhaka. Five aspect categories were selected: bowling, batting, team, team management, and other. The polarity is divided into three classes, i.e., positive, negative, and neutral.</s>
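The Zipf's-law check mentioned above can be sketched as follows: count token frequencies over the corpus and plot frequency against rank on log-log axes, where Zipf's law predicts an approximately straight line. The tokenized comments below are placeholders; the paper does not publish this checking code.

    from collections import Counter
    import matplotlib.pyplot as plt

    # Placeholder tokenized comments standing in for the cricket dataset.
    comments = [["khela", "valo"], ["khela", "kharap"]]

    counts = Counter(token for comment in comments for token in comment)
    freqs = sorted(counts.values(), reverse=True)
    ranks = range(1, len(freqs) + 1)

    plt.loglog(ranks, freqs)  # Zipf's law: frequency roughly proportional to 1/rank
    plt.xlabel("rank")
    plt.ylabel("frequency")
    plt.show()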
|
<s>Every comment was annotated by each participant, and majority voting determined the final aspect category and polarity of a comment. A part of the cricket dataset is given in Figure 3. The summary of the cricket dataset is presented in Table I.

TABLE I. THE SUMMARY OF THE CRICKET DATASET

Aspect Category   Positive  Negative  Neutral  Total
Bowling             150       144       33      327
Batting             136       385       55      576
Team                165       490       66      721
Team Management      24       290       15      329
Other                89       820       96     1005
Total Comments                                 2958

2) Restaurant Dataset: We created the Bangla restaurant dataset from the English benchmark dataset [4] by abstract translation. The same participants were engaged to translate this dataset, and the same annotation process was used, with five aspect categories: food, price, service, ambiance, and miscellaneous. The original dataset has four types of polarity, i.e., positive, negative, neutral, and conflict; we merged the conflict category into neutral in our translated Bangla dataset. A part of the restaurant dataset is given in Figure 4, and the summary of the restaurant dataset is presented in Table II. Both the cricket and restaurant datasets are provided in xlsx file format.

[Figure 4. A portion of the restaurant dataset.]

TABLE II. THE SUMMARY OF THE RESTAURANT DATASET

Aspect Category   Positive  Negative  Neutral  Total
Food                495       125       87      707
Price                98        60       16      174
Ambiance            135        53       43      231
Service             185       115       32      332
Miscellaneous       298       118      193      609
Total Reviews                                  2053

Some popular machine learning models were applied for comparison with our proposed model. After removing punctuation and stop words, a TF-IDF (Term Frequency - Inverse Document Frequency) feature matrix was created to train the following models (a sketch of this baseline pipeline is given below):
1) Support Vector Machine (SVM)
2) Random Forest (RF)
3) K-Nearest Neighbor (KNN)

TABLE III. EXPERIMENTAL RESULTS ON OUR DATASETS

Dataset     Model         Precision  Recall  F1-score
Cricket     Proposed-CNN    0.54      0.48     0.51
            SVM             0.71      0.22     0.34
            RF              0.60      0.27     0.37
            KNN             0.45      0.21     0.35
Restaurant  Proposed-CNN    0.67      0.61     0.64
            SVM             0.77      0.30     0.38
            RF              0.69      0.31     0.38
            KNN             0.54      0.34     0.42

B. Result and Discussion

Table III shows the experimental results on our created datasets using the proposed CNN model along with the other conventional approaches. Our model shows significant recall and F1 scores for both datasets. Though the precision rate is higher for SVM, the proposed CNN shows the highest recall rate by a big margin on both datasets; the higher recall indicates that our model identifies more of the true aspect categories than the other approaches. It is clear from Table III that in most cases precision and recall diverge, so we also report the F1 score, the harmonic mean of precision and recall. The proposed CNN achieved the highest F1 score on both datasets: 51% on the cricket dataset and 64% on the restaurant dataset.

Figure 5 shows the overall accuracy on both datasets. The proposed CNN model shows a significant level of accuracy for both datasets. On the cricket dataset we obtained 81% accuracy, whereas classification using SVM, RF, and KNN yields only 19%, 25%, and 22%, respectively. Likewise, the restaurant dataset shows 83% accuracy using the proposed CNN, whereas SVM, RF, and KNN yield only 29%, 30%, and 32% accuracy, respectively.</s>
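As referenced in the model list above, the baseline pipeline can be sketched as follows: a TF-IDF matrix over the preprocessed comments feeds the conventional classifiers. The texts, labels, and classifier variants (e.g., LinearSVC standing in for the SVM) are illustrative assumptions; the paper does not specify its exact configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder comments and single aspect labels standing in for the datasets.
    texts = ["batting was poor today", "the whole team played well"]
    labels = ["batting", "team"]

    # TF-IDF feature matrix, built after punctuation/stop-word removal (omitted here).
    X = TfidfVectorizer().fit_transform(texts)

    for clf in (LinearSVC(), RandomForestClassifier(), KNeighborsClassifier()):
        clf.fit(X, labels)  # in practice: train/test split and per-class precision/recall/F1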
|
<s>So, in terms of accuracy, we can say that using a Convolutional Neural Network for aspect extraction is the best option among the tested models for these two proposed Bangla datasets.

[Figure 5. The comparison using accuracy measurement.]

We can also see from the results that the performance of all models is rather low on both datasets. Different people think differently and share their opinions from numerous perspectives, so the sheer diversity of opinion is a likely reason for the lower performance. Moreover, one review or comment might have multiple aspect categories, some of which are missed by the conventional classifiers.

Our cricket dataset is collected from user comments on Facebook pages. Under cricket-related posts, some users comment on matters outside the cricket domain, e.g., politics or the personal lives of cricket players. Such comments cannot be categorized properly within the five selected aspect categories; they are included in our dataset under the "other" aspect category, which may reduce the quality of the dataset.

V. CONCLUSION AND FUTURE WORK

We provided two Bangla datasets in the field of Aspect Based Sentiment Analysis (ABSA). The first, the cricket dataset, was created from user comments on cricket-related posts on Facebook pages. The second consists of restaurant reviews derived from the English benchmark dataset. These datasets are intended to support two tasks: extraction of aspect categories and identification of polarity.

We proposed a model for aspect category extraction based on a CNN architecture, initializing the word-vector matrix for the convolutional layer with random numbers. We compared our model with popular machine learning approaches on our proposed datasets, and the experimental results show convincing performance compared to the other models.

Using pretrained word vectors instead of random initialization might enhance the performance of our model; we are working on Bangla word embeddings for better initialization of the CNN model. As future work, we aim to connect sentiments with the corresponding aspects to complete the objective of Aspect Based Sentiment Analysis.

ACKNOWLEDGEMENT

This research is supported by a fellowship from the ICT Division, Ministry of Posts, Telecommunications and Information Technology, Bangladesh, No. 56.00.0000.028.33.094.18-168, dated 03-05-2018. The authors are grateful to all the participants who supported collecting, annotating, and/or translating the datasets.

REFERENCES
[1] M. Trusov, R. E. Bucklin, and K. Pauwels, "Effects of word-of-mouth versus traditional marketing: findings from an internet social networking site," Journal of Marketing, vol. 73, no. 5, pp. 90-102, 2009.
[2] S.-J. Doh and J.-S. Hwang, "How consumers evaluate eWOM (electronic word-of-mouth) messages," CyberPsychology & Behavior, vol. 12, no. 2, pp. 193-197, 2009.
[3] A. Jeyapriya and C. K. Selvi, "Extracting aspects and mining opinions in product reviews using supervised learning algorithm," in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on. IEEE, 2015, pp. 548-552.
[4] M. Pontiki, D. Galanis, J. Pavlopoulos, H. Papageorgiou, I. Androutsopoulos, and S. Manandhar, "SemEval-2014 task 4: Aspect based sentiment analysis," in Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pp. 27-35. [Online]. Available: http://aclweb.org/anthology/S/S14/S14-2004.pdf</s>
|
<s>[5] M. Apidianaki, X. Tannier, and C. Richart, "Datasets for aspect-based sentiment analysis in French," in Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016, Portorož, Slovenia, May 23-28, 2016. [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2016/summaries/61.html
[6] A. Tamchyna, O. Fiala, and K. Veselovská, "Czech aspect-based sentiment analysis: A new dataset and preliminary results," in Proceedings ITAT 2015: Information Technologies - Applications and Theory, Slovensky Raj, Slovakia, September 17-21, 2015, pp. 95-99. [Online]. Available: http://ceur-ws.org/Vol-1422/95.pdf
[7] M. Al-Smadi, O. Qawasmeh, B. Talafha, and M. Quwaider, "Human annotated Arabic dataset of book reviews for aspect based sentiment analysis," in 3rd International Conference on Future Internet of Things and Cloud, FiCloud 2015, Rome, Italy, August 24-26, 2015, pp. 726-730. [Online]. Available: https://doi.org/10.1109/FiCloud.2015.62
[8] S. Moghaddam and M. Ester, "On the design of LDA models for aspect-based opinion mining," in Proceedings of the 21st ACM International Conference on Information and Knowledge Management. ACM, 2012, pp. 803-812.
[9] S. Kiritchenko, X. Zhu, C. Cherry, and S. Mohammad, "NRC-Canada-2014: Detecting aspects and sentiment in customer reviews," in Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), 2014, pp. 437-442.
[10] T. Brychcín, M. Konkol, and J. Steinberger, "UWB: Machine learning approach to aspect-based sentiment analysis," in Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), 2014, pp. 817-822.
[11] S. Poria, E. Cambria, and A. Gelbukh, "Aspect extraction for opinion mining with a deep convolutional neural network," Knowledge-Based Systems, vol. 108, pp. 42-49, 2016.
[12] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in Bangla microblog posts," in Informatics, Electronics & Vision (ICIEV), 2014 International Conference on. IEEE, 2014, pp. 1-6.
[13] K. A. Hasan, M. Rahman et al., "Sentiment detection from Bangla text using contextual valency analysis," in Computer and Information Technology (ICCIT), 2014 17th International Conference on. IEEE, 2014, pp. 292-295.
[14] M. A. Rahman and E. Kumar Dey, "Datasets for aspect-based sentiment analysis in Bangla and its baseline evaluation," Data, vol. 3, no. 2, p. 15, 2018.
[15] G. Ganu, N. Elhadad, and A. Marian, "Beyond the stars: Improving rating predictions using review text content," in 12th International Workshop on the Web and Databases, WebDB 2009, Providence, Rhode Island, USA, June 28, 2009. [Online]. Available: http://webdb09.cse.buffalo.edu/papers/Paper9/WebDB.pdf
[16] M. Pontiki, D. Galanis, H. Papageorgiou, I. Androutsopoulos, S. Manandhar, M. Al-Smadi, M. Al-Ayyoub, Y. Zhao, B. Qin, O. D. Clercq, V. Hoste, M. Apidianaki, X. Tannier, N. V. Loukachevitch, E. Kotelnikov, N. Bel, S. M. J. Zafra, and G. Eryigit, "SemEval-2016 task 5: Aspect based sentiment analysis," in Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pp. 19-30. [Online]. Available: http://aclweb.org/anthology/S/S16/S16-1002.pdf
[17] B. Lu, M. Ott, C. Cardie, and B. K. Tsou, "Multi-aspect sentiment analysis with topic models," in Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on. IEEE, 2011, pp. 81-88.
[18] Y. Jo and A. H. Oh, "Aspect and sentiment unification model for online review analysis," in Proceedings of the Fourth ACM International Conference on Web Search and Data Mining. ACM, 2011, pp. 815-824.
[19] E. Cambria, S. Poria, R. Bajpai, and B. Schuller, "SenticNet 4: A semantic resource for sentiment analysis based on conceptual primitives," in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, 2016, pp. 2666-2677.</s>
|
<s>[20] S. Poria, I. Chaturvedi, E. Cambria, and F. Bisio, "Sentic LDA: Improving on LDA with semantic similarity for aspect-based sentiment analysis," in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 4465-4473.
[21] K. Schouten, O. van der Weijde, F. Frasincar, and R. Dekker, "Supervised and unsupervised aspect category detection for sentiment analysis with co-occurrence data," IEEE Transactions on Cybernetics, 2017.
[22] Q. Liu, B. Liu, Y. Zhang, D. S. Kim, and Z. Gao, "Improving opinion aspect extraction using semantic similarity and aspect associations," in AAAI, 2016, pp. 2986-2992.
[23] M. Saeidi, G. Bouchard, M. Liakata, and S. Riedel, "SentiHood: targeted aspect based sentiment analysis dataset for urban neighbourhoods," arXiv preprint arXiv:1610.03771, 2016.
[24] Y. Ma, H. Peng, and E. Cambria, "Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM," in AAAI, 2018.
[25] Y. Kim, "Convolutional neural networks for sentence classification," arXiv preprint arXiv:1408.5882, 2014.
[26] L. Xu, J. Lin, L. Wang, C. Yin, and J. Wang, "Deep convolutional neural network based approach for aspect-based sentiment analysis," Adv Sci Technol Lett, vol. 143, pp. 199-204, 2017.
[27] P. Liu, S. Joty, and H. Meng, "Fine-grained opinion mining with recurrent neural networks and word embeddings," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 1433-1443.
[28] A. Hassan, N. Mohammed, and A. K. A. Azad, "Sentiment analysis on Bangla and romanized Bangla text (BRBT) using deep recurrent models," CoRR, vol. abs/1610.00369, 2016. [Online]. Available: http://arxiv.org/abs/1610.00369
[29] M. H. Alam, M.-M. Rahoman, and M. A. K. Azad, "Sentiment analysis for Bangla sentences using convolutional neural network," in Computer and Information Technology (ICCIT), 2017 20th International Conference of. IEEE, 2017, pp. 1-6.
[30] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[31] A. Pak and P. Paroubek, "Twitter as a corpus for sentiment analysis and opinion mining," in LREC, vol. 10, 2010.</s>
|
<s>
0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Daffodil_International_University?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Habib6?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Rahman489?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Rahman489?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Daffodil_International_University?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Rahman489?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Shaon_Shuvo?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Shaon_Shuvo?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Daffodil_International_University?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Shaon_Shuvo?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Riazur_Rahman?enrichId=rgreq-1db9637864059989b9c32152b624b9c4-XXX&enrichSource=Y292ZXJQYWdlOzMxMTY5MzcwNjtBUzo0NDAxNTYzNDkwNDY3ODRAMTQ4MTk1Mjg1OTgyNw%3D%3D&el=1_x_10&_esc=publicationCoverPdfIJCSNS International Journal of Computer Science and Network Security, VOL.16 No.11, November 2016 Manuscript received November 5, 2016 Manuscript revised November 20, 2016 An Investigative Design Based Statistical Approach for Determining Bangla Sentence Validity Md. Riazur Rahmanโ , Md. Tarek Habibโ , Md. Sadekur Rahmanโ , Shaon Bhatta Shuvoโ , Mohammad Shorif Uddinโ โ โ Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh โ โ Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh Summary Automatic grammatical verification of sentences is an essential task in natural language processing. There has been a scarcity of resources in Bangla for such tasks. To address this issue this paper presents a new n-gram based statistical approach to check the syntactic and semantic correctness of sentences in Bangla. 
The proposed method combines standard n-gram statistics with appropriate smoothing and an advanced backoff language model in an n-gram frequency-count-based probabilistic language model that detects the validity of any Bangla sentence. A new Bangla corpus of 10 million words is used to train the proposed method. The system was tested on both valid and invalid sentences collected separately from the training corpus. In detecting correct and incorrect sentences, the proposed system achieved 82% precision and 81% recall, outperforming the existing systems.
Key words: Sentence validity detection; natural language processing; n-gram; smoothing; backoff strategy; language model.

1. Introduction
Identifying the grammatical correctness of a sentence is an emerging research area in natural language processing (NLP). Checking the validity of a text is the task of determining whether the text in question is proper with respect to the grammatical regulations of the respective language. An automated system that can judge the correctness of a given text is very useful in many applications such as word processors, compilers, text messaging systems, and computer-aided language learning systems. Three methodologies are widely used for grammar checking: syntax-based checking, rule-based checking, and statistics-based checking. The syntax-based method [1] works by building a parse tree or table for each given sentence; the sentence is deemed valid if the parsing process succeeds, and otherwise it is marked as invalid. In the rule-based method [2], a set of manually developed grammatical rules is used to determine the correctness of the given text. By contrast, in the statistics-based approach [3], a statistical language model (SLM) is built from a text corpus of the
target language that can estimate the distribution of the language as accurately as possible. An SLM is a probability distribution P(S) over strings S that attempts to reflect how frequently a string S occurs as a sentence. The target text is regarded as invalid if its SLM probability score falls below some threshold. Although many grammar-checking tools have been developed in recent years, such as Grammarly, WhiteSmoke, and CorrectEnglishComplete [4], there is still considerable scope to improve the performance of grammar-checking systems. In the last few years, much effort has been devoted to detecting correct and incorrect sentences in languages such as English, French, and Chinese [5]. Yet although Bangla ranks seventh among the most spoken languages in the world, with more than 250 million native speakers, there is surprisingly little resource for Bangla sentence error detection [6, 7].
To address these issues, this work proposes a new statistical method that uses an n-gram-based language model combined with Witten-Bell smoothing and a backoff language-modeling strategy [8, 9] to decide the validity of a Bangla sentence. The presented technique was trained on a large Bangla corpus of 10 million words collected from sources such as online newspapers, blogs, and literature. A strategy was developed to determine an appropriate threshold for distinguishing valid from invalid sentences; the threshold was finalized by performing 5-fold cross-validation [9] on the training set. The proposed method was tested on a test set of 10000 valid and 10000 invalid sentences, outperforming the existing systems with 82% precision and 81% recall.
The rest of the paper is organized as follows: Section 2 reviews previous work on Bangla grammar checking, Section 3 provides theoretical background on n-gram-based sentence probability calculation, Section 4 describes the methodology used for developing the system, Section 5 presents the experimental results, and Section 6 concludes the paper.

2. Related works
There has been very little development in grammar checking for the Bangla language. The authors in [10] proposed a context-free-grammar-based predictive parser to recognize the grammaticality of Bangla sentences. In [11], the authors presented an n-gram-based statistical grammar-checking system for Bangla that used n-gram frequency-based probability analysis of the parts-of-speech (POS) tags of words to decide whether a sentence is grammatically correct. Their method suffers from the zero-frequency problem [8], which severely degrades the performance of the system. Moreover, because they used only POS-tag information, their method captures only the syntactic structure of a sentence and misses the semantic information. They used a very small corpus of only 5000 words to build the n-gram model. Using this model they reported
a moderate success rate for detecting correct sentences only, on a very small test set of 378 sentences. In a recent work [12], another n-gram-based statistical method was proposed. Rather than using the frequencies of the POS tags of words, the authors used n-gram frequency-based probability analysis of the words themselves to train and test their system. To resolve the zero-frequency problem of n-gram models, they used Witten-Bell discounting [8] with their n-gram model. They trained their statistical n-gram model on a small experimental corpus of 1 million words with a test set of 1000 correct and 1000 incorrect sentences. However, the authors used a manually selected, predefined threshold to separate valid from invalid sentences, which is not practical if the method is trained and tested on different data sets. As discussed above, there is clearly no comprehensive and reliable grammar checker available for Bangla yet, which motivated us to develop a robust sentence-grammaticality detection method for the Bangla language.

3. N-gram based Sentence Probability Calculation
3.1 N-Grams
N-grams [13] are used extensively in text mining and natural language processing tasks. An n-gram is the pair of a word sequence $w_{i-n+1} \dots w_i$ containing $n$ words and its associated count, based on the occurrences of the sequence in a corpus. More concisely, an n-gram model predicts the probability of a word $w_i$ given the preceding word sequence $w_{i-n+1} \dots w_{i-1}$; in probability terms, this is written as $P(w_i \mid w_{i-n+1} \dots w_{i-1})$. When used for language modeling, independence assumptions, also known as Markov assumptions [8], are made so that each word depends only on the last $n-1$ words. This probability can be calculated as

$$P(w_i \mid w_{i-n+1} \dots w_{i-1}) = \frac{C(w_{i-n+1} \dots w_i)}{\sum_{w} C(w_{i-n+1} \dots w_{i-1}\, w)} \quad (1)$$

where $C(w_{i-n+1} \dots w_i)$ is the count of occurrences of the word sequence $w_{i-n+1} \dots w_i$ and $\sum_{w} C(w_{i-n+1} \dots w_{i-1}\, w)$ is the sum of the counts of all n-grams that start with $w_{i-n+1} \dots w_{i-1}$. When n = 1 the model is called a unigram or 1-gram; for n = 2 it is known as a bigram or 2-gram; n-grams with n = 3 are called trigrams; with n = 4 they are termed quadrigrams or 4-grams; and all higher-order models are simply termed n-grams, e.g., for n = 5 the model is known as a 5-gram.

3.2 Sentence Probability Calculation using N-grams
To calculate the probability of a sentence using an n-gram language model, the probability of each n-gram of words in the sentence is first calculated. These n-gram probabilities are then multiplied to find the sentence probability. The higher the probability of a sentence, the higher the chance that it is a properly formed sentence of the target language. For a sentence $S = (w_1 w_2 w_3 \dots w_N)$ with $N$ words separated by blank space, the probability of $S$ can be computed as

$$P(S) = \prod_{i=1}^{N} P(w_i \mid w_{i-n+1} \dots w_{i-1}) \quad (2)$$

These probabilities are normalized to lie within the range of 0 to 1.
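To make equations (1) and (2) concrete, the following is a minimal Python sketch (our illustration, not the authors' implementation) that estimates bigram probabilities by maximum likelihood from a toy corpus and scores a sentence as a product of bigram probabilities; the whitespace tokenizer, the sentence-boundary markers, and the Romanized toy corpus are illustrative assumptions.

    from collections import Counter

    def train_bigram_counts(sentences):
        # Count unigrams and bigrams from whitespace-tokenized sentences.
        unigrams, bigrams = Counter(), Counter()
        for s in sentences:
            words = ["<s>"] + s.split() + ["</s>"]
            unigrams.update(words)
            bigrams.update(zip(words, words[1:]))
        return unigrams, bigrams

    def mle_prob(w_prev, w, unigrams, bigrams):
        # Equation (1) for n = 2: P(w | w_prev) = C(w_prev w) / C(w_prev).
        if unigrams[w_prev] == 0:
            return 0.0
        return bigrams[(w_prev, w)] / unigrams[w_prev]

    def sentence_prob(sentence, unigrams, bigrams):
        # Equation (2): product of the bigram probabilities of the sentence.
        words = ["<s>"] + sentence.split() + ["</s>"]
        p = 1.0
        for w_prev, w in zip(words, words[1:]):
            p *= mle_prob(w_prev, w, unigrams, bigrams)
        return p

    corpus = ["ami bhat khai", "ami school jai"]    # toy Romanized corpus
    uni, bi = train_bigram_counts(corpus)
    print(sentence_prob("ami bhat khai", uni, bi))  # non-zero: all bigrams seen
    print(sentence_prob("bhat ami khai", uni, bi))  # 0.0: zero-frequency problem

The second sentence scores zero merely because its bigrams never occurred in the toy corpus, which is exactly the zero-frequency problem addressed by the discounting methods discussed next.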
3.3 Zero Frequency Problem & Discounting
No matter how large a training corpus is, it cannot cover a natural language entirely. There will always be some perfectly acceptable word sequences that are missing from the corpus. This means there will be many acceptable n-grams with zero frequency that should have some non-zero probability. These zero-frequency word sequences, which never occur in the training set but do occur in the test sets, pose serious problems. First, they indicate an underestimation of all sorts of word sequences that may appear. Second, since the probability of a sentence is calculated by multiplying the probabilities of its n-grams, if any n-gram has zero probability the entire sentence receives zero probability, so correct sentences can be miscalculated as having zero probability. To keep a language model from assigning zero probabilities to unseen words or contexts, a small portion of probability mass is taken from the more frequent words or word sequences and distributed to unseen events, i.e., in this case unknown words or contexts. This process is known as discounting. Several discounting algorithms are available, such as Add-One discounting, Witten-Bell discounting, and Good-Turing discounting [10, 12]. Witten-Bell discounting is chosen in this work due to its simplicity and robustness; it is discussed in detail next.

3.4 Witten-Bell Discounting
Witten-Bell (WB) discounting uses the counts of events occurring at least once to estimate the counts of events that never occurred. To compute the count of all n-grams seen in the corpus at least once, one needs to count the number of n-gram types, since each unique n-gram is present at least once in the training corpus. WB discounting takes some of the probability mass from n-grams that are seen at least once and distributes it among the n-grams never seen in the training data, so that no n-gram has zero probability. The total probability mass discounted to all the zero-count n-grams is

$$\delta(w_{i-n+1} \dots w_{i-1}) = \frac{T(w_{i-n+1} \dots w_{i-1})}{N(w_{i-n+1} \dots w_{i-1}) + T(w_{i-n+1} \dots w_{i-1})} \quad (3)$$

where $\delta(w_{i-n+1} \dots w_{i-1})$ is the total probability mass discounted to the zero-count n-grams with context $w_{i-n+1} \dots w_{i-1}$, $T(w_{i-n+1} \dots w_{i-1})$ is the number of n-gram types sharing the preceding word sequence $w_{i-n+1} \dots w_{i-1}$, and $N(w_{i-n+1} \dots w_{i-1})$ is the total number of n-gram tokens that start with the context $w_{i-n+1} \dots w_{i-1}$. If $Z(w_{i-n+1} \dots w_{i-1})$ is the total number of zero-count n-grams starting with the history $w_{i-n+1} \dots w_{i-1}$, then the probability of any zero-count n-gram can be calculated as

$$P(w_i \mid w_{i-n+1} \dots w_{i-1}) = \frac{T(w_{i-n+1} \dots w_{i-1})}{Z(w_{i-n+1} \dots w_{i-1}) \left[ N(w_{i-n+1} \dots w_{i-1}) + T(w_{i-n+1} \dots w_{i-1}) \right]}, \quad C(w_{i-n+1} \dots w_i) = 0 \quad (4)$$

Since the total probability mass must equal 1, the leftover probability mass for all non-zero-count n-grams is

$$1 - \delta(w_{i-n+1} \dots w_{i-1}) = \frac{N(w_{i-n+1} \dots w_{i-1})}{N(w_{i-n+1} \dots w_{i-1}) + T(w_{i-n+1} \dots w_{i-1})} \quad (5)$$

Now the probability of any n-gram with a non-zero count can be computed as

$$P(w_i \mid w_{i-n+1} \dots w_{i-1}) = \frac{C(w_{i-n+1} \dots w_i)}{N(w_{i-n+1} \dots w_{i-1}) + T(w_{i-n+1} \dots w_{i-1})}, \quad C(w_{i-n+1} \dots w_i) > 0 \quad (6)$$
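The following minimal Python sketch (ours, not the paper's code) implements equations (3)-(6) for bigrams. The closed vocabulary used to count the number of unseen continuations Z, and the uniform fallback for a never-seen history, are our assumptions.

    from collections import Counter, defaultdict

    def witten_bell_bigram(bigrams, vocab):
        # bigrams: Counter of (history, word) pairs; vocab: set of all words.
        followers = defaultdict(Counter)
        for (h, w), c in bigrams.items():
            followers[h][w] += c

        def prob(h, w):
            T = len(followers[h])            # types seen after history h
            N = sum(followers[h].values())   # tokens seen after history h
            if T == 0:                       # history never seen: uniform fallback
                return 1.0 / len(vocab)
            Z = len(vocab) - T               # continuations never seen after h
            if w in followers[h]:
                return followers[h][w] / (N + T)   # eq. (6), seen bigram
            return T / (Z * (N + T))               # eq. (4), unseen bigram
        return prob

For each history, the seen bigrams together receive probability N/(N+T), per equation (5), while the unseen bigrams share the discounted mass T/(N+T) of equation (3), so the distribution sums to one and no bigram is assigned zero probability.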
3.5 Backoff N-gram Language Model
Introduced by Katz in 1987, the Backoff (BO) language model [14] for n-grams is a non-linear method that builds an n-gram language model on top of an (n-1)-gram model. The BO model works on the principle that if a higher-order n-gram has a non-zero count, only the higher-order counts are used to calculate its probability; if the higher-order n-gram has a zero count, the model backs off to the lower-order, i.e., (n-1)-gram, model to calculate the probability. The general form of the recursive BO model is

$$P(w_i \mid w_{i-n+1} \dots w_{i-1}) = \begin{cases} P^{*}(w_i \mid w_{i-n+1} \dots w_{i-1}), & \text{if } C(w_{i-n+1} \dots w_i) > 0 \\ \alpha(w_{i-n+1} \dots w_{i-1})\, P_{BO}(w_i \mid w_{i-n+2} \dots w_{i-1}), & \text{otherwise} \end{cases} \quad (7)$$

Because the algorithm backs off to the lower-order model when the count of an n-gram is zero, extra probability mass gets added into the equation, making the total probability of the n-gram distribution greater than 1, which is undesirable. Therefore, in BO language models some probability mass must be discounted from the higher-order models and passed to the lower-order models. In (7), $P^{*}(w_i \mid w_{i-n+1} \dots w_{i-1})$ is the discounted probability and $\alpha(w_{i-n+1} \dots w_{i-1})$ is the backoff weight that denotes the amount of probability mass passed down to the (n-1)-gram model. The discounted probabilities are calculated using Good-Turing [15] estimates as

$$P^{*}(w_i \mid w_{i-n+1} \dots w_{i-1}) = \frac{C^{*}(w_{i-n+1} \dots w_i)}{\sum_{w} C(w_{i-n+1} \dots w_{i-1}\, w)} \quad (8)$$

where the discounted count $C^{*}$ is calculated as

$$C^{*} = (C+1)\, \frac{N_{C+1}}{N_C} \quad (9)$$

In (9), $N_C$ is the number of n-grams with count $C$. The backoff weight $\alpha(w_{i-n+1} \dots w_{i-1})$ is computed as

$$\alpha(w_{i-n+1} \dots w_{i-1}) = \frac{1 - \sum_{w_i :\, C(w_{i-n+1} \dots w_i) > 0} P^{*}(w_i \mid w_{i-n+1} \dots w_{i-1})}{1 - \sum_{w_i :\, C(w_{i-n+1} \dots w_i) > 0} P^{*}(w_i \mid w_{i-n+2} \dots w_{i-1})} \quad (10)$$

4. Proposed Method
This work develops an n-gram-based statistical language model (LM) combining Witten-Bell (WB) discounting with the Backoff (BO) strategy to detect the syntactic and semantic validity of any Bangla sentence. The proposed LM is named Witten-Bell Backoff (WBB). The general workflow of the proposed system is depicted in Fig. 1. The proposed method and its related algorithms are discussed in detail in the following subsections.

Fig 1. Work Flow Diagram for the Proposed Method

4.1 Proposed Witten-Bell Backoff Language Model
Witten-Bell Backoff (WBB) is an LM that incorporates Witten-Bell (WB) discounting into the Backoff (BO) LM. The general form of the WBB LM for computing the probability of an n-gram is

$$P_{WBB}(w_i \mid w_{i-n+1} \dots w_{i-1}) = \begin{cases} \left(1 - \delta(w_{i-n+1} \dots w_{i-1})\right) P(w_i \mid w_{i-n+1} \dots w_{i-1}), & \text{if } C(w_{i-n+1} \dots w_i) > 0 \\ \delta(w_{i-n+1} \dots w_{i-1})\, P_{WBB}(w_i \mid w_{i-n+2} \dots w_{i-1}), & \text{otherwise} \end{cases} \quad (11)$$

where $P(w_i \mid w_{i-n+1} \dots w_{i-1})$ is the maximum likelihood estimate (MLE) probability of the n-gram defined in (1). The backoff weight for the lower-order models in WBB is the discounted probability mass $\delta(w_{i-n+1} \dots w_{i-1})$ defined in (3), and the leftover probability mass $1 - \delta(w_{i-n+1} \dots w_{i-1})$ after discounting, defined in (5), is used to rescale the probabilities of the higher-order n-grams.
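As a minimal sketch of the WBB recursion in equation (11) for n = 2 (our reading, not the authors' code), the bigram model below backs off to a unigram MLE. As in equation (11) itself, the discounted mass delta(h) multiplies the full lower-order distribution rather than one renormalized over the unseen words only.

    from collections import Counter, defaultdict

    def wbb_bigram_model(unigrams, bigrams):
        # unigrams: Counter of words; bigrams: Counter of (history, word) pairs.
        followers = defaultdict(Counter)
        for (h, w), c in bigrams.items():
            followers[h][w] += c
        total = sum(unigrams.values())

        def p_uni(w):
            return unigrams[w] / total       # lower-order (unigram) MLE

        def prob(h, w):
            T = len(followers[h])
            N = sum(followers[h].values())
            if T == 0:                       # history never seen: pure backoff
                return p_uni(w)
            delta = T / (N + T)              # eq. (3): mass routed downward
            if w in followers[h]:
                return (1.0 - delta) * followers[h][w] / N   # eq. (11), seen case
            return delta * p_uni(w)                          # eq. (11), unseen case
        return prob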
4.2 Training the LMs
To train the language models, a corpus of 10 million word tokens collected online, with topics ranging over politics, literature, science, education, sports, music, and other newswire, was used. The steps for training an LM are listed in Algorithm 1.

ALGORITHM 1. Algorithm for Training an LM
1. Extract the sentences from the corpus.
2. Compute and store the n-gram frequencies in backup storage for n = 1 to 4.
3. Compute the probabilities and backoff weights (if any) for all n-grams counted in step 2 using the appropriate LM, and store them in ARPA format.

In this work, all three models (WB, BO, and WBB) were trained for evaluation purposes.

4.3 Testing the LMs
To test whether a sentence is valid or invalid, the counts of all its n-grams are first calculated. These frequencies are then used to calculate the probabilities of the n-grams using the respective LM method and the training data, and the sentence probability score is calculated using (2). If the sentence score is higher than some threshold, the sentence is regarded as valid; otherwise it is regarded as invalid. The threshold calculation is discussed in the next section. The detailed procedure for testing a list of sentences for validity is presented in Algorithm 2.

ALGORITHM 2. Algorithm for Testing Sentences
1. Extract the sentences to be tested from the test file.
2. For each sentence S do:
3. Compute N, the number of n-grams in S.
4. Get the probabilities of the n-grams using equations (4) & (6), (7), or (11) for the respective LM.
5. Set score = 1.
6. For i = 1 to N do:
7. Get p = the probability of the i-th n-gram.
8. score = score * p.
9. End for.
10. If score > T:
11. Predict S as valid.
12. Else:
13. Predict S as invalid.
14. End for.
15. Store the prediction results.

4.4 Threshold Calculation
Language-modeling approaches to grammatical-correctness detection are typically based on a probability score produced by a language model (LM) learned from a large corpus of correct sentences. A valid sentence will usually have a higher probability score than an invalid one. Under this simple assumption, an initial threshold Tmin is defined as the minimum probability score over all valid sentences when testing the LM on a held-out data set. To reduce the generalization error and achieve better performance on the test set, 5-fold cross-validation is used in this work. The threshold selection procedure is depicted in Algorithm 3.

ALGORITHM 3. Algorithm for Threshold Selection
1. Given the training corpus D, divide it into 5 sets Dall = {D1, D2, D3, D4, D5} of size N/5 each, where N is the size of the corpus.
2. For i = 1 to 5 do:
3. Divide Dall into two subsets Dheldout = Di and Dtrain = Dall - Di.
4. M = train the LM on Dtrain.
5. Scores = test M on Dheldout.
6. Tmin = the minimum score in Scores.
7. Si = Tmin.
8. End for.
9. T = AVG(S). // average over the 5-fold cross-validation
10. Return T. // T is the final threshold selected
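The following Python sketch mirrors Algorithms 2 and 3. Here train_lm and score_sentence are assumed interfaces standing in for the LM training and sentence-scoring steps described above, and the corpus is assumed to hold at least k sentences.

    def select_threshold(sentences, train_lm, score_sentence, k=5):
        # Algorithm 3: k-fold threshold selection over (all-valid) sentences.
        fold = len(sentences) // k
        minima = []
        for i in range(k):
            heldout = sentences[i * fold:(i + 1) * fold]
            train = sentences[:i * fold] + sentences[(i + 1) * fold:]
            lm = train_lm(train)                               # step 4
            minima.append(min(score_sentence(lm, s) for s in heldout))
        return sum(minima) / k                                 # step 9: T = AVG(S)

    def is_valid(lm, sentence, T, score_sentence):
        # Algorithm 2, step 10: predict valid iff the score exceeds T.
        return score_sentence(lm, sentence) > T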
5. Experimental Results
This work implemented three different n-gram-based statistical language models (LMs), namely the Witten-Bell (WB) LM, the Backoff (BO) LM, and the proposed Witten-Bell Backoff (WBB) LM, using the same setup and methodology as explained in Section 4. In order to avoid model overfitting, the training corpus was divided into two parts: a training set comprising 80% of the data and a held-out set with the remaining 20%. The trained LMs were tested on the held-out set to find the best threshold for separating valid and invalid sentences, as explained in Section 4.4. To test the ability of the different LMs to detect the grammatical validity of Bangla sentences, a set of 10000 correct sentences was collected, distinct from the training data. Another 10000 ill-formed (invalid) sentences were auto-generated by applying insertion, deletion, transposition, and substitution operations to the valid sentences. By inserting a word from a word list $W = \{w_1, w_2, \dots, w_m\}$ at each of the $(n+1)$ positions of a sentence with $n$ words, $m \times (n+1)$ sentences can be generated. Removing one word at a time from a sentence with $n$ words yields $n$ sentences, each with $(n-1)$ words. Exchanging two consecutive words, allowing only one exchange at a time, generates $(n-1)$ sentences. Substituting each word once with each of its $l$ most likely cohorts produces $(n \times l)$ sentences from a sentence with $n$ words. Thus, using the insertion, deletion, substitution, and transposition operations, approximately $[m \times (n+1) + n + (n-1) + (n \times l)] \times r$ invalid sentences can be generated from a set of $r$ valid sentences. This method produces a huge number of sentences; the 10000 invalid sentences used here were randomly selected from the auto-generated sentences, filtered by lower n-gram scores.
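A minimal sketch of the four corruption operations (ours; the insertion word list W and the per-word cohort map are illustrative assumptions):

    def corrupt(sentence, word_list, cohorts, l=2):
        # Generate ill-formed variants of a valid sentence via insertion,
        # deletion, transposition, and substitution, matching the counts
        # m*(n+1), n, (n-1), and up to n*l derived above.
        words = sentence.split()
        n = len(words)
        variants = []
        for i in range(n + 1):                     # insertion
            for w in word_list:
                variants.append(" ".join(words[:i] + [w] + words[i:]))
        for i in range(n):                         # deletion
            variants.append(" ".join(words[:i] + words[i + 1:]))
        for i in range(n - 1):                     # transposition
            swapped = list(words)
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            variants.append(" ".join(swapped))
        for i in range(n):                         # substitution with l cohorts
            for c in cohorts.get(words[i], [])[:l]:
                variants.append(" ".join(words[:i] + [c] + words[i + 1:]))
        return variants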
The comparative performance of the LM methods was evaluated by precision (PRC) and recall (REC), which are calculated in terms of True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN) [16]. Precision and recall for positive (correct) sentences are defined as

$$PRC_{pos} = \frac{TP}{TP + FP} \quad (12) \qquad REC_{pos} = \frac{TP}{TP + FN} \quad (13)$$

and precision and recall for negative (incorrect) sentences are defined analogously as

$$PRC_{neg} = \frac{TN}{TN + FN} \quad (14) \qquad REC_{neg} = \frac{TN}{TN + FP} \quad (15)$$

Since FPs and FNs are quite dangerous for grammar verification systems, PRC and REC were selected for performance evaluation. Table 1 shows the comparative performance of the LM systems for all n-gram orders, for both valid and invalid test sentences. As can be seen, the performance of each method improves with the order of the n-gram; each method performs best with its 4-gram model.

Table 1: Comparative performance analysis of the different LM systems (columns give the 2-gram, 3-gram, and 4-gram models)

Results attained with valid sentences:
Method        | PRCpos 2-gram | 3-gram | 4-gram | RECpos 2-gram | 3-gram | 4-gram
WB (existing) | 59%           | 67%    | 78%    | 76%           | 78.5%  | 81%
BO            | 56%           | 68%    | 76%    | 76%           | 80%    | 81.5%
WBB           | 62%           | 71%    | 80%    | 78%           | 82%    | 84%

Results attained with invalid sentences:
Method        | PRCneg 2-gram | 3-gram | 4-gram | RECneg 2-gram | 3-gram | 4-gram
WB (existing) | 72%           | 80%    | 81%    | 32%           | 67%    | 76%
BO            | 70%           | 79%    | 81.5%  | 31%           | 66%    | 77%
WBB           | 74%           | 81%    | 83%    | 32%           | 68%    | 78.5%

Average precision & recall for all methods:
Method        | PRCavg 2-gram | 3-gram | 4-gram | RECavg 2-gram | 3-gram | 4-gram
WB (existing) | 66%           | 74%    | 80%    | 54%           | 73%    | 79%
BO            | 63%           | 74%    | 79%    | 54%           | 73%    | 79%
WBB           | 68%           | 76%    | 82%    | 55%           | 75%    | 81%

As can be noticed from Table 1, the precision values for the grammatical data are quite low compared to the recall values for all models, whereas the recall values for the ungrammatical sentences are quite low compared to the precision values for all LMs. This is because a high number of FPs was found in the experiments. Two factors may have contributed to the high FP count. First, due to the unavailability of a standard real-error corpus, the ungrammatical sentences were generated artificially, and some artificially generated error sentences may have remained grammatically correct; these were detected as correct, increasing the FP rate. Table 2 shows some artificially generated "invalid" sentences that are actually grammatically valid, which caused the classifiers to misclassify them as FPs. Second, selecting the lowest probability score among all correct sentences as the threshold in the 5-fold cross-validation process sets a hard boundary for positive (grammatical) sentences, reducing the number of FNs but also increasing the chance of FPs. The trade-off between precision and recall is depicted in Fig. 2 and Fig. 3 for valid and invalid sentences, respectively. For all LMs, PRC increases as REC decreases; the method that finds the best trade-off between them is the most desirable for application. Following the results in Table 1 and Figs. 2 & 3, it is clear that the proposed WBB method outperforms both the WB (existing) and BO methods in terms of the precision-recall trade-off, achieving the highest PRC of 80% and 83% for the valid and invalid data sets, respectively, with its 4-gram model. It also attained the highest REC values for both correct and incorrect sentences, at 84% and 78.5%, respectively. In terms of average PRC and REC over all test sentences, the proposed method achieved the highest PRC of 82% and REC of 81%, outperforming the other methods. Tables 3 & 4 present some examples of correctly predicted valid and invalid sentences, respectively.

Fig 2. Precision and recall trade-off on the valid sentence set for all LMs.
Fig 3. Precision and recall trade-off on the invalid sentence set for all LMs.
Table 2: Examples of valid sentences that were generated as invalid sentences and misclassified as FPs.
Table 3: Examples of valid sentences correctly predicted by the proposed method.
Table 4: Examples of invalid sentences correctly predicted by the proposed method.

6. Conclusions
In this work, a statistical Bangla sentence-validity checking system has been developed that outperforms the existing systems for sentence grammaticality verification. To our knowledge, this was the first attempt to train and test such a system on a large corpus of 10 million words
for the purpose of grammar checking, which provided better clarity and generalization of the performance measures. We expect that our attempt will encourage other researchers to work on Bangla grammar verification, which needs further attention, as development in this research area is not yet up to the mark. In the future, we will try to incorporate linguistic information into our statistical system for better performance.

References
[1] K. Jensen, G.E. Heidorn, S.D. Richardson, Natural Language Processing: The PLNLP Approach, 1993.
[2] D. Naber, A Rule-Based Style and Grammar Checker, Diploma Thesis, Computer Science, University of Bielefeld, 2003.
[3] C.D. Manning, P. Raghavan, H. Schütze, An Introduction to Information Retrieval, Cambridge University Press, 2009.
[4] TopTenReviews, "Online Grammar Check Reviews", http://www.toptenreviews.com/services/education/best-online-grammar-checker/, Access Date: 28 October 2016.
[5] Y. Wu, "The Impact of Technology on Language Learning," Future Information Technology, Lecture Notes in Electrical Engineering, vol. 309, pp. 727-731, 2014.
[6] J. Lane, "The 10 Most Spoken Languages in the World," https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world, Access Date: 24 October 2016.
[7] Wikipedia, "Bengali language", https://en.wikipedia.org/wiki/Bengali_language, Access Date: 24 October 2016.
[8] D. Jurafsky, J.H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition, Prentice Hall, Englewood Cliffs, New Jersey, 1999.
[9] C. Manning and H. Schütze, Foundations of Statistical Natural Language Processing, MIT Press, Cambridge, MA, 1999.
[10] K.M.A. Hasan, A. Mahmud, A. Mondal and A. Saha, "Recognizing Bangla Grammar Using Predictive Parser," International Journal of Computer Science & Information Technology (IJCSIT), vol. 3, pp. 61-73, December 2011.
[11] M.J. Alam, N. UzZaman, M. Khan, "N-gram based Statistical Grammar Checker for Bangla and English," Ninth International Conference on Computer and Information Technology (ICCIT), December 2006.
[12] N.H. Khan, M.F. Khan, M.M. Islam, M.H. Rahman and B. Sarker, "Verification of Bangla Sentence Structure using N-Gram," Global Journal of Computer Science and Technology: A Hardware & Computation, vol. 14, 2014.
[13] Wikipedia, "n-gram", https://en.wikipedia.org/wiki/N-gram, Access Date: 30 October 2016.
[14] S.M. Katz, "Estimation of probabilities from sparse data for the language model component of a speech recogniser," IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3), 400-401, 1987.
[15] I.J. Good, "The population frequencies of species and the estimation of population parameters," Biometrika, 40(3-4): 237-264, 1953.
[16] R.B. Yates, B.R. Neto, Modern Information Retrieval, ACM Press, Addison-Wesley, New York, NY, 1999.

Md. Riazur Rahman obtained his B.Sc. degree in Computer Science from Daffodil International University, Dhaka, Bangladesh. He is now working as a Lecturer at the Department of Computer Science and Engineering in Daffodil International University. He is very keen on research and has a number of publications in international and national journals and conference proceedings. His research interests include Natural Language Processing, Text Mining, Information Retrieval, Artificial Intelligence, Pattern Recognition, and Image Processing.

Md. Tarek Habib is pursuing his Ph.D. degree at the Department of Computer Science and Engineering in Jahangirnagar University. He obtained his
M.S. degree in Computer Science and Engineering (Major in Intelligent Systems Engineering) and B.Sc. degree in Computer Science from North South University in 2009 and BRAC University in 2006, respectively. He is now an Assistant Professor at the Department of Computer Science and Engineering in Daffodil International University. He is very fond of research and has had a number of publications in international and national journals and conference proceedings. His research interests are in Artificial Intelligence, especially Artificial Neural Networks, Pattern Recognition, Computer Vision, and Natural Language Processing.

Md. Sadekur Rahman obtained his B.Sc. and M.Sc. degrees in Applied Mathematics & Informatics from Peoples' Friendship University of Russia. He is now working as a Senior Lecturer at the Department of Computer Science and Engineering in Daffodil International University. He has a number of publications in international and national journals and conference proceedings. His research interests include Data Mining, Artificial Intelligence, Pattern Recognition, and Natural Language Processing.

Shaon Bhatta Shuvo obtained his M.Sc. degree in Computer Science from South Asian University, New Delhi, India in 2015. He obtained his B.Sc. (Engineering) degree in Computer Science & Telecommunication Engineering from Noakhali Science & Technology University, Bangladesh. He is currently working as a Lecturer at the Department of Computer Science & Engineering in Daffodil International University, Dhaka, Bangladesh. His research interests include Big Data, Artificial Intelligence, especially Artificial Neural Networks, Natural Language Processing, and Computer Vision.

Dr. Mohammad Shorif Uddin received his PhD in Information Science from Kyoto Institute of Technology, Japan, Master of Education in Technology Education from Shiga University, Japan, Bachelor of Science in Electrical and Electronic Engineering from Bangladesh University of Engineering and Technology (BUET), and also an MBA from IBA, Jahangirnagar University. He currently serves as Professor and Chairman in the Department of Computer Science and Engineering, Jahangirnagar University, Dhaka. His research is motivated by applications in the fields of imaging informatics, computer vision, and image velocimetry. He has published more than 70 papers in peer-reviewed international journals and conference proceedings and has delivered keynote speeches at international conferences at home and abroad. He holds two patents for his scientific inventions and is the co-author of three books. He is a Fellow of Bangladesh Computer Society and a senior member of IEEE and IACSIT.
A Technique For Perceiving Abusive Bangla Comments

Md Gulzar Hussain and Tamim Al Mahmud
Department of Computer Science & Engineering, Green University of Bangladesh, Dhaka, Bangladesh
E-mail: gulzar@cse.green.edu.bd, tamim@cse.green.edu.bd

GUB Journal of Science and Engineering, Volume 04, Issue 01, December 2017. DOI: 10.5281/zenodo.3544583. This paper was received on 22 April 2019 and accepted on 15 July 2019. This work was supported financially by GUB.

Abstract—Most of the research on abusive comment or text detection has been conducted in English, some of it aimed at detecting humiliating or insulting text, but few works address the Bangla language. Detecting abusive Bangla text will help prevent cybercrimes such as online blackmailing, harassment, and cyberbullying, which are becoming a major concern in Bangladesh. In this paper, our goal is to detect abusive Bangla comments gathered from different social sites where people share their views, feelings, and opinions. To classify whether a Bangla comment is abusive or not, we propose a root-level algorithm along with uni-gram string features to achieve a better result. We collected comments from the well-known social media platform Facebook for our work.

Index Terms—Abusive, Bangla, Comment, Natural Language Processing (NLP), Unigram.

I. INTRODUCTION
Natural languages are the languages of humans, unlike computer programming languages such as C and C++. Bangla, French, English, and Chinese, for example, are natural languages. In computer science, making a computer understand natural languages may be the most challenging issue. Natural language processing (NLP) is a pathway for computers to understand, explore, and derive meaning from human language in a smart and useful way. Utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, relationship extraction, named entity recognition, sentiment analysis, topic segmentation, and speech recognition. A large portion of the research being done on natural language processing revolves around search, particularly enterprise search. This includes enabling users to query data sets in the form of a question that they might pose to another person: the machine translates the important elements of the human-language sentence, such as those that might correspond to particular features in a data set, and returns an answer. NLP can also be used to interpret and analyze free text. There is an enormous amount of information, such as the medical records of patients, stored in free-text files. Before deep learning NLP models, this information was not accessible for computer-assisted analysis and could not be systematically analyzed; NLP now enables analysts to find relevant information in massive troves of free text. Another primary use case for
NLP is sentiment analysis. Data scientists can evaluate comments on social media using sentiment analysis to see how their business's brand is performing, for example, or review notes from customer service teams to identify areas where people want the company to perform better. Google and other search engines base their machine translation technology on NLP deep learning models, which allows algorithms to read text on a web page, interpret its meaning, and translate it into a different language. Research work in this field is increasing day by day. Natural language processing contributes to almost every sector, such as customer service, healthcare, and automotive. According to a report by Tractica (2017), the market for NLP software, hardware, and various kinds of services would be worth around $22.3 billion by 2025; it forecasts that this artificial intelligence software service solution will rise from $136 million in 2016 to a market value of $5.4 billion by 2025 [1].
In 2010, Bangla was spoken by 205 million native speakers and was the seventh most frequently spoken native language in the world [2], covering 3.05 percent of the world's total population; in 2017 there were around 250-300 million total speakers worldwide [3]. There are 81.7 million Internet users in Bangladesh, 30.0 million of whom are active social media users and 28.0 million of whom used mobile phones to access social media in January 2018 [4]. Facebook is the world's most popular social networking site with 2.23 billion monthly active users, YouTube is second with 1.9 billion, and Instagram is third with 1 billion monthly active users as of August 2018 [5].
A lot of research has been done on abusive text detection for the English language using social networks, but only a limited amount has been done for Bangla. Although there is some work on sentiment analysis of Bangla, very little recent research has addressed detecting abusive Bangla text on social networking sites, so there is plenty of research scope in this field. In this article, we propose a new technique to detect abusive Bangla text. We propose two algorithms based on machine learning ideas: one to train our system using the training dataset and another to classify the test dataset. We simply classify a Bangla text as abusive or not. Our proposed system gives a satisfactory accuracy of 71.7%.

II. LITERATURE REVIEW
We briefly discuss the methodologies of different research carried out on the English and Bangla languages. A large number of approaches have been developed to date for classifying sentiments or polarities in English texts. These methods can be classified into two categories: (1) machine learning or statistical approaches and (2) unsupervised lexicon-based approaches. Detecting abusive text on social sites is challenging work due to the changing nature of, and variation in, the language used. Researchers have developed many approaches to detect abusive or offensive text, and working with the Bangla language makes abusive text detection even more difficult.

A. Work In English Language
In paper [6], the authors developed a machine-learning-based method for detecting hate speech in online user comments and categorized the sentences into Hate
Speech, Derogatory, and Profanity categories. They used the Vowpal Wabbit regression model to measure different aspects of the user comments, with n-gram, linguistic, and syntactic features. Using a multi-class classifier, [7] categorized tweets into hate speech, offensive, and neither of these two, distinguishing hate speech from merely offensive language. The authors of [8] proposed a Lexical Syntactic Feature (LSF) architecture for detecting offensive content and identifying potentially offensive users in social media. In [9], the authors proposed statistical topic modeling to detect profanity-related offensive content on Twitter.
[10] is one of the first papers to apply supervised machine learning methods to sentiment classification. The authors perform the classification on movie reviews and show that MaxEnt and SVM outperform the Naive Bayes (NB) classifier. One of the first papers on the automatic classification of sentiments in Twitter messages using machine learning techniques is by [11]. Through distant supervision, the authors build a training corpus of Twitter messages with positive and negative emoticons and train three different machine learning techniques on this corpus (SVM, Naive Bayes, and MaxEnt), with features such as n-grams (unigrams and bigrams) and part-of-speech (POS) tags. They obtain a good accuracy of above 80%. [12] follows the same procedure as [11] to develop the training corpus of Twitter messages, but introduces a third class of objective tweets, forming a dataset of three classes: positive sentiments, negative sentiments, and a set of objective texts (no sentiments). They use multinomial NB, SVM, and Conditional Random Fields (CRF) as classifiers, with n-grams and POS tags as features. The authors of [13] use 50 hashtags and 15 emoticons as sentiment labels to train a supervised sentiment classifier using the K-Nearest Neighbors (KNN) algorithm. In [14], the authors build a two-step sentiment detection framework that first distinguishes subjective tweets from non-subjective tweets and then further classifies the subjective tweets into negative and positive polarities. The authors find that using meta-features (POS tags) and tweet-syntax features (emoticons, punctuation, links, retweets, hashtags, and uppercase) to train the SVM classifiers enhances the sentiment classification accuracy by 2.2% compared to SVMs trained on unigrams only. Although supervised machine learning methods have been widely employed and proven effective in sentiment classification, they normally depend on a large amount of labeled data, which makes for time-consuming and labor-intensive work.
Unsupervised lexicon-based methods rely on manually or semi-automatically constructed lexical resources, such as lexicons, to identify the polarity of texts. A lexicon is a collection of strongly sentiment-bearing words or phrases labeled with their prior polarity, the context-independent polarity most commonly associated with the lexicon entries. Several such lexicons for English are available online. One of the initial works to apply unsupervised techniques to the sentiment classification problem is [15]. In that paper, the average semantic orientation of the sentences containing adjectives or adverbs classifies a document as positive or negative. The semantic orientation of a phrase is calculated as its Pointwise Mutual Information (PMI) with a positive seed word, "excellent", minus its PMI with a negative seed word, "poor". This approach achieves 84% accuracy on automotive reviews and 66% accuracy on film reviews.
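To illustrate the SO-PMI idea of [15] (a sketch under assumed co-occurrence counts, not the original system), the orientation of a phrase can be computed from hit counts as follows; all counts and the corpus size are hypothetical inputs.

    import math

    def so_pmi(phrase_n, pos_n, neg_n, joint_pos, joint_neg, total):
        # SO(p) = PMI(p, "excellent") - PMI(p, "poor"), with
        # PMI(x, y) = log2( P(x, y) / (P(x) P(y)) ) estimated from counts.
        def pmi(joint, a, b):
            return math.log2((joint / total) / ((a / total) * (b / total)))
        return pmi(joint_pos, phrase_n, pos_n) - pmi(joint_neg, phrase_n, neg_n)

    # A phrase that co-occurs proportionally more often with "excellent"
    # than with "poor" receives a positive orientation:
    print(so_pmi(1000, 50000, 50000, 300, 30, 10**7) > 0)   # True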
In [16], the authors develop a sentiment lexicon manually, consisting of negative and positive sentiment-bearing words annotated with their POS tags. This sentiment lexicon, along with a set of rules, is used to first classify the tweets as subjective
or objective and then further classify the subjective tweets as positive, negative, or neutral. They use a corpus of political tweets collected over the UK pre-election period in 2010. For the task of correctly identifying that a document contains a political sentiment and then correctly identifying its polarity, they obtain 62% precision and 37% recall. However, methods based on lexical resources often suffer from low recall, because they depend on the presence of the lexicon's words in the message to determine the orientation of opinion. Due to the varied and changing nature of the language used on Twitter, this approach is not suitable for our work. Moreover, as such lexical resources are not available for many other languages spoken on social media, such as Bangla, this approach often becomes unsuitable for resource-scarce languages.

B. Work In Bangla Language
The authors of [17] calculated the total positivity and negativity of a sentence or document with regard to its total meaning. Tf-idf (term frequency - inverse document frequency) was used to find a better solution in this process of obtaining information from a document. They wanted to determine patterns by which positive and negative sentences could be categorized. In paper [18], the authors tried to extract the negative or positive opinion or feeling of a full text from Bangla micro-blog posts. They used Support Vector Machines (SVM) and Maximum Entropy (MaxEnt) for classification and developed the training corpus using a semi-supervised bootstrapping approach. By combining word2vec word co-occurrence scores with the sentiment polarity scores of the words, the authors of [19] tried to classify Bengali comments by sentiment and achieved 75.5% accuracy.
The authors of [20] proposed a root-level algorithm to determine the abusiveness of a Bangla comment. They worked with 300 Bangla comments but did not report any accuracy on those data. Another paper that tried to detect abusive Bangla text is [21], which used Random Forest (RF), Multinomial Naive Bayes (MNB), and Support Vector Machines (SVM) with Radial Basis Function (RBF), polynomial, linear, and sigmoid kernels, and compared uni-gram, bi-gram, and tri-gram based Count-Vectorizer and Tf-idf-Vectorizer features for detecting Bengali abusive text. They found that the SVM linear kernel performs best with the Tf-idf-Vectorizer trigram features.

III. METHODOLOGY
Abusive text classification techniques may be categorized into two types: binary classification and multi-class classification. In binary classification, we can only determine whether a comment is abusive or not, whereas in multi-class classification a comment can be categorized as hate speech, anger toward someone or some group of people, criticism, insult, etc. Binary classification is much easier than multi-class classification for the English language, and the task becomes even harder for Bangla. We therefore decided to work on binary classification of Bangla text. For the experiment, we developed a manual algorithm to detect abusive Bangla text, implemented our proposed algorithms, and automated the procedure to obtain the results. The system architecture, the basic structure and general vision of the system, is shown in Figure 1, which outlines the entire process.

A. Dataset Collection
For the experiment, we collected comments on posts from Facebook pages, Prothom-Alo news, and YouTube channels, e.g., Prothom Alo [22], Mashrafe
Bin Mortaza [23], Shakib Al Hasan [24], SalmoN TheBrownFish [25], Naila Nayem [26], and the Prothom Alo news portal [27]. Only public comments were collected, without the commenters' information, to protect privacy. In total, we collected 300 comments, as the whole experiment was carried out manually, and we ran the experiment on three different sets of 100, 200, and 300 comments. We used 80% of the comments to train the term weighting using the proposed training algorithm, and the remaining 20% of the comments were tested using the proposed classifier algorithm.

Fig. 1: System Architecture (collected raw comments -> preprocessing: removal of usernames, emoticons, special characters, hashtags, etc. -> survey labeling each processed comment as abusive or not abusive -> labeled training data used to assign word weights via the proposed training algorithm -> proposed classifier algorithm flags each unlabeled test comment as abusive or not)

The comments we collected contained English text as well; instead of filtering out the English text, we kept it as part of our training and testing sets, since English words can express strong abusiveness and are likely to contribute to the classification of our data set.

B. Survey
A survey was used to gather opinions, beliefs, and feelings about the collected comments in our research. This survey-based labeling of the comments is the part that no other authors had done: they used rule-based classifiers and methods to label the training data, whereas we ran the survey to ensure that each comment was given the right abusiveness polarity. We ran the survey on our university campus, at Green University of Bangladesh, and students of various ages participated. To label the comments, we created a survey form, as shown in Figure 2, and for every comment we took the opinion of at least 50 people to decide whether it was abusive or not. Finally, we show the results for all 300 comments in Figure 3 and labeled every comment based on the result.

Fig. 2: A Sample Survey Form (a list of collected Bangla comments with columns for marking each comment as abusive or not abusive)
Fig. 3: Survey Result for all 300 Comments

C. Preprocessing
Preprocessing is a significant task and a crucial step in text mining and natural language processing (NLP). The information requirement of the user is represented by a query or profile and includes one or more search terms and additional information, such as the weight of the words. The decision to retrieve is therefore taken by
comparing the query terms with the index terms (important words or phrases) that appear in the document itself. The decision may be binary (retrieve/reject), or it may involve an assessment of the degree of relevance the document has to the query. Unfortunately, the words in documents and queries often have many structural variations. Therefore, data preprocessing techniques are applied to the target data set before information is retrieved from the documents, reducing the size of the data set and increasing the efficiency of the IR system. This study applies preprocessing methods such as tokenization, word deletion, and stemming to the text documents.
1) Normalization: Text needs to be standardized before further processing. Normalization generally refers to a series of related tasks that put all text on a level playing field: all text is converted to the same case (upper or lower), punctuation is removed, numbers are converted to their word equivalents, and so on. Normalization places all words on the same footing and allows uniform processing. The raw comments contain special characters (e.g., @, #, -), punctuation, and emoticons. During preprocessing, we manually removed these special characters, white space, numbers, punctuation, and Unicode emoticons. Conjunctions were also removed, as they are unnecessary for abusive text detection.
2) Noise Removal: Bear in mind, once again, that we are not dealing with a linear process whose steps must be applied in one fixed order; noise removal can take place before or after the previously outlined steps, or somewhere between them. In this step we removed user names, hashtags, unwanted signs, etc.

D. Proposed Algorithm
The proposed algorithm has two parts: one for training and one for classifying the comments. The training algorithm is applied to the labeled training data for term weighting, and the classifier algorithm is applied to the unlabeled data to classify each comment as abusive or not.
1) Training Algorithm: Our training algorithm is based on the bag-of-words model [28], in which a text is represented as an unordered collection of words, disregarding grammar and even word order. It counts the frequency of each word and builds a term-weighting table. For training, Algorithm 1 is proposed:

Algorithm 1: Training Algorithm
Step 1: Start.
Step 2: For each comment of the data set, perform steps 3 and 4.
Step 3: Initialize nWc = NumberOfWordsInTheComment.
Step 4:
for each word in the comment do
  if the comment is labeled as abusive then
    if the word is not in the list of weighted words then
      WeightAbusive = 1/nWc
    else
      WeightAbusive = (old) WeightAbusive + 1/nWc
    end
  else
    if the word is not in the list of weighted words then
      WeightNotAbusive = 1/nWc
    else
      WeightNotAbusive = (old) WeightNotAbusive + 1/nWc
    end
  end
end
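A minimal Python rendering of Algorithm 1 (our sketch, not the authors' implementation), assuming the per-word increment 1/nWc shown above, which is consistent with the worked example in Section III-F where each word of a three-word comment receives weight 0.33:

    from collections import defaultdict

    def train_term_weights(labeled_comments):
        # labeled_comments: list of (comment_text, is_abusive) pairs.
        # weights maps word -> [WeightAbusive, WeightNotAbusive].
        weights = defaultdict(lambda: [0.0, 0.0])
        for text, is_abusive in labeled_comments:
            words = text.split()
            if not words:
                continue
            share = 1.0 / len(words)        # 1/nWc per word occurrence
            for w in words:
                weights[w][0 if is_abusive else 1] += share
        return weights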
2) Classifying Algorithm: Our classifying algorithm calculates the sums of the abusiveness and non-abusiveness weights of a text using the term-weighting table. To classify the test comments, Algorithm 2 is proposed:

Algorithm 2: Classifying Algorithm
Step 1: Start.
Step 2: For each comment of the data set, perform steps 3 and 4.
Step 3:
for each word in the comment do
  if the word is not in the term-weight list then
    TotalWeightAbusive = TotalWeightAbusive + 0
    TotalWeightNotAbusive = TotalWeightNotAbusive + 0
  else
    TotalWeightAbusive = TotalWeightAbusive + WeightAbusive
    TotalWeightNotAbusive = TotalWeightNotAbusive + WeightNotAbusive
  end
end
Step 4:
if TotalWeightAbusive > TotalWeightNotAbusive then
  set the label of the comment as abusive, i.e., 1
else
  set the label of the comment as not abusive, i.e., 0
end

E. Feature Extraction
The experiment can be conducted with three types of string features, to find out with which kind of feature our proposed algorithm performs best: uni-grams, bi-grams, and tri-grams [29]. With uni-gram features, the relationship between the words in a sentence is not considered, but this feature reveals which words are the more abusive ones. Bi-gram features consider the relationship between two consecutive words in a sentence, and tri-gram features consider the relationship between three consecutive words.

F. Illustration using test comments
A step-by-step illustration of our proposed algorithm is given below using some sample comments. Consider the three unlabeled Bangla comments of Figure 4.

Fig. 4: Unlabeled sample comments

Treat the first and second comments as training data. Suppose that, after running the survey, the first comment is labeled abusive (1) and the second not abusive (0), as shown in Figure 5.

Fig. 5: Labeled sample comments after the survey

Preprocessing and the training algorithm then produce the term-weighting table of Figure 6: each of the three words of the abusive comment receives WeightAbusive = 0.33, and each of the four words of the not-abusive comment receives WeightNotAbusive = 0.2.

Fig. 6: Term-weighting table after preprocessing and running the training algorithm

Running the classifying algorithm on the third comment gives the table of Figure 7: two of its words carry abusive weight 0.33 each, one word is unseen in the training data, and two carry not-abusive weight 0.2 each, so TotalWeightAbusive = 0.66 and TotalWeightNotAbusive = 0.4.

Fig. 7: After applying the classifying algorithm

Since, for the third comment, TotalWeightAbusive > TotalWeightNotAbusive, the third comment is classified as abusive.
|
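As a sanity check, the sketch from Section D reproduces the numbers of Figures 6 and 7 with abstract tokens standing in for the Bangla words. One assumption is made here: the second training comment has five tokens, which is what its 1/5 = 0.2 weights in Figure 6 imply.

```python
wa, wna = train([
    ("w1 w2 w3", 1),        # comment 01, labeled abusive (3 words -> 1/3)
    ("w4 w5 w6 w7 w8", 0),  # comment 02, labeled not abusive (5 words -> 1/5)
])
print(round(wa["w1"], 2), wna["w4"])       # 0.33 0.2, as in Fig. 6

# Comment 03 shares two words with each training comment plus one unseen word.
print(classify("w1 w2 x w7 w8", wa, wna))  # 1: 0.66 > 0.4, as in Fig. 7
```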
Fig. 8: Number of correct abusive, correct not-abusive, wrong abusive and wrong not-abusive classifications for the three sets of comments

In a classification task, precision, recall, F-measure and accuracy are expressed in terms of four counts: true positives, true negatives, false positives and false negatives.

• True Positive (TP): the number of comments from the test set that the classifier correctly labels as belonging to a specific class or label.
• True Negative (TN): the number of comments from the test set that the classifier correctly labels as not belonging to a specific class or label.
• False Positive (FP): the number of comments from the test set that the classifier incorrectly labels as belonging to a specific class or label.
• False Negative (FN): the number of comments from the test set that the classifier fails to label but that do belong to a specific class or label.

Using these four counts, we now define the evaluation metrics as follows:

• Precision is the number of test comments correctly labeled by the classifier out of all test comments that the classifier assigned to a specific class:

  Precision (P) = TP / (TP + FP)

• Recall is the number of test comments correctly labeled by the classifier out of all test comments that actually belong to a specific class:

  Recall (R) = TP / (TP + FN)

• F-measure is the weighted harmonic mean of precision and recall for a specific class:

  F-measure = 2 · P · R / (P + R)

• Accuracy is the percentage of test comments that the classifier labels correctly:

  Accuracy (A) = (TP + TN) / (TP + TN + FP + FN) × 100%

To calculate precision, recall, F-measure and accuracy, we determined the values of TP, TN, FP and FN manually.

B. Result

We divided our data set into three sets of 100, 200, and 300 comments. Tables II, III and IV give the precision, recall and F-measure for the binary classification task using our proposed algorithm with the uni-gram feature. Tables II and III show that the best F-measure, 0.75, is obtained for both the abusive and the not-abusive label with 300 comments. Table IV shows the averages of Tables II and III.

TABLE II: Precision, recall and F-measure for abusive comments

Serial  Number of Comments  Precision  Recall  F-measure
1       100                 0.625      1.0     0.769
2       200                 0.62       0.8     0.7
3       300                 0.68       0.83    0.75

TABLE III: Precision, recall and F-measure for not-abusive comments

Serial  Number of Comments  Precision  Recall  F-measure
1       100                 1.0        0.286   0.445
2       200                 0.71       0.5     0.59
3       300                 0.78       0.6     0.75

TABLE IV: Average precision, recall and F-measure from Tables II and III

Serial  Number of Comments  Precision  Recall  F-measure
1       100                 0.81       0.64    0.61
2       200                 0.67       0.65    0.65
3       300                 0.73       0.72    0.75

The accuracy of our proposed algorithm for the three sets of comments, together with their average, is given in Table V. The best accuracy, 71.7%, is achieved with 300 comments.
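The four metrics translate directly into code. A minimal sketch, with the four counts supplied by hand, as the authors did when tabulating Tables II through IV:

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Precision, recall, F-measure and accuracy from the four counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100  # percent
    return precision, recall, f_measure, accuracy
```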
TABLE V: Accuracy of the experimental results

Serial  Number of Comments  Accuracy
1       100                 70%
2       200                 65%
3       300                 71.7%
Average accuracy: 68.9%

From the experimental results in Table V, we can say that using more comments is in fact very effective and offers promising performance for the proposed classifier algorithm.

V. CONCLUSION

In this paper, we discuss how we collected the training and test data manually and performed abusive text analysis on Bangla comment data using our proposed classifying algorithm. Although this kind of root-level algorithm is basic by today's standards, we achieved a satisfying maximum accuracy of 71.7% for 300 comments and an average accuracy of 68.9% over the whole test data set. We hope our future plans will give better results than already developed approaches. There are still many opportunities to improve our experimental methodology. Natural language processing is being pursued with various machine learning algorithms and techniques, but work on the Bangla language is not growing as expected due to limited resources and mentoring.

A. Future Works

Since our technique has many limitations, the proposed method can be extended with the following future work:

• We will try to implement the whole idea in a faster and automated way.
• Our algorithm can be integrated with various machine learning algorithms such as Naïve Bayes, Random Forest, and Support Vector Machine to observe whether the results become more accurate than with the previous methods.
• New features can be integrated to get more accurate results.
• The proposed algorithm can be modified to differentiate funny sentences, hate speech, angry sentences, and abusive sentences.
• An application can be developed to detect abusive texts when people browse various social sites using browsers or mobile apps.
• The number of comments in the data set has to be increased to get more accurate results.

REFERENCES

[1] R. Madhavan. (2018) Natural language processing current applications and future possibilities. [Online]. Available: https://www.techemergence.com/nlp-current-applications-and-future-possibilities/
[2] Wikipedia. (2010) List of languages by number of native speakers. [Online]. Available: https://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers
[3] Wikipedia. (2017) Bengali language. [Online]. Available: https://en.wikipedia.org/wiki/Bengali_language
[4] We Are Social and Hootsuite. (2018) 2018 digital yearbook. [Online]. Available: https://digitalreport.wearesocial.com/
[5] P. Kallas. (2018) Top 15 most popular social networking sites and apps [August 2018]. [Online]. Available: https://www.dreamgrow.com/top-15-most-popular-social-networking-sites/
[6] C. Nobata, J. Tetreault, A. Thomas, Y. Mehdad, and Y. Chang, "Abusive language detection in online user content," in Proceedings of the 25th International Conference on World Wide Web, ser. WWW '16. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee, 2016, pp. 145-153. [Online]. Available: https://doi.org/10.1145/2872427.2883062
[7] T. Davidson, D. Warmsley, M. Macy, and I. Weber, "Automated hate speech detection and the problem of offensive language," arXiv preprint arXiv:1703.04009, 2017.
[8] Y. Chen, Y. Zhou, S. Zhu, and H. Xu, "Detecting offensive language in social media to protect adolescent online safety," Sep. 2012, pp. 71-80.
[9] G. Xiang, B. Fan, L. Wang, J. Hong, and C. Rose, "Detecting offensive tweets via topical feature discovery over a large scale twitter corpus," in Proceedings of the 21st ACM International Conference on Information and Knowledge Management. ACM, 2012, pp. 1980-1984.
[10] S. V. Wawre and S. N. Deshmukh, "Sentiment classification using machine learning techniques," International Journal of Science and Research (IJSR), vol. 5, no. 4, pp. 819-821, 2016.
[11] A. Go, R. Bhayani, and L. Huang, "Twitter sentiment classification using distant supervision," CS224N Project Report, Stanford, vol. 1, no. 12, 2009.
[12] A. Pak and P. Paroubek, "Twitter as a corpus for sentiment analysis and opinion mining," in LREC, vol. 10, no. 2010, 2010, pp. 1320-1326.
[13] D. Davidov, O. Tsur, and A. Rappoport, "Enhanced sentiment learning using twitter hashtags and smileys," in Proceedings of the 23rd International Conference on Computational Linguistics: Posters, ser. COLING '10. Stroudsburg, PA, USA: Association for Computational Linguistics, 2010, pp. 241-249. [Online]. Available: http://dl.acm.org/citation.cfm?id=1944566.1944594
[14] L. Barbosa and J. Feng, "Robust sentiment detection on twitter from biased and noisy data," in Proceedings of the 23rd International Conference on Computational Linguistics: Posters, ser. COLING '10. Stroudsburg, PA, USA: Association for Computational Linguistics, 2010, pp. 36-44. [Online]. Available: http://dl.acm.org/citation.cfm?id=1944566.1944571
[15] P. D. Turney, "Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews," in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ser. ACL '02. Stroudsburg, PA, USA: Association for Computational Linguistics, 2002, pp. 417-424. [Online]. Available: https://doi.org/10.3115/1073083.1073153
[16] D. Maynard and A. Funk, "Automatic detection of political opinions in tweets," in Extended Semantic Web Conference. Springer, 2011, pp. 88-99.
[17] M. M. Nabi, "Detecting sentiment from bangla text using machine learning technique and feature analysis," 2016.
[18] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in bangla microblog posts," in 2014 International Conference on Informatics, Electronics & Vision (ICIEV). IEEE, 2014, pp. 1-6.
[19] M. Al-Amin, M. S. Islam, and S. Das Uzzal, "Sentiment analysis of bengali comments with word2vec and sentiment information of words," Apr. 2017.
[20] M. G. Hussain, T. A. Mahmud, and W. Akthar, "An approach to detect abusive bangla text," in 2018 International Conference on Innovation in Engineering and Technology (ICIET), Dec. 2018, pp. 1-5.
[21] S. C. Eshan and M. S. Hasan, "An application of machine learning to detect abusive bengali text," in 2017 20th International Conference of Computer and Information Technology (ICCIT), Dec. 2017, pp. 1-6.
[22] Prothom Alo - Facebook home. [Online]. Available: https://www.facebook.com/DailyProthomAlo/
[23] Mashrafe Bin Mortaza - Facebook home. [Online]. Available: https://www.facebook.com/Official.Mashrafe/
[24] Salman TheBrownFish. [Online]. Available: https://www.youtube.com/user/salmanmuqtadir
[25] Shakib Al Hasan - Facebook home. [Online]. Available: https://www.facebook.com/Shakib.Al.Hasan/
[26] Naila Nayem - Facebook home. [Online]. Available: https://www.facebook.com/artist.nailanayem/
[27] Prothom Alo - online news portal. [Online]. Available: https://www.prothomalo.com/
[28] Wikipedia. (2018) Bag-of-words model. [Online]. Available: https://en.wikipedia.org/wiki/Bag-of-words_model
[29] Wikipedia. (2018) n-gram. [Online]. Available: https://en.wikipedia.org/wiki/N-gram

Md. Gulzar Hussain was born in Nimnagar, Dinajpur City, in 1990. He received the Bachelor of Science degree (2018) in Computer Science and Engineering from the Green University of Bangladesh, Dhaka, Bangladesh. Since 2019, he has been a Lecturer with the Computer Science and Engineering Department, Green University of Bangladesh, Dhaka, Bangladesh. His research interests include Artificial Intelligence, Machine Learning, Natural Language Processing, Text Mining, and Topic Modeling. Mr. Hussain was a recipient of seven Vice-Chancellor awards and one Dean award for excellent trimester results during his bachelor studies.

Tamim Al Mahmud was born in East Ratanpur village, Kazirhat, Barisal, Bangladesh, in 1990. He received his Bachelor degree (2012) in Computer Science and Engineering from Patuakhali Science and Technology University, Patuakhali, Bangladesh, and his Master degree (2016) in Information Technology from the University of Dhaka, Dhaka, Bangladesh. From 2013 to 2016, he was a Lecturer in Computer Science at Dhaka International University. Since 2017, he has been working as an Assistant Professor in the Department of Computer Science and Engineering, Green University of Bangladesh.
His research interests include Machine Learning, Artificial Intelligence, Software Engineering and Human-Computer Interaction. He is a Member of IEEE, the Bangladesh Computer Society and the Asian Business Consortium. He is also a member and team leader of the HCI research group of the Computer Science Department at Green University of Bangladesh.
Implementation of Machine Learning to Detect Hate Speech in Bangla Language

Shovon Ahammed, Mostafizur Rahman, Mahedi Hasan Niloy and S. M. Mazharul Hoque Chowdhury
Department of CSE, Daffodil International University, Dhaka, Bangladesh
E-mail: shovon15-7671@diu.edu.bd, mostafizur15-7764@diu.edu.bd, mahedi15-7763@diu.edu.bd, mazharul2213@diu.edu.bd

Proceedings of SMART-2019, IEEE Conference ID: 46866, 8th International Conference on System Modeling & Advancement in Research Trends, 22nd-23rd November 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India. Copyright © IEEE 2019, ISBN: 978-1-7281-3245-7.

Abstract: Hate speech is a crime in all countries. Hate speech can target women, religions, countries, and cultures. The big problem with hate speech is that it entices evil people and inspires them to spread hatred in society. Bangla is one of the most widely spoken languages in the world, but hate speech detection in the Bangla language is rare. Our purpose is to detect hate speech in the Bangla language. To perform the task we needed Bangla datasets, but no Bangla dataset was available, so we collected data from Facebook. Collecting data from a social site is very hectic: the data contain mixed languages and grammatical mistakes. We therefore formed one team to collect the data and another to process it, and finally we labeled the data as hate speech or not. The team members had sufficient knowledge of hate speech and were neutral towards the data. Our data contain hate speech against women, community, culture, ethnicity, race, sex, and disability. A machine learning approach is ideal for our work. We used the SVM and Naïve Bayes algorithms and obtained a maximum accuracy of 72%.

Keywords: SVM, Machine Learning, Supervised Learning, Naïve Bayes, Hate Speech.

I. Introduction

Bangla is the state language of Bangladesh, and millions of people speak Bangla as their first language. Bangla has come from Sanskrit. Our language has an age-old tradition and culture; we are the only country that gave blood for its language. Our language has value, and we have to respect that and not use it for wrong purposes. It is therefore a matter of great sorrow that there are many cases of hate speech in the Bangla language.

Hate speech is a global problem. People in different parts of the world are now connected to each other through the internet and can share their feelings with the whole world in the blink of an eye. At the same time, people can share hatred in a second, which is a matter of concern. Hatred hurts people mentally and bears a deep impact on their lives. The more opportunity people get to share their thoughts and views on different matters, the more incidents of hate speech take place; and the more hate speech spreads, the more damage it does. These hate speeches are a threat to our unity; they are dividing us.

We have built a dataset of hate speech along with regular speech, collected from Facebook, one of the most popular social networking sites in Bangladesh. Millions of people in Bangladesh use Facebook, and most of them use the Bangla language on it. Internet packages are getting cheaper day by day and the number of internet users is increasing, so people in every part of Bangladesh now use Facebook, giving their analysis and views on different subjects. In this process there are clashes of thinking, liking and disliking, and whenever there is a clash, people use tons of hate speech against each other; people have become very intolerant. For these reasons we chose Facebook for data collection.

In this work, we built a new dataset for finding malice in the Bangla language that contains hate speech in different categories such as religion, community, gender, and race. We used a web scraper to scrape the data and labeled each item in one of two categories: hate speech or not. The contribution of our work is the creation of a new dataset for hate speech detection in the Bangla language and the application of algorithms to detect it.
II. Related Work

Natural Language Processing plays a huge role in detecting hate speech; it is very efficient for this task.

Axel Rodríguez et al. [1] used four steps to detect hate speech in the English language, with data from Facebook. Their first step is a discovery stage in which they identified pages that produce hate speech. Since not every post on those pages contains hate speech, they used a filter to separate hate speech from regular speech. They also performed sentiment analysis using the Valence Aware Dictionary for sentiment reasoning.

Latent Semantic Analysis (LSA) is a very popular natural language processing method. Ilham Maulana Ahmad Niam et al. [2] used the LSA method based on images: their approach was to extract information from images using data mining, gathering the information from Twitter. They obtained an average accuracy of 57.9%. Ricardo Martins et al. [3] used emotional words to classify hate speech and obtained an accuracy of 80.56% with a support vector machine.

The fastText approach is a good method for finding malicious sentences. Nur Indah Pratiwi et al. [4] used the fastText approach to detect hate speech in Instagram comments, using word n-grams and character n-grams.

Arum Sucia Saksesi et al. [5] used recurrent neural networks to detect hate speech. They used the Twitter API to collect the data and performed case folding, tokenizing, cleaning and stemming for text analysis. They combined LSTM with RNN and used softmax regularization on the output of the LSTM hidden layer. They partitioned the data with different ratios at different times; at a learning rate of 0.007 they obtained an accuracy of 93.8%, a precision of 92% and a recall of 93%.

N. D. Gitari et al. [6] used a lexicon in their work and got average results with the lexicon model; their F-score was 70.83. Erryan Sazany et al. [7] detected hate speech using deep learning, and Trisna Febriana and Arif Budiarto [8] detected hate speech in the Indonesian language.

In summary, researchers have applied different methods to classify hate speech; most of them collected data from social sites and used a machine learning-based approach.
III. Methodology

This paper aims to find hate speech in the Bangla language, and we have taken a machine learning approach to the task. The main challenge of our work was building the dataset.

A. Forming the Dataset

Data acquisition and data annotation are the two main parts of our data formation.

1) Data Acquisition: Facebook is one of the biggest social network platforms; it generates tons of data every day and is used by people of all ages and groups, so we decided to take data from it. A scraper helps to get data from websites, and we used a web scraper to get the data from Facebook. Because of this, the scraper collected different types of data: the dataset had good as well as bad comments. Some comments targeted women or religious groups, some targeted race or community, and there were also normal comments in which people gave their opinions on various subjects with a positive mind.

Fig. 1: Dataset visualization

2) Data Annotation: Data annotation was the key part of our work, and we focused on annotating our data correctly. As our work determines whether a sentence is hate speech or not, we made two categories: hate speech and general speech. We then labeled the data with tags: if a comment contained something inappropriate we labeled it as hate speech, and if it was appropriate we labeled it as general speech. We worked in groups to annotate the data, and to be fair with the data the annotation was done in three steps: first the data was labeled by one group, then the authenticity of the labels was checked by another group, and finally the ultimate labeling was done in collaboration between the two groups, so that no labeling mistakes were committed. For each comment, the annotation answers whether the comment is hate speech or not.

B. Hate Speech Identification

To identify hate speech we worked in four steps:
• Pre-processing
• Data Analysis
• Feature Extraction
• Implementation of Machine Learning
1) Pre-processing: Pre-processing means processing the data according to need. We collected data from Facebook, and without pre-processing the data would not perform well, so data pre-processing is vital for our work. The data had several kinds of issues. In Facebook comments people use different types of emoji, and our machine learning-based classification cannot work with emoji, so we removed the emoji from the comments manually. People also make various spelling mistakes while commenting on Facebook, and we tried to correct the spelling. Negation handling is an important task when working with text data, so we also handled negation. Performing these operations prepared our data for the next steps and completed the pre-processing.

2) Data Analysis: Data analysis is used to gain knowledge about the data. Every dataset is significant, with its own patterns and values. For example, spam data and ham data vary in text length; but when we analyzed our dataset, we found that text length has no significance for categorizing the data.

Figure 2 shows a histogram of the text length of hate speech. Text lengths of 25-40 contain the most hate speech; the length varies, but at those points the number of hate speeches is maximal. There is a decent number of hate speeches between text lengths of forty-one and ninety-five, while the amount of hate speech at a text length of one hundred is surprisingly low. The maximum text length of a hate speech is two hundred and sixty, but most hate speech lengths lie roughly between ten and a hundred.

Fig. 2: Text length of hate speech

Fig. 3: Text length of non-hate speech

Figure 3 shows the histogram for neutral speech. The maximum number of neutral speeches occurs at a text length of ten, which is very natural because of common, popular sentences like "nice picture", "beautiful", "good morning", "all the best" and "happy birthday" (we give the examples in English, but the text lengths are almost the same in Bangla). Comparing the two histograms, we found that neutral speech tends to be shorter; most neutral speech lengths lie between one and two hundred.

Table 1: Dataset distribution

Number of Sentences  Number of Hate Speeches  Number of Normal Speeches
1339                 665                      674

We collected more than 5,000 items to perform the task, most of them neutral comments. To keep a balance between the classes, we therefore built a dataset of 1339 items. As Table 1 shows, our dataset contains 665 hate speeches and 674 neutral speeches. In the hate speeches people were spreading hatred; on the contrary, in the neutral speeches people were congratulating each other on different occasions and giving each other suggestions on different topics.
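The text-length analysis behind Figures 2 and 3 is straightforward to reproduce. A minimal sketch, assuming the labeled comments sit in a two-column CSV file; the file and column names are hypothetical, and the plotting call needs matplotlib installed:

```python
import pandas as pd

df = pd.read_csv("bangla_hate_speech.csv")     # assumed columns: text, label (1 = hate)
df["length"] = df["text"].str.len()

# Per-class length summaries, mirroring the histograms of Figs. 2 and 3.
print(df.groupby("label")["length"].describe())
df.hist(column="length", by="label", bins=30)  # two histograms, one per class
```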
3) Feature Extraction: We extracted the features with a count vectorizer and a term frequency-inverse document frequency (TF-IDF) vectorizer. The count vectorizer tokenizes the text, creates a vocabulary of known words, and encodes new documents using that vocabulary. The TF-IDF weight of term i in document j is

  w(i,j) = tf(i,j) × log(N / df(i))   (1)

where tf(i,j) is the number of occurrences of i in j, df(i) is the number of documents containing i, and N is the total number of documents. Term frequency is the number of times a word appears in a document divided by the total number of words in that document; inverse document frequency is computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the specific term appears.

1. Start
2. Count vectorizer
3. Term frequency-inverse document frequency
4. Extracted features

Fig. 4: Flow diagram of feature extraction

4) Implementation of Machine Learning: We implemented supervised machine learning to perform our task. Supervised learning means supervising the data: we collected data from Facebook, labeled it according to its category, and then trained our model with the labeled data, so the model learned which comments are hate speech and which are not. For classification we used two algorithms, Naïve Bayes and Support Vector Machine. We divided our data into a training set and a testing set, fed the data to both algorithms, and obtained accuracy, precision and recall:

  Precision = TP / (TP + FP)   (2)
  Recall = TP / (TP + FN)   (3)
  Accuracy = (TP + TN) / (TP + TN + FP + FN)   (4)

Precision is the number of true positives divided by the sum of true positives and false positives; recall is the number of true positives divided by the sum of true positives and false negatives; accuracy is the sum of true positives and true negatives divided by the sum of true positives, true negatives, false positives and false negatives.
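A minimal scikit-learn sketch of this pipeline: TF-IDF features fed to both classifiers, with per-class precision, recall and F1 as in Tables 2 and 3 below. The toy comments, the 80/20 split and the parameter choices are illustrative assumptions, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Placeholder data: in practice, the pre-processed Bangla comments and labels.
texts = ["comment one", "comment two", "comment three", "comment four"]
labels = [1, 0, 1, 0]                  # 1 = hate speech, 0 = neutral

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)   # fit vocabulary on training data only
X_test_vec = vectorizer.transform(X_test)

for model in (LinearSVC(), MultinomialNB()):
    model.fit(X_train_vec, y_train)
    print(type(model).__name__)
    print(classification_report(y_test, model.predict(X_test_vec)))
```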
IV. Results

In machine learning, performance is reported from the confusion matrix up to F-measures. After applying SVM we obtained an accuracy of 70%, with the per-class results of Table 2; after applying Naïve Bayes we obtained an accuracy of 72%, with the per-class results of Table 3.

Table 2: Test results with SVM

Class              Precision  Recall  F1-score
না (not hate)      0.73       0.70    0.71
হ্যাঁ (hate)       0.68       0.70    0.69

Table 3: Test results with Naïve Bayes

Class              Precision  Recall  F1-score
না (not hate)      0.75       0.71    0.73
হ্যাঁ (hate)       0.70       0.74    0.72

V. Conclusion

In our work we made a new dataset in the Bangla language, divided it into two groups and labeled them. There were anomalies in the dataset, which we processed to remove. We then extracted features from the dataset for use in our model and applied the Support Vector Machine and Naïve Bayes machine learning algorithms. Both algorithms performed well on our dataset, and we reported precision, recall and F1-score for each. Naïve Bayes gave us an accuracy of 72%.

References

[1] Axel Rodríguez, Carlos Argueta and Yi-Ling Chen, "Automatic Detection of Hate Speech on Facebook Using Sentiment and Emotion Analysis," International Conference on Artificial Intelligence in Information and Communication, pp. 169-174, 2019.
[2] Ilham Maulana Ahmad Niam, Budhi Irawan, Casi Setianingsih and Bagas Prakoso Putra, "Hate Speech Detection Using Latent Semantic Analysis (LSA) Method Based on Image," International Conference on Control, Electronics, Renewable Energy and Communications, pp. 166-171, 2018.
[3] Ricardo Martins, Marco Gomes, José João Almeida, Paulo Novais and Pedro Henriques, "Hate speech classification in social media using emotional analysis," 7th Brazilian Conference on Intelligent Systems, pp. 61-66, 2018.
[4] Nur Indah Pratiwi, Indra Budi and Ika Alfina, "Hate Speech Detection on Indonesian Instagram Comments using FastText Approach," International Conference on Advanced Computer Science and Information Systems, pp. 447-450, 2018.
[5] Arum Sucia Saksesi, Muhammad Nasrun and Casi Setianingsih, "Analysis Text of Hate Speech Detection Using Recurrent Neural Network," International Conference on Control, Electronics, Renewable Energy and Communications, pp. 242-248, 2018.
[6] N. D. Gitari, Z. Zuping, H. Damien and J. Long, "A Lexicon-based Approach for Hate Speech Detection," Int. J. Multimed. Ubiquitous Eng., vol. 10, no. 4, pp. 215-230, 2015.
[7] Erryan Sazany and Indra Budi, "Deep Learning-Based Implementation of Hate Speech Identification on Texts in Indonesian: Preliminary Study," International Conference on Applied Information Technology and Innovation, pp. 114-117, 2018.
[8] Trisna Febriana and Arif Budiarto, "Twitter Dataset for Hate Speech and Cyberbullying Detection in Indonesian Language," International Conference on Information Management and Technology, pp. 379-382, 2019.
USING SOCIAL NETWORKS TO DETECT MALICIOUS BANGLA TEXT CONTENT

NADIM AHMED
ID: 152-15-5869

This report is presented in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering.

Supervised By: Ms Subhenur Latif, Assistant Professor, Department of CSE, Daffodil International University
Co-Supervised By: Dr. Sheak Rashed Haider Noori, Associate Professor and Associate Head, Department of CSE, Daffodil International University

DAFFODIL INTERNATIONAL UNIVERSITY
DHAKA, BANGLADESH
DECEMBER 2018

APPROVAL

This project, titled "Using Social Network to Detect and Prevent Malicious Bangla Text Content" and submitted by Nadim Ahmed to the Department of Computer Science and Engineering, Daffodil International University, has been accepted as satisfactory for the partial fulfillment of the requirements for the degree of B.Sc. in Computer Science and Engineering, and approved as to its style and contents. The presentation was held on 11th December 2018.

BOARD OF EXAMINERS

Dr. Syed Akhter Hossain (Chairman), Professor and Head, Department of CSE, Faculty of Science & Information Technology, Daffodil International University
Dr. Sheak Rashed Haider Noori (Internal Examiner), Associate Professor and Associate Head, Department of CSE, Faculty of Science & Information Technology, Daffodil International University
Md. Zahid Hasan (Internal Examiner), Assistant Professor, Department of CSE, Faculty of Science & Information Technology, Daffodil International University
Dr. Mohammad Shorif Uddin (External Examiner), Professor, Department of Computer Science and Engineering, Jahangirnagar University

DECLARATION

I hereby declare that this project has been done by me under the supervision of Ms Subhenur Latif, Assistant Professor, Department of CSE, Daffodil International University. I also declare that neither this project nor any part of it has been submitted elsewhere for the award of any degree or diploma.

Supervised by: Ms Subhenur Latif, Assistant Professor, Department of CSE, Daffodil International University
Co-supervised by: Dr. Sheak Rashed Haider Noori, Associate Professor and Associate Head, Department of CSE, Daffodil International University
Submitted by: Nadim Ahmed, ID: 152-15-5869, Department of CSE, Daffodil International University

ACKNOWLEDGEMENT

First, I express my heartiest thanks and gratefulness to almighty God, whose divine blessing made it possible for me to complete the final year project successfully. I am really grateful to, and wish to express my profound indebtedness to, Ms. Subhenur Latif, Assistant Professor, Department of CSE, Daffodil International University, Dhaka. The deep knowledge and keen interest of my supervisor in the field of text mining helped to carry out this project. Her endless patience, scholarly guidance, continual encouragement, constant and energetic supervision, constructive criticism, valuable advice, and reading and correcting many inferior drafts at all stages have made it possible to complete this project. I would also like to express my heartiest gratitude to the Head of the Department of CSE for his kind help in finishing my project, and to the other faculty members and the staff of the CSE department of Daffodil International University. I would like to thank all my course mates at Daffodil International University who took part in discussions while completing the course work. Finally, I must
acknowledge with due respect the constant support and patience of my parents.

ABSTRACT

Social spam has increased rapidly over recent years, and Facebook and YouTube contain the most spam content compared with other social media networks. Spam content such as text messages or comments has a large negative effect on the normal user's experience of social media. In this project I used the Naïve Bayes classifier, a supervised machine learning algorithm, to detect Bangla spam text content. Much spam detection work has been done on English, but I have worked on the Bangla language, which is used by most Bangladeshi users. My analysis first collects Bangla text data from YouTube, Facebook and other social media; I then applied a number of classifiers, such as Gaussian Naïve Bayes, Multinomial Naïve Bayes and Bernoulli Naïve Bayes. At the end, I verified and compared the detectability of Bangla spam text content through different experiments and evaluations. The experiments showed that the Multinomial Naïve Bayes (MNB) algorithm had the best accuracy compared to the other machine learning algorithms, and my research achieved 81.44% accuracy in detecting spam text content in the Bangla language.

TABLE OF CONTENTS

Board of Examiners
Declaration
Acknowledgements
Abstract
List of Figures
List of Tables

CHAPTER 1: INTRODUCTION
  1.1 Introduction
  1.2 Motivation
  1.3 Rationale of the Study
  1.4 Research Questions
  1.5 Expected Output
  1.6 Report Layout

CHAPTER 2: BACKGROUND
  2.1 Introduction
  2.2 Related Works
  2.3 Research Summary
  2.4 Scope of the Problem
  2.5 Challenges

CHAPTER 3: RESEARCH METHODOLOGY
  3.1 Introduction
  3.2 Research Subject and Instrumentation
    3.2.1 Research Subject
    3.2.2 Instrument
  3.3 Data Collection Procedure
  3.4 Methodology and Data Analysis
    3.4.1 Pre-Processing
    3.4.2 Feature Extraction
    3.4.3 Training
    3.4.4 Algorithm
  3.5 Implementation Requirements

CHAPTER 4: EXPERIMENTAL RESULTS AND DISCUSSION
  4.1 Introduction
  4.2 Experimental Results
  4.3 Summary

CHAPTER 5: CONCLUSION
  5.1 Summary of the Study
  5.2 Conclusion
  5.3 Recommendation
  5.4 Implication for Further Research
  5.5 Future Works

REFERENCES

LIST OF FIGURES
Figure 3.1 Raw data
Figure 3.2 The detailed procedure of text classification as a block diagram
Figure 3.3 Naïve Bayes Algorithm (Multinomial Model): Training and Testing
Figure 4.1 Result of the confusion matrix
Figure 4.2.1 Pie chart for the accuracy rate

LIST OF TABLES
Table 3.1 Text category based on interpretation
Table 3.3.1 Bangla Spam Dataset
Table 4.2.1 Confusion Matrix
Table 4.2.2 Precision, Recall, F-Score, Error and AUC values

CHAPTER 1
INTRODUCTION

1.1 Introduction

Social media plays an important role in communication in digital Bangladesh. Social network sites are represented chiefly by Facebook, Twitter, YouTube and many others. At present, people spend a lot of time on social media: celebrities, public figures and business icons create social pages for interacting with online users and their fans. But a great deal of malicious behavior on social media causes trouble for users. Social networks (SNs) have become an important part of users' social identity.
The initial intent of SNs was to facilitate connection and sharing, so people are heavily dependent on online interactions for communication. The increase of content on social media is responsible for the increase of social spam. Unfortunately, this wealth of information, and the ease with which one can reach many users, has also attracted the interest of malicious parties, and social networking sites do not provide strong authentication mechanisms to identify spammers. Experts estimate that as many as 40% of social network accounts are used for spam [1]. So, at the beginning, I and my team collected many samples of spam and ham from real-life use of social media in order to create the training dataset. A detailed filtering process of the Naïve Bayes classification is then explained, and I applied this method to my samples for testing at the end of the project. These samples were tested throughout the project using the other methods I will discuss. Although several machine learning algorithms have been employed, probably owing to their simplicity and accuracy, I implemented the Multinomial Naïve Bayes (MNB) algorithm, with an accuracy rate of 81.44%.

1.2 Motivation

Social networks have lifted the communication system to the utmost level. People spend most of their time on Facebook, Twitter, YouTube, etc. rather than on search engines. In Bangladesh, and among people who speak Bangla, most prefer using Bangla for networking and developing communication. Content sharing, content contribution, comments and other feedback systems are used by business entities and public figures to interact with online users; they set up their social pages to enhance direct interaction. At the same time, however, social media networks become susceptible to different types of unwanted and malicious Bangla spam in text content. Spammers destroy the network environment, and this degrades the user's experience of the network. There is a crucial need in society and industry to save the image of social media and maintain a healthy environment there. That is why I decided to work on spam Bangla text content. I have gone through many spam detection research papers written for English; work on the Bangla language is very scarce. So I decided to work on Bangla spam text. In this demo, I propose a scalable, online, social media-based Bangla spam content detection system for social network security.

1.3 Rationale of the Study

Due to textual complexity, detecting spam text in the Bangla language is very hard. Extensive research has been conducted on the spam analysis of English texts [18], with promising results. This became possible after the advent of the World Wide Web, which made a lot of textual data instantly available in electronic media; before this period, it was hard to develop training data to test theories and models. However, spam analysis of Bangla texts is still a new area, and there is scope for improvement. There are more than 160 million native Bangla speakers, and huge amounts of Bangla text are generated online.
Most research on Bangla texts is performed using news corpora and blogs, which are extracted by scraping websites. Another source of data is social media, where the opinionated texts are shorter but informal, full of grammatical and spelling errors, and written in mixed languages and scripts. Such Bangla text contains malicious links that misguide users to fraud and phishing websites. I collected almost two thousand Bangla sentences, consisting of both positive and negative content, and categorized them into two polarities: spam, denoted by 1, and ham, denoted by 0. For example, "๏ฟฝเฆฎเฆกเฆพเฆฎ เฆเฆพเง๏ฟฝเฆฐ เฆธเฆพเงเฆฅ เฆฏเฆพ เฆเฆฐเฆฒ, ๏ฟฝเฆฆเฆเงเฆจ เฆฟเฆญเฆฟเฆกเฆ เฆธเฆน" and "เฆฟเฆญเฆฟเฆกเฆ ๏ฟฝเฆ เฆเฆเฆพ เฆเฆเฆพ ๏ฟฝเฆฆเฆเงเฆฌเฆจ เฆฟเฆ๏ฟฝ" are two spam sentences containing links that redirect users to false news or websites. Likewise, I collected ham sentences such as "เฆญเฆพเฆฒ เฆนเฆเงเฆค เฆชเงเฆธเฆพ เฆฒเฆพเงเฆ เฆจเฆพ" and "เฆเฆฐเฆพเฆ ๏ฟฝเฆฆเงเฆถเฆฐ เฆญเฆฟเฆฌเฆท๏ฟฝเง" to train the datasets. I then applied the Multinomial Naïve Bayes classifier for complex pattern recognition and function approximation.

1.4 Research Questions

Question 1: Does every Bangla sentence have a distinct exposition, e.g. positive or negative?
Question 2: Does every negative interpretation of a sentence indicate spam?
Question 3: Do spam sentences contain specific words?
Question 4: Can we identify spam in every newly generated informal Bangla sentence?

1.5 Expected Outcome

As no work has been done to detect malicious Bangla text content, I decided to pursue it, and the expected output is very satisfactory. First, I train a model on whether each sentence is spam or ham using the MNB algorithm; after the analysis, it detects whether a new sentence is spam or ham.

1.6 Report Layout

The paper is organized into five chapters. Following this introduction, Chapter 2 provides brief background on the spam detection field from an information systems perspective, a survey of text analysis published in different information system journals, and the scope of the problem and its challenges. A detailed description of the research methodology, including the procedures of data collection, pre-processing and feature extraction, is provided in Chapter 3. Chapter 4 presents the experimental results of the applied methodology with a brief description of the analysis. Finally, Chapter 5 gives a summary of the empirical research, important limitations of the approach, and the implications for further study.

CHAPTER 2
BACKGROUND

2.1 Introduction

Extensive research has been conducted on the spam analysis of English texts, with promising results. This became possible right after the advent of the World Wide Web, which made a lot of instant textual data available in electronic media; before this age, it was very tough to develop training data to test models and theories. However, spam analysis of Bangla texts is still a new area, and there is scope for improvement. There are more than 160 million native Bangla speakers, and a lot of Bangla text is generated online. As a result, it would be easier to check the polarity, that is, how positive or negative a sentence is.
After analyzing the text pattern, each sentence can be categorized according to the polarity it belongs to. In my research, I have mainly investigated how to detect whether a sentence is spam or ham in a given Bengali text collected from social media such as YouTube, Facebook, and Facebook groups like DSU and Murad Takla (মুরাদ টাকলা). I and my team collected many samples of spam and ham from real-life use of these media in order to create the training dataset. A detailed filtering process of the Naïve Bayes classification is then explained, and I applied this method to my samples for testing at the end of the project. These samples were tested throughout the project using the other methods I will discuss. Although several machine learning algorithms have been employed, probably owing to their simplicity and accuracy, I implemented the Multinomial Naïve Bayes (MNB) algorithm.

2.2 Related Works

My work is inspired by Chen Liu and Genying Wang's work [2]. In [2], they present an ELM-based spam account detection model for social networks; in my work, I implement the Multinomial Naïve Bayes classifier. They collected messages crawled from Sina Weibo and selected three categories of features, extracted from message contents, social interactions and user profile properties, which they applied to the ELM-based spam account detection algorithm. In my work, I categorize the features into two classes, spam and ham, and I chose the Multinomial Naïve Bayes classifier to get optimal outputs.

In [3], Wafa Wali et al. propose a model to measure sentence similarity based on semantic and syntactic-semantic knowledge. Several methods have been proposed to measure sentence similarity based on syntactic and/or semantic knowledge, but most natural language processing work on sentence or word similarity has been done on English; the few works on Bangla are based on Bangla blogs [4] and newspapers [5].

I chose the Multinomial Naïve Bayes classification algorithm because the Naïve Bayes classifier is very efficient: it is less computationally intensive (in both CPU and memory) and requires a small amount of training data. Moreover, the training time with Naïve Bayes is significantly smaller than with alternative methods [6]. It is one of the most basic text classification techniques, with various applications in email spam detection, personal email sorting, document categorization, sexually explicit content detection, language detection and sentiment detection [6].

In [7], Tiago et al. proposed and then evaluated a text processing approach for semantic analysis and context detection. They evaluated their approach on a public, real and non-encoded dataset along with several established machine learning methods that can enhance instant messaging and SMS spam filtering. Naïve Bayes (NB) classifiers are particularly popular in commercial and open-source spam filters due to their simplicity, which makes them easy to implement, their accuracy, and their linear computational complexity, which is comparable to that of more complex algorithms used in spam filtering [8].
In their papers, Sahami et al. [9] used a Naïve Bayes classifier with a multivariate Bernoulli model, a form of NB that relies on Boolean attributes. On the other hand, Pantel and Lin [10] adopted the multinomial form of NB, which takes term frequencies into account. It has been shown experimentally in [11] that Multinomial Naïve Bayes generally performs better than multivariate Bernoulli NB in text classification.

Vangelis et al. [12] designed an experiment that emulates incremental training of personalized spam filters. They made their non-encoded datasets publicly available; these datasets are more realistic than previous benchmarks and emulate the varying proportion of ham and spam messages that users receive over time.

2.3 Research Summary

Research is an organized way of finding solutions to existing problems, or to problems nobody has worked on before; it can solve a new problem or expand past work in a particular field. My research is on detecting spam Bengali text, which is associated with NLP (Natural Language Processing). AI (Artificial Intelligence) is challenging human beings to exceed human performance. A lot of work has already been done to detect spam in texts or documents in various languages, and I have studied many papers related to detecting spam in text, lyrics, sentences, etc. They used different methods, and among them I chose the Multinomial Naïve Bayes classification algorithm for spam text detection. For that reason, I collected many samples of spam and ham from real-life use of social media to create the training dataset. A detailed filtering process of the Naïve Bayes classification will be explained, and I will apply this method to my samples for testing at the end of the project. Although several machine learning algorithms have been employed, probably due to their simplicity and accuracy, I implemented the Multinomial Naïve Bayes (MNB) algorithm.

2.4 Scope of the Problem

Detecting spam in text is fundamentally content-based classification, which extends concepts from Natural Language Processing (NLP) and Machine Learning (ML). The study of spam detection is very necessary: the increasing number of users on social networks, along with the trust they inherently place in their virtual profiles, creates a propitious environment for spammers. In fact, reports clearly indicate that the volume of spam on social networks is increasing dramatically year by year. It represents a challenging problem for traditional filtering methods nowadays, since such messages or links are usually fairly short and rife with slang, idioms, symbols and acronyms, which make even tokenization a difficult task. Improved accuracy and consistency in text mining techniques can help overcome these problems. Currently, as the next wave of knowledge discovery, text analysis is achieving high commercial value.
In this research, I will analyze Bengali text from Facebook statuses, YouTube comments, etc. to find the spam association of each sentence, positive or negative. After identifying the polarity of each sentence, I will then try to find the spam text content of each sentence.

2.5 Challenges

Detecting spam or ham in Bangla text content poses huge challenges. In a sentence such as "เฆคเฆพเงเฆฐ ๏ฟฝเฆฆเฆเงเฆค ๏ฟฝเฆคเฆพ เฆฌ๏ฟฝเฆพเง๏ฟฝเฆฐ เฆฌเฆพ๏ฟฝเฆพเฆฐ เฆฎเงเฆคเฆพ ๏ฟฝเฆฆเฆเฆพเง, ๏ฟฝเฆธ เฆจเฆพเฆฟเฆ เฆเฆฌเฆพเฆฐ เฆฟเฆนเงเฆฐเฆพ เฆเฆฒเฆฎ", the first clause is used as abuse: it indicates negativity, and the system will automatically detect it as spam (1). The second clause, on the other hand, is just a simple sentence that the system detects as ham (0). Although the whole passage indicates spam behavior, it is very complex to recognize the pattern of each word and sentence. Misspellings and stop characters like ",", "!", "?", ".", "~", "||" and "।" degrade the processing, giving a low accuracy rate. The Bangla language has a huge vocabulary, and words having different meanings and various uses make text mining more complex. Informal words like "เฆเงเฆพ๏ฟฝเฆ เฆฅเงเฆเฆเฆเฆเฆเฆเฆ" and "เฆเฆซเฆซเฆซเฆซเฆซ" provide another challenge for modeling Bangla text. Because the research is based on the generated expression of sentences, it is possible to have the same expression with different polarity.

CHAPTER 3
RESEARCH METHODOLOGY

3.1 Introduction

This chapter outlines the research methods that were carried out to detect spam in a given Bengali text. It provides information about how data can be processed by applying certain techniques to sort out spam. The instrument used to extract spam Bengali text from Facebook statuses and other sources is also described, along with the procedures followed to carry out this data extraction and the methods used to analyze the textual data. Lastly, the implementation and requirements followed in the process are discussed.

3.2 Research Subject and Instrumentation

3.2.1 Research Subject

The main goal of this research is to detect spam in a given Bengali text, in order to come up with the spam association of the text, using the Multinomial Naïve Bayes classification algorithm. For finding the spam of a sentence, text mining analysis can make it very specific. A data set was collected and categorized into two sections, spam and ham: the spam column contains spam sentences and words, whereas the ham column contains sentences with a positive meaning. We then used 80% of the dataset for training and the rest for testing. Table 3.1 shows the categorization of spam and ham data collected from the statuses and comments of various social applications; the polarity of different sentences is generated according to this categorization.

Table 3.1 Text category based on interpretation

Spam (1)                                    Ham (0)
(example Bangla sentences for each category)
<s>เฆฟเฆ๏ฟฝ ๏ฟฝเฆฌเฆฟเฆฐเงเง เฆเงเฆฒเฆพ เฆ
เฆชเง เฆฟเฆฌ๏ฟฝเฆพเงเฆธเฆฐ ๏ฟฝเฆเฆพเฆชเฆจ เฆฟเฆญเฆฟเฆกเฆ เฆเฆฅเฆพ ๏ฟฝเฆฒเฆพ ๏ฟฝเฆเฆค เฆเฆพเฆเงเฆชเฆฐ เฆฎเฆเฆพ เฆชเฆพเฆเฆฟเฆ ๏ฟฝเฆฆเงเฆฒเฆคเฆฟเฆฆเงเฆพเฆฐ เฆเฆฎ๏ฟฝเฆฐ ๏ฟฝเฆเฆพเฆชเฆจ เฆฟเฆญเฆฟเฆกเฆ เฆเฆพเฆเงเฆฒเฆฐ เฆฟเฆคเฆจ เฆจ๏ฟฝเฆฐ เฆฌเฆพ๏ฟฝเฆพ เฆญเฆพเฆฒเฆ ๏ฟฝเฆธเฆฟเฆฒเง๏ฟฝ๏ฟฝเฆ เฆนเฆเงเฆคเงเฆเฆจเฅค ๏ฟฝเฆพเงเฆฎเฆฐ เฆฏเงเฆฌเฆคเง ๏ฟฝเฆฎเงเงเฆฐเฆพ ๏ฟฝเฆฆเฆเงเฆจ เฆฟเฆ เฆเงเฆฐ เฅค ๏ฟฝเฆเฆพเฆชเฆจ เฆฟเฆญเฆฟเฆกเฆ เฆซเฆพเฆธ เฆฟเฆถเฆฟ๏ฟฝเฆค เฆจเง เฆธเงเฆฟเฆถเฆฟ๏ฟฝเฆค เฆนเฆ ๏ฟฝเฆเฆพเฆฐ เฆฏเฆพเฆฐ เฆฎเง๏ฟฝเงเฆ เฆคเฆพเฆฐ เฆฟเฆถ๏ฟฝเฆ เฆ เฆเฆพ๏ฟฝเงเฆฐ ๏ฟฝเฆเฆพเฆชเฆจ เฆฟเฆญเฆฟเฆกเฆ ๏ฟฝเฆฆเฆเงเฆจ เฆฌเฆพเฆเฆฒเฆพ เฆเฆฎเฆพเฆฐ เฆ
เฆนเฆเฆเฆพเฆฐ Whenever a Bangla Sentence is used as input, the system would possibly able to determine whether it is Spam data(1) or Ham data(0) behind the textual content based on the interpretation of sentence pattern. In this experimental study, I have introduced the feature extraction method for detecting malicious Bangla text content. ยฉDaffodil International University 3.2.2 Instrument For research purposes, I have collected around 2000 Bengali sentences from different sources like Facebook status, YouTube comments, textbooks, newspaper, direct speech etc. My work is to detect spam from a sentence by applying text classification algorithm. Some well-performed algorithm like ELM, keyword spotting method, support vector machines(SVM), hidden Markov model etc. are used in case of text analysis. Therefore these algorithms give a very high accuracy of almost 90%. In my research, I have used โMultinomial Naรฏve Bayesโ classification algorithm to find the polarity of my test sentences. 3.3 Data Collection Procedure Even though many datasets in the different language are available in the different databank for research purposes, in terms of Bangla language it is rare. Therefore, I have chosen to build my own dataset from various social media like Facebook, YouTube and named it Bangla Spam Dataset as presented in table 3.3.1 Table 3.3.1 Bangla Spam Dataset Total Instance 1965 Spam 1319 Ham 646 In order to come up with accurate and objective findings, A good research mainly relied on both primary and secondary data. Primary dataโs are the raw data which is mainly used ยฉDaffodil International University for the original purpose. Those data contained many stop-words like punctuation and special symbols which is directly taken from the field by interviews and questionnaires. I removed those symbols and punctuation to get the secondary datasets. Secondary data is collected for purposes other than the original use. The research has been carried out using secondary data. The main intention was to create a properly trained data set consists of Bengali spam keywords. Figure 3.1 shows the collection of our raw data which I have collected from different sites like Facebook, YouTube, Newspaper, Blogs etc. Figure 3.1 Raw data 3.4 Methodology and Data Analysis Prior to applying categorization techniques to Bangla text with the classifiers, it is inevitable to prepare proper datasets for testing and training. At the same times, pre-processing of Bangla text also required before trainings and construction of model for ยฉDaffodil International University successful text categorization. Figure 3.2 illustrates the overall system of Bangla text classification process. Fig 3.2 Overall system of text classification Figure 3.2 The detail procedures of text classification in terms of a block diagram. 3.4.1 Pre-Processing A proper representation of words within text documents is important to acquire good Classification performance. To Train my model, it</s>
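As a concrete illustration of the dataset handling described above, the sketch below loads the labeled data and performs the 80/20 split. It is a minimal sketch under stated assumptions: the CSV file name is hypothetical (the thesis dataset is not distributed), and the "Text"/"Status" column names follow the two-column format described in the pre-processing section.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file; columns: "Text" (sentence) and "Status" (1 = spam, 0 = ham).
df = pd.read_csv("bangla_spam_dataset.csv")

# 80% training, 20% testing, as described in the text.
train_text, test_text, train_label, test_label = train_test_split(
    df["Text"], df["Status"], test_size=0.20, random_state=42
)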
3.4.1 Pre-Processing
A proper representation of the words within text documents is important to acquire good classification performance. Training the model requires tagged data. I formatted the dataset into two columns: "Text", containing the actual text data, and "Status", containing the value 0 or 1. Spam text is labeled 1 and non-spam text 0. It is recommended to apply a classification algorithm to a cleaned corpus rather than a noisy one. A noisy corpus includes insignificant elements within the text such as numerical values, punctuation marks, and emoticons. Removing these entities from the corpus increases accuracy, because it reduces the size of the space of possible features. For example, emoticons such as :-P and :-D are important for sentiment analysis, but may not be important for other classification tasks. After eliminating all punctuation marks, numerical values, and emoticons, we have a clean dataset to fit to a classification algorithm.

3.4.2 Feature Extraction
After the pre-processing phase, features must be extracted from these words before applying the algorithm. Different statistical approaches can be used to extract features from the text corpus, such as a count vectorizer or a TF-IDF vectorizer. A count vectorizer simply counts the frequency of each word. We used a TF-IDF (term frequency-inverse document frequency) vectorizer to extract features from the documents, because it provides a way to score the importance of words based on how frequently they appear across multiple documents:
• If a word appears frequently within a document, give that word a higher score.
• If a word appears frequently across multiple documents, it is not a unique identifier, so give it a lower score.
In this way, common words such as "আমি", "তুমি", "ও", and "এবং" that appear across many documents are scaled down, while words that appear frequently within a single document are scaled up. This leads toward better classification performance. The TF-IDF weight for a term i is

w_i = tf_i × log(N / df_i),

where N is the total number of documents and df_i is the document frequency of term i.

3.4.3 Training
With the dataset obtained after pre-processing and feature extraction, the data must be split into a training set and a test set; I split the dataset 80% for training and 20% for testing. Choosing an appropriate algorithm is one of the most crucial points. We chose the Naïve Bayes algorithm to train the model, as it is widely used for text classification. When dealing with text, it is very common to treat each unique word as a feature, and since a typical person's vocabulary runs to many thousands of words, this makes for a huge number of features. The simplicity of the algorithm and the independent-features assumption make Naïve Bayes a strong performer for classifying texts. The scikit-learn library contains three types of Naïve Bayes model: Gaussian Naïve Bayes, Bernoulli Naïve Bayes, and Multinomial Naïve Bayes.
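Before turning to the choice of Naïve Bayes variant, the cleaning and TF-IDF feature-extraction steps of Sections 3.4.1 and 3.4.2 can be sketched as follows, assuming scikit-learn; the two sentences are toy examples, not items from the thesis corpus.

import re
from sklearn.feature_extraction.text import TfidfVectorizer

def clean_text(text):
    # Keep Bangla letters (Unicode block U+0980-U+09FF) and whitespace;
    # drop digits, punctuation marks, emoticons, and other symbols.
    return re.sub(r"[^\u0980-\u09FF\s]", " ", text)

docs = ["ভিডিওটা দেখবেন !!", "সে ভালো ছেলে :-D"]  # toy examples
cleaned = [clean_text(d) for d in docs]

# TF-IDF scores a word high when it is frequent within one document
# but rare across the documents.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cleaned)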
Which variant of Naïve Bayes should be applied depends on the data. Multinomial Naïve Bayes treats features as event probabilities. It has been shown experimentally in [11] that Multinomial Naïve Bayes generally performs better than multivariate Bernoulli NB in text classification, and Multinomial NB performs even better if term frequencies are replaced by Boolean attributes [16].

3.4.4 Algorithm
The Naïve Bayes classifier works on the basis of Bayes' theorem. Multinomial and Bernoulli distributions are popular for document classification, including spam filtering. In my case, Multinomial NB did better than Bernoulli. The Multinomial NB training and testing procedure is shown in Figure 3.3.

Figure 3.3: Naïve Bayes Algorithm (Multinomial Model): Training and Testing [17].

3.5 Implementation Requirement
We used the Python language for the implementation, with Anaconda as the platform. The tools are listed below:
i. Anaconda
ii. Python
iii. MS Excel
iv. Notepad++
v. Socialfy (Facebook comment extractor tool)
vi. ytcomments (YouTube comment extractor tool)
For input insertion, we used the Avro keyboard.

CHAPTER 4
EXPERIMENTAL RESULTS AND DISCUSSION

4.1 Introduction
This is an experimental research study. In this chapter, the results of detecting malicious spam text content in the Bangla language are presented according to polarity. In total, I collected 1965 sentences from Facebook statuses, YouTube comments, Bengali blogs, newspapers, and textbooks. The experiment was carried out using Naïve Bayes (NB), including pre-processing, feature extraction, and finally text classification. The sentences were identified as spam or ham according to their pattern polarity, and the results, including the total accuracy of the experiment, are discussed in detail. After evaluating the polarity results, we arrived at a satisfactory outcome.

4.2 Experimental Results
After training with the training dataset, the model is tested with the test set, which is unknown to it. Multinomial Naïve Bayes yields an overall accuracy of 81.44%. The confusion matrix is shown in Table 4.2.1.

Table 4.2.1 Confusion matrix
                   Predicted Non-Spam   Predicted Spam   Total
Actual Non-Spam    194                  11               205
Actual Spam        44                   47               91
Total              238                  58               296

The test results are pictured in Figure 4.1. To finalize the total accuracy of the experiment, I computed individual accuracies for test1, test2, test3, and test4, and from their individual outcomes derived the total accuracy of 81.44%.

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where
TP = True Positive (case was positive and predicted positive)
TN = True Negative (case was negative and predicted negative)
FP = False Positive (case was negative but predicted positive)
FN = False Negative (case was positive but predicted negative)

Figure 4.1 Result of confusion matrix (right versus wrong predictions per class)
Figure 4.2.1 Pie chart for accuracy rate.
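A minimal end-to-end sketch of this training and testing procedure, assuming scikit-learn; the four sentences and labels are toy stand-ins for the 1965-sentence corpus, not real data from the thesis.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

texts = ["cheap offer click now", "see you at class tomorrow",
         "win a free prize today", "the lecture was helpful"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0)

vec = TfidfVectorizer()
model = MultinomialNB().fit(vec.fit_transform(X_train), y_train)

pred = model.predict(vec.transform(X_test))
print(confusion_matrix(y_test, pred))  # rows = actual class, columns = predicted
print(accuracy_score(y_test, pred))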
The error, precision, recall, F1 score, and AUC (area under the curve) are shown in Table 4.2.2.

Table 4.2.2 Precision, Recall, F-Score, Error and AUC values
Precision: 0.814
Recall: 0.814
F-Score: 0.814
Error: 18.56%
AUC (Area Under Curve): 0.73
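These values follow from the confusion matrix in Table 4.2.1, with spam as the positive class; the arithmetic below is a sketch of that derivation. The 0.814 precision and recall figures are consistent with class-size-weighted averages over both classes rather than spam-class-only values.

# Counts from Table 4.2.1; spam is the positive class.
TP, TN, FP, FN = 47, 194, 11, 44

accuracy = (TP + TN) / (TP + TN + FP + FN)  # 241/296, about 0.814
error = 1 - accuracy                         # about 0.186

# Per-class precision and recall:
prec_spam, rec_spam = TP / (TP + FP), TP / (TP + FN)  # 0.810, 0.516
prec_ham, rec_ham = TN / (TN + FN), TN / (TN + FP)    # 0.815, 0.946

# Weighting by class sizes (91 spam, 205 ham) reproduces the reported 0.814.
weighted_precision = (91 * prec_spam + 205 * prec_ham) / 296  # about 0.814
weighted_recall = (91 * rec_spam + 205 * rec_ham) / 296       # about 0.814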
4.3 Summary
During the implementation of the system, I noticed that the larger the number of sentences, the higher the recall and precision. Therefore, I believe that enriching the database of Bangla sentences can significantly enhance the results. The experiments showed that a sentence may be spam or it may be ham. For feature extraction and text classification I used the Multinomial Naïve Bayes algorithm, and the experimental result came to 81.44% accuracy.

CHAPTER 5
CONCLUSION

5.1 Summary of the Study
During the last few decades, text classification has received considerable attention because it helps to classify spam data and threats. Hence, much work is being done in this domain to find the finest classifier for text classification. From the results acquired with the pre-processing technique, it is clear that the framework with the Multinomial Naïve Bayes algorithm performs better than the other classifiers considered. After extensive pre-processing, MNB was applied and came out 81.44% effective in classifying malicious Bangla text content.

5.2 Conclusion
Detecting spam in a Bengali sentence is not easy, as people disagree on the exact interpretation of the same sentence. This text classification method helps to detect the interpretation that the majority of people would give. Among the different approaches, I used the Multinomial Naïve Bayes classification algorithm to extract semantic information from a sentence for detecting spam in Bangla text content. The final accuracy was 81.44%.

5.3 Recommendation
In this thesis, I worked with around two thousand sentences, so the corpus does not have a sufficient lexicon. As new data are generated through social media every day, a collection of sentences with new patterns is necessary, and the necessary keywords should be added to the database before testing. When giving input, keep a focus on the spelling of the lexemes; removal of digits, punctuation, and special symbols, along with stemming, is also very important for the best accuracy. In the case of a spelling mistake, the program will fail to detect spam accurately, so the user may get lower accuracy.
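Since the recommendation above stresses stemming alongside symbol removal, the following purely illustrative sketch shows what a crude rule-based suffix stripper for Bangla might look like; the suffix list is an invented sample, not a published Bangla stemmer.

# Illustrative only: a tiny rule-based suffix stripper for Bangla.
# The suffix list is a small invented sample, not a complete stemmer.
SUFFIXES = sorted(["গুলো", "দের", "টা", "রা", "কে"], key=len, reverse=True)

def crude_stem(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

print(crude_stem("ছেলেরা"))  # strips the plural suffix, yielding "ছেলে"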
5.4 Implications for Further Research
The demand for data mining analysts is high in this modern age because of the abundant amount of data in our surroundings. It is high time to work with these sorts of complex data, so that new patterns can be introduced to resolve several critical problems. Spam analysis is one of the fundamental branches of data mining, and the experimental study on malicious text detection carried out here, with a satisfactory outcome, leaves a strong footprint. Work on spam detection in Bangla can have a valuable impact on day-to-day life. In this modernized world, people are very active on social media such as Facebook and YouTube. Business entities set up public pages on social networks and enhance their direct interaction with customers through content sharing, commenting, or other feedback systems, and celebrities, online sellers, and institutional organizations also publish content for direct interaction. Unfortunately, some spammers spoil the environment by posting unethical material or abusive comments, which damages these pages' images, so it is urgent to stay safe from such malicious traps, and prevention of this kind of spam needs to be executed as soon as possible. As little work has been done for the Bangla language, this research can contribute substantially to the field of data science from the Bangladeshi perspective. I will research further into detecting and fighting spam accounts through this process.

5.5 Future Work
i. Achieving higher accuracy by using classifiers in combination.
ii. Developing a technique that can catch sentimental phrases, and a training methodology for those spams.
iii. Multilingual spam email classification.
iv. Enriching the corpus with more words.
v. Adding a stemmer to reduce the size of the corpus and improve model performance.
vi. Detecting and fighting spam accounts.

REFERENCES
[1] Go.proofpoint.com, 2018. [Online]. Available: https://go.proofpoint.com/nexgate-social-media-spam-research-report. [Accessed: 10-Sep-2018].
[2] C. Liu and G. Wang, "Analysis and detection of spam accounts in social networks," in 2016 2nd IEEE International Conference on Computer and Communications (ICCC), 2016.
[3] W. Wali, B. Gargouri and A. Ben Hamadou, "Enhancing the sentence similarity measure by semantic and syntactico-semantic knowledge," Vietnam Journal of Computer Science, vol. 4, no. 1, pp. 51-60, 2016.
[4] M. S. Hossain, I. J. Jui and A. Z. Suzana, "Sentiment Analysis for Bengali Newspaper Headlines," BRAC University, Dhaka, Bangladesh.
[5] Md. M. Rahaman and M. A. A. Mukul, "Trending News Analysis from Online Bangla Newspapers," Shahjalal University of Science & Technology.
[6] Machine Learning Blog & Software Development News, "Machine Learning Tutorial: The Naive Bayes Text Classifier." [Online]. Available: http://blog.datumbox.com/machine-learning-tutorial-the-naive-bayes-text-classifier/.
[7] T. A. Almeida, T. P. Silva, I. Santos and J. M. Hidalgo, "Text normalization and semantic indexing to enhance Instant Messaging and SMS spam filtering," 2016.
[8] I. Androutsopoulos, G. Paliouras and E. Michelakis, "Learning to filter unsolicited commercial e-mail," Technical Report 2004/2, NCSR "Demokritos", 2004.
[9] M. Sahami, S. Dumais, D. Heckerman and E. Horvitz, "A Bayesian approach to filtering junk e-mail," in Learning for Text Categorization: Papers from the AAAI Workshop, pp. 55-62, Madison, Wisconsin, 1998.
[10] P. Pantel and D. Lin, "SpamCop: a spam classification and organization program," in Learning for Text Categorization: Papers from the AAAI Workshop, pp. 95-98, Madison, Wisconsin, 1998.
[11] A. McCallum and K. Nigam, "A comparison of event models for naive Bayes text classification," in AAAI'98 Workshop on Learning for Text Categorization, pp. 41-48, Madison, Wisconsin, 1998.
[12] Z. Jin, "Spam message self-adaptive filtering system based on Naive Bayes and support vector machine," Journal of Computer Applications, vol. 28, no. 3, pp. 714-718, 2008. doi:10.3724/sp.j.1087.2008.00714.
[13] Q. Wei, "Understanding of the naive Bayes classifier in spam filtering," 2018. doi:10.1063/1.5038979.
[14] B. Issac, "Spam Detection Approaches with Case Study Implementation on Spam Corpora," in Cases on ICT Utilization, Practice and Solutions. doi:10.4018/9781609600150.ch012.
[15] K. Schneider, "Techniques for Improving the Performance of Naive Bayes for Text Classification," in Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science, pp. 682-693, 2005. doi:10.1007/978-3-540-30586-6_76.
[16] K.-M. Schneider, "On word frequency information and negative evidence in Naive Bayes text classification," in 4th International Conference on Advances in Natural Language Processing, pp. 474-485, Alicante, Spain, 2004.
[17] D. P. Bhukya and S. Ramachandram, "Decision Tree Induction: An Approach for Data Classification Using AVL-Tree," International Journal of Computer and Electrical Engineering, pp. 660-665, 2010.
CATEGORIZATION AND TRANSLATION OPERATING SYSTEM'S ASSISTANCE IN EXPLICATION OF DIFFERENT BANGLADESHI ACCENTS

European Journal of Computer Science and Information Technology, Vol. 8, No. 3, pp. 31-45, June 2020. Published by ECRTD-UK. Print ISSN: 2054-0957, Online ISSN: 2054-0965.

Nakib Aman Turzo, Lecturer, Department of Computer Science & Engineering, Varendra University, Rajshahi, Bangladesh. Email: nakibaman@gmail.com
Pritom Sarker, B.Sc. in CSE, Department of Computer Science & Engineering, Varendra University, Rajshahi, Bangladesh. Email: me.pritom@gmail.com
Biplob Kumar, B.Sc. in CSE, Department of Computer Science & Engineering, Varendra University, Rajshahi, Bangladesh. Email: kumarbiplob336@gmail.com
Niloy Kumar Shaha, B.Sc.
in CSE, Department of Computer Science & Engineering, Varendra University, Rajshahi, Bangladesh. Email: niloyshaha20@gmail.com

ABSTRACT: The national language of Bangladesh is Bengali, and it is also the official language in frequent use. The focal point of this paper is to categorize and differentiate the West Bengal Bangla and Bangladeshi Bangla accents in a Bengali sentence. We first amassed text from literature files, then converted the sentence text data to numeric data using TF-IDF. After applying PCA in MATLAB, the final dataset was obtained. Our strategy will assist future development of automatic software that detects whether a sentence has been written in West Bengal Bangla or Bangladeshi Bangla and then translates from one form to the other. The differences between the two accents are so small that only a native speaker can identify them distinctly, and no data were previously available for this study. This work shows that languages that seem the same can nevertheless be unique and different in their own ways, depicting the identities of two geographically separated regions. The major output of this work concerns identifying the form of the language frequently used today. Many other studies could be conducted, based on our results, on the effects of Sanskrit and foreign literature.

KEYWORDS: Bangladeshi Bangla, Inverse Data Frequency, Linear SVM, Principal Component Analysis, Python, Term Frequency, West Bangla

INTRODUCTION
Bangladesh's official and national language is Bengali, per the Constitution's third article, and 98% of Bangladeshis are fluent in Bengali as their first language. Bengali dialects can be classified along two dimensions: spoken vs. literary variation, and prestige vs. regional variation. Spoken Bengali exhibits more variation than the written language.
Formal language, used in speeches, news, and announcements, is in Cholit Bhasha. During the standardization of Bengali in the late 19th and early 20th centuries, the cultural elite mostly belonged to regions such as Kolkata, Hooghly, Howrah, and Nadia. In both Bangladesh and West Bengal, the standard today is based on the West Central dialect, the language having been standardized through centuries of media and education, with most speakers fluent in both their socio-geographical variety and the standard dialect used in the media. Dialect differences take three forms: literary language vs. colloquial language, regional dialect vs. standardized dialect, and lexical variation. Dialect names originate from the districts where they are spoken. The standard form does not vary much across the Bengali-speaking areas of South Asia. Regional variation in spoken Bengali constitutes a dialect continuum: speech differences often occur within a distance of a few miles, and religious communities have distinct vocabularies. Bengali Hindus tend to speak Sanskritised Bengali, while Bengali Muslims use Perso-Arabic vocabulary. Western border dialects are spoken in the area known as Manbhumi. There are many more minor dialects as well, including those spoken in the bordering districts of Purnea and Singhbhum and among the tribes of eastern Bangladesh such as the Chakma and Hajong. Bengali's rich literature prior to the 19th century was in rhymed verse. The writing system of modern Bengali developed from an ancient Indian syllabary called Brahmi. Like all Brahmi scripts, Bengali is written from left to right, with characters hanging from a horizontal line, and no distinction is made between upper- and lower-case letters.

LITERATURE REVIEW
Classifiers uncover contrasts in language but not in cognition. Cantonese uses more sortal classifiers than Mandarin: 40% of nouns appear without a classifier, while 18% of Cantonese and 3% of Mandarin nouns take a sortal classifier [1]. A noun phrase in Mandarin or Cantonese may consist of only a categorizer, using semantic measures to supersede its syntactic assignment [2]. Machine translation is a critical part of Natural Language Processing (NLP) for converting one language to another; a translation system comprises a language model, a translation model, and a decoder. A statistical machine translation system was developed to translate English to Hindi, built with software in a Linux environment [3]. Speech- and language-processing systems can be categorized by their use of predefined linguistic information; data-driven systems use machine learning techniques to automatically extract and process relevant units of information. This idea was exploited in the ALISP (Automatic Language Independent Speech Processing) approach, with a particular focus on speech processing [4]. A problem with many speech understanding systems is that context-free grammars and augmented phrase-structure grammars are computationally demanding. Finite-state grammars are efficient but cannot represent the relation of sentence meaning. It was described how language analysis can be tightly coupled
by building up an APSG for the analysis of parts and deriving it automatically; using this strategy, an efficient translation system was built that is fast compared to others [5]. Another study discussed the combination of natural language and speech processing in Phi DM-Dialog and its cost-based scheme of ambiguity resolution; the simultaneous interpretation capability was made possible by an incremental parsing and generation algorithm [6]. Conversion of language is a hard task, and a case study was done on this trade-off, covering the translation of a client's system from a proprietary language into programming languages; various factors affecting the automation level of language conversion were considered [7]. In 1996, the CJK Dictionary Publishing Society began an investigative project into these problems in depth, to build an elaborate simplified-Chinese and traditional-Chinese knowledge base with 100% accuracy, collaborating with Basis Technology to develop advanced segmentation [8]. In a few studies, speech-to-text conversion of words was done for the integration of people with hearing impairments; software was developed to help individuals through correctness of pronunciation using English phonetics, aiding the recognition of potential in English hearing [9]. A generic method was presented for converting a written Egyptian colloquial sentence into a diacritized Modern Standard Arabic (MSA) sentence, which could easily be extended to other dialects of Arabic; lexical acquisition of colloquial Arabic was carried out and used to convert written Egyptian Arabic to MSA [10]. A system was also developed that recognizes two speakers in each of Spanish and English and was limited to 400 words; speech recognition and language analysis were tightly coupled by using the same language model [11]. In one study, text written in Hindi was converted to speech using a neural network, which has many everyday applications for the blind and is also used for educating students; a document containing Hindi was used as input, and a neural network was used for character recognition [12]. There are limits to syntactic errors in variability and capacity in historical periods of English; in the nineteenth and twentieth centuries they became more productive, accompanied by significant increases in capacity, variants, and range of lexical association [13]. For the conversion of Hindi text to speech, a graphical user interface was designed in Java Swing, since India comprises different languages spoken in different zones [14]. Recent advances in speech synthesis have produced synthesizers with very high intelligibility, but prosody and sound quality are still an issue; nevertheless, quality has reached an adequate level for many applications [15]. Much research has also focused on the recognition accuracy of speech with embedded
spelled letter sequences; various techniques were proposed to locate spelled letter segments and re-recognize them with a specific letter recognizer [16]. A development report was prepared for translator software that partially offsets the absence of educational tools that the hearing-impaired need for communication; this tool could be used for developing written language skills [17]. For converting words into triplets, a software framework converts between graphemes and phonemes using lexicon-based, rule-based, and data-driven methods; a "shotgun" combines these methods in a hybrid system and includes linguistic and educational information about phonemes and graphemes [18]. An online speech-to-text engine was produced for transferring speech into written language in real time, which required special techniques [19]. An examination of translation scenarios was done in qualitative research; the vehicle of written and spoken language was fundamentally challenged by considering the implications of similar issues, translation as a primary issue, and how it deals with issues raised by representation that would concern all analysts [20]. Similar work was also performed on the differentiation and translation of the Sadhu and Cholit languages, which formed the basis for inter-conversion of other languages; there, Linear Discriminant Analysis performed best and speed prediction was also done, so Sadhu no longer remained a complex language [21].

METHODOLOGY
A total of 28550 sentences were taken into account for this task. Altogether, 10800 Bengali sentences were selected from 8 distinct works of literature, and 17750 Bengali sentences from West Bengal were picked from 10 distinct works of literature.

Fig 1

The methodological steps are as follows. First we amassed a .txt literature file and extracted well-defined sentences from it. From each sentence we removed the stop words. The sentence text data were then transformed to numeric data using TF-IDF. The final dataset was obtained by applying PCA to the data using MATLAB, and a variety of machine learning algorithms were applied to the dataset in Python. At the end, inspection was done through an analytical approach.

Fig 2: Work Flow

Data Clean
Our dataset contained non-English content, which was filtered out before or after natural-language processing; all such words were removed using Python's Natural Language Toolkit (NLTK). After this cleaning process, on average we obtained 1983 data points per set. As far as numeric categorization is concerned, Sadhu is labeled as numeric 0 and Cholit as numeric 1.
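A minimal sketch of the cleaning and labeling step just described: the stop-word entries and the two sample sentences are invented for illustration, and the 0/1 coding follows the Sadhu/Cholit labeling stated above.

import re

# Illustrative only: invented stop words and sample sentences.
STOP_WORDS = {"ও", "এবং", "যে"}

def clean_sentence(sentence):
    # Keep Bengali letters (U+0980-U+09FF) and spaces, then drop stop words.
    sentence = re.sub(r"[^\u0980-\u09FF\s]", " ", sentence)
    return " ".join(w for w in sentence.split() if w not in STOP_WORDS)

rows = [("তাহারা গমন করিতেছে", 0),  # 0 = Sadhu (hypothetical example)
        ("তারা যাচ্ছে", 1)]          # 1 = Cholit (hypothetical example)
dataset = [(clean_sentence(s), label) for s, label in rows]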
Term Frequency-Inverse Document Frequency
TF-IDF (term frequency-inverse document frequency) is a numerical statistic intended to reflect the importance of a word in a document or corpus. It carries weight in information retrieval, text mining, and user modeling.

Term Frequency (TF)
The term frequency of a word is the number of times it appears in a document divided by the total number of words in the document; every document has its own term frequencies:

tf_{i,j} = n_{i,j} / Σ_k n_{k,j}

where n_{i,j} is the number of occurrences of term i in document j.

Inverse Document Frequency (IDF)
The inverse document frequency is the logarithm of the number of documents divided by the number of documents containing the word w; it determines the weight of rare words across all documents in the corpus:

idf(w) = log(N / df_w)

TF-IDF is simply TF multiplied by IDF:

w_{i,j} = tf_{i,j} × log(N / df_i)

Most of this work was done with Scikit-learn's TfidfVectorizer class, which takes our text data and converts it to a numeric dataset. After this conversion, the data had 3394 features. Since many of these features are of little importance, we performed feature extraction using PCA.

Principal Component Analysis
Principal component analysis transforms the data into a new coordinate system through an orthogonal linear transformation, so that the greatest variance by scalar projection of the data comes to lie on the first coordinate (the first principal component), the second-greatest variance on the second coordinate, and so on. PCA is available as a Scikit-learn class. Reducing dimensionality with PCA loses some data quality; here, 95% of the quality of the real data was preserved by setting the value of the n_components parameter to 0.95. After applying principal component analysis, our dataset had 1678 features.

Fig 3

The numeric dataset has 1194 fields of numeric data, in which the last field denotes 1 for the Bangla of Bangladesh and 0 for the Bangla of West Bengal.

RESULTS AND EXPERIMENTAL ANALYSIS
After running the dataset in MATLAB, the results and total misclassification factors for the top classifiers are as follows:

Table 1
Classifier Name         Prediction Speed   Training Time   Total Misclassification Cost   Accuracy
Linear SVM              1200               2715.7          3385                           76.3%
Quadratic SVM           46                 3564.7          3324                           76.7%
Medium Gaussian SVM     58                 1775.7          3448                           75.8%
Bagged Trees            1600               740.01          4041                           71.7%
Subspace Discriminant   110                1057.9          3364                           76.4%

In the prediction-speed graph, Bagged Trees has the highest prediction speed, while Quadratic SVM has the lowest. Classifiers such as Naïve Bayes and Decision Tree were also used but eliminated due to lower precision.

Fig 4
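In Python, the TF-IDF and PCA steps above translate roughly into the sketch below, assuming scikit-learn (the text names the TfidfVectorizer class and an n_components value of 0.95); the two sentences are toy stand-ins for the 28550-sentence corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

sentences = ["তাহারা গমন করিতেছে", "তারা যাচ্ছে"]  # toy stand-ins

X = TfidfVectorizer().fit_transform(sentences)

# scikit-learn's PCA needs a dense array; n_components=0.95 keeps enough
# components to explain 95% of the variance, as described in the text.
X_reduced = PCA(n_components=0.95, svd_solver="full").fit_transform(X.toarray())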
Bagged Trees has the lowest training time, followed by Subspace Discriminant; the SVM classifiers are much slower in this regard.

Fig 5

An interesting feature depicted here is the total misclassification cost: Bagged Trees, which previously had the highest prediction speed, stands apart with a cost of 4041, while the other classifiers are similar.

Fig 6

Quadratic SVM gave the highest accuracy, followed by Subspace Discriminant.

Fig 7
Fig 8

It can be observed from these results that Quadratic SVM is highly precise but has a high training time; Linear SVM ranks third with high precision. So, after optimization in MATLAB, Linear SVM is beneficial for categorizing Bangladeshi Bangla and West Bengal Bangla. The preset values used for the classifiers were:

Table 2 Preset values used for the classifiers
Quadratic SVM: preset Quadratic SVM; kernel function Quadratic; kernel scale Automatic; box constraint level 1; multiclass method One-vs-One; standardize data true.
Linear SVM: preset Linear SVM; kernel function Linear; kernel scale Automatic; box constraint level 1; multiclass method One-vs-One; standardize data true.
Medium Gaussian SVM: preset Gaussian SVM; kernel function Gaussian; box constraint level 1; multiclass method One-vs-One; standardize data true.
Bagged Trees: preset Bagged Trees; ensemble method Bag; learner type Decision tree; maximum number of splits 28551; number of learners 30.
Subspace Discriminant: preset Subspace Discriminant; ensemble method Subspace; learner type Discriminant; maximum number of splits 30; number of learners 597.

Confusion matrices of the five classifiers:
Fig 9: Quadratic SVM
Fig 10: Linear SVM
Fig 11: Medium Gaussian SVM
Fig 12: Boosted Trees
Fig 13: Subspace Discriminant

ROC curves of the various classifiers:
Fig 14: Quadratic SVM (Bangladeshi Bangla)
Fig 15: Quadratic SVM (West Bengal Bangla)
Fig 16: Linear SVM (Bangladeshi Bangla)
Fig 17: Linear SVM (West Bengal Bangla)
Fig 18: Medium Gaussian SVM (Bangladeshi Bangla)
Fig 19: Medium Gaussian SVM (West Bengal Bangla)
Fig 20: Boosted Trees (Bangladeshi Bangla)
Fig 21: Boosted Trees (West Bengal Bangla)

For ROC curves, output quality is directly related to steepness, and the Linear SVM gives a much steeper curve. Thus, Linear SVM shows the best performance for classifying Bangladeshi Bangla versus West Bengal Bangla.

CONCLUSION
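The paper runs these classifiers through MATLAB's Classification Learner; a rough Python analogue of the best-performing Linear SVM preset (linear kernel, box constraint C = 1, standardized data) might look like the sketch below, with random placeholder features standing in for the PCA-reduced matrix. This is an illustrative approximation, not the authors' setup.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder features and labels standing in for the PCA-reduced TF-IDF data.
X = np.random.rand(20, 5)
y = np.array([0, 1] * 10)  # 0 = West Bengal Bangla, 1 = Bangladeshi Bangla

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())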
Considering the whole set of algorithms, the most precise results were given by Linear SVM, which gives the expected outcomes. This categorizer assists in classifying languages such as the Bangla of Bangladesh and the Bangla of West Bengal. It would also prove useful in classifying other accents, making the differentiation of languages much easier and more understandable.

Fig 22: Subspace Discriminant (Bangladeshi Bangla)
Fig 23: Subspace Discriminant (West Bengal Bangla)

REFERENCES
[1] M. S. Erbaugh, "Classifiers are for specification: Complementary Functions for Sortal and General Classifiers in Cantonese and Mandarin," Cahiers de Linguistique Asie Orientale, vol. 31, no. 1, pp. 36-69, 2002.
[2] S. Y. Killingley, Cantonese classifiers: Syntax and semantics, Newcastle upon Tyne: Grevatt & Grevatt, 1983.
[3] N. V. P. S. Sharma, "English to Hindi Statistical Machine Translation System," TIET Digital Repository, 2 August 2011.
[4] G. C. M. Petrovska-Delacrétaz, "Data Driven Approaches to Speech and Language Processing," Springer, Heidelberg, 2004.
[5] D. Roe, F. Pereira, R. Sproat, M. Riley, P. Moreno and A. Macarron, "Efficient grammar processing for a spoken language translation system," in Proceedings of ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, San Francisco, CA, USA, 1992.
[6] H. Kitano, "Phi DM-Dialog: an experimental speech-to-speech dialog translation system," vol. 24, no. 6, pp. 36-50, 1991.
[7] A. Terekhov, "Automating language conversion: a case study (an extended abstract)," in Proceedings of the IEEE International Conference on Software Maintenance (ICSM 2001), Florence, Italy, 2001.
[8] J. Halpern and J. Kerman, "Pitfalls and Complexities of Chinese to Chinese Conversion," in International Unicode Conference, Boston, 1999.
[9] N. A. Nafis and M. S. Hossain, "Speech to Text Conversion in Real-time," International Journal of Innovation and Scientific Research, vol. 17, pp. 271-277, 2015.
[10] H. A. Bakr, K. Shaalan and I. Ziedan, "A hybrid approach for converting written Egyptian colloquial dialect into diacritized Arabic," in International Conference on Informatics and Systems, 2008.
[11] D. Roe, P. Moreno, R. Sproat, F. Pereira, M. Riley and A. Macarrón, "A spoken language translator for restricted-domain context-free languages," Elsevier B.V., vol. 11, no. 2-3, pp. 311-319, 1992.
[12] P. S. Rathod, "Script to speech conversion for Hindi language by using artificial neural network," in 2011 Nirma University International Conference on Engineering, Ahmedabad, Gujarat, India, 2011.
[13] B. Gray, "Grammatical change in the noun phrase: the influence of written language use," Cambridge University Press, vol. 15, no. 2, pp. 223-250, 2011.
[14] K. Kamble and R. Kagalkar, "A review: translation of text to speech conversion for Hindi language," International Journal of Science and Research (IJSR), vol. 3, 2014.
[15] N. Swetha and K. Anuradha, "Text-to-speech conversion," International Journal of Advanced Trends in Computer Science and Engineering, vol. 2, no. 6, pp. 269-278, 2013.
[16] H. Hild and A. Waibel, "Integrating Spelling Into
Spoken Dialogue Recognition," in European Conference on Speech Communication and Technology, Carnegie Mellon University, Pittsburgh, USA, 1995.
[17] B. Sarkar, K. Datta, C. D. Datta, D. Sarkar, S. J. Dutta, I. D. Roy, A. Paul, J. U. Molla and A. Paul, "A Translator for Bangla Text to Sign Language," in 2009 Annual IEEE India Conference, Gujarat, India, 2009.
[18] M. Beeksma, A. Neijt and J. Zwarts, "Shotgun: converting words into triplets. A hybrid approach to grapheme-phoneme conversion in Dutch," John Benjamins, vol. 19, no. 2, pp. 157-188, 2016.
[19] P. Khilari, "A Review on Speech to Text Conversion Methods," Computer Science, 2015.
[20] B. Temple, "Qualitative Research and Translation Dilemmas," Sage Journals, vol. 4, no. 2, 2004.
[21] N. A. Turzo, P. Sarker and B. Kumar, "Interpretation of Sadhu into Cholit Bhasha by Cataloguing and Translation System," International Journal of Trend in Scientific Research and Development, vol. 4, no. 3, pp. 1123-1130, 2020.
Thesis No: CSER-M-18-06

A STUDY ON KNOWLEDGE EXTRACTION FROM OFFICIAL BANGLA DOCUMENTS

Monika Gope
Department of Computer Science and Engineering
Khulna University of Engineering & Technology
Khulna 9203, Bangladesh
December, 2018

A Study on Knowledge Extraction from Official Bangla Documents
Monika Gope
Roll No: 1207554
A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering
Department of Computer Science and Engineering
Khulna University of Engineering & Technology
Khulna 9203, Bangladesh
December, 2018

Declaration
This is to certify that the thesis work entitled "A Study on Knowledge Extraction from Official Bangla Documents" has been carried out by Monika Gope in the Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, Bangladesh. The above thesis work or any part of this work has not been submitted anywhere for the award of any degree or diploma.
Signature of Supervisor / Signature of Candidate

Approval
This is to certify that the thesis work submitted by Monika Gope entitled "A Study on Knowledge Extraction from Official Bangla Documents" has been approved by the board of examiners for the partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering in the Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, Bangladesh in December, 2018.

BOARD OF EXAMINERS (signatures omitted)
Chairman (Supervisor): Professor, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203.
Member: Head of the Department, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203.
Member: Dr. K. M. Azharul Hasan, Professor, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203.
Member: Dr. Kazi Md. Rokibul Alam, Professor, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203.
Member (External): Professor, Computer Science and Engineering Discipline, Khulna University, Khulna.
Dr. Md. Aminul Haque Akhand

Acknowledgment
All the praise to the almighty Lord, whose blessing and mercy enabled me to complete this thesis work fairly. I gratefully acknowledge the valuable suggestions, advice, and sincere co-operation of Dr. M. M. A. Hashem, Professor, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, under whose supervision this work was carried out. His open-minded way of thinking, encouragement, and trust make me feel confident to pursue different research ideas. From him, I have learned that scientific endeavor means much more than conceiving nice algorithms, and to take a much broader view of problems from different perspectives. I would like to convey my hearty ovation to all the faculty members, officials, and staff of the Department of Computer Science and Engineering and IICT, as they have always extended their co-operation to complete this work. I am extremely indebted to the members of my examination committee for their constructive comments on this manuscript. I would also like to thank my parents for their wise counsel. Last but not least, I wish to thank my friends and the registrar office of KUET for their constant support.
Author

Abstract
Bangla is the seventh most spoken language in the world.
However, information searching in digital Bangla papers is a tiresome job, as it ends up with incorrect
and very little information. It is difficult because wide computational resources for Bangla are very limited, and it is practically infeasible to list and analyze the Bangla data manually. Several approaches for identifying and extracting tables, figures, emotion, reviews, and algorithms have been developed for English. Furthermore, knowledge extraction in Bangla documents for emotion or opinion detection and sentence extraction for summarization has been explored; however, these approaches do not provide enough textual information to the user for Bangla text content. In this work, we propose domain-specific composite approaches to find the agendas and their decisions from the minutes of meetings of the academic council of Khulna University of Engineering & Technology (KUET), using query-based, content-based, context-based, and semantic features. We also demonstrate techniques to categorize the knowledge when a single query is given by the user and to display the results sequentially by date. All operations are presented with sufficient theoretical analysis and experimental results.

Contents
Title Page
Declaration
Approval
Acknowledgment
Abstract
Contents
List of Tables
List of Figures
CHAPTER I Introduction
1.1 Introduction
1.2 Motivation
1.3 Problem Statement
1.4 Objectives
1.5 Scope
1.6 Contributions of the Thesis
1.7 Organization of the Thesis
CHAPTER II Literature Review
2.1 Introduction
2.2 The Realization of Extracting Elements from Documents
2.2.1 OCR-based Analysis of Mathematical Texts from PDF
2.2.2 Extraction of PDF Information
2.2.3 Detection and Segmentation of Table of Contents
2.2.4 Extracting Metadata Information
2.2.5 Extracting Bibliography
2.2.6 Extraction of Data Points and Text Blocks
2.2.7 Summarizing Figures, Tables, and Algorithms in Scientific Publications
2.2.8 Extracting Algorithms in Scholarly Big Data
2.3 The Realization of Extracting Bangla Text, Image, Number and Knowledge
2.3.1 Bangla Number Extraction and Recognition from Document Image
2.3.2 Phrase-level Polarity Identification for Bangla
2.3.3 Bangla Text Extraction from Natural Scene Images
2.3.4 Sentiment Analysis on Bangla and Romanized Bangla Text
2.3.5 Bangla Text Summarization by Sentence Extraction
2.4 The Realization of Extracting Keywords
2.4.1 Rapid Automatic Keyword Extraction
2.4.2 Other Schemes
2.5 Discussion
CHAPTER III Theoretical Consideration
3.1 Introduction
3.2 Word Density
3.3 Similarity with Caption
3.4 Naive Bayes Classifier
3.5 Sentence Selection
3.6 Confusion Matrix
3.7 Graph-Based Centrality and PageRank and TextRank
3.8 Mixture Models and EM Algorithm
3.9 Discussion
CHAPTER IV Methodology
4.1 Introduction
4.2 Realization of the Method for Proposed Bangla Knowledge Extraction
4.3 Data Selection and Pre-processing
4.3.1 Selection of the Target Data
4.3.2 Pre-processing the Data
4.4 Feature and Patterns Specifications for Decision Extraction
4.4.1 Query-Based Features
4.4.2 Content-Based Features
4.4.3 Context-Based Features
4.5 Ordering the Documents Chronologically
4.6 Processing of the Keywords from the Extracted Decision Pool
4.7 Feature and Pattern Extraction for Decisions with User Query
4.7.1 Content-Based Features
4.7.2 Semantics Features
4.7.3 Context-Based
Features
4.7.4 Classify the Documents with Keywords
4.8 Conclusion
CHAPTER V Results and Discussions
5.1 Experimental Setup
5.2 Performance Analysis of the Structure
5.2.1 Extraction of Agenda and Decisions Text Analysis
5.2.2 Finding User Query from the Extracted Decision Pool Analysis
5.3 Discussion
CHAPTER VI Conclusions
6.1 Summary
6.2 Recommendations for Future Works
References

LIST OF TABLES
2.1 Features Used in Book Metadata Extraction
2.2 Rules for Generating Venue Alias
2.3 A Grammar for Document-Element Captions
5.1 Precision, Recall and F1 for "Decision" Detection for Random 8 Documents
5.2 Total Set of Data with Precision, Recall and F1
5.3 Total Set of Data for One Keyword "মেকানিক্যাল" with Precision, Recall and F1

LIST OF FIGURES
1.1 Knowledge Extraction Process
2.1 Decision Tree to Identify Two Types of Content Pages: TOC-I and TOC-II
2.2 Decision Tree for TOC Segmentation
3.1 Data, Information, Knowledge, Wisdom Chain
3.2 Confusion Matrix
3.3 Confusion Matrix for Total Predicted Positive
3.4 Confusion Matrix for Actual Positive
4.1 Design of Proposed Knowledge Extraction from Official Bangla Documents
4.2 Proposed Knowledge Extraction Algorithm
4.3 Design of Agenda and Decision Extraction from Official Bangla Documents
4.4 Proposed Algorithm for Agenda and Decision Extraction from Bangla Documents
4.5 Design of Keyword Extraction and Finding User Query from Documents
4.6 Proposed Algorithm for User Query Extraction with Features from Decision Pool
4.7 Example 4.1
4.8 Decision Making Phrases
4.9 Example 4.2
4.10 Example 4.3
4.11 Example 4.4
4.12 Example 4.5
4.13 Stopwords for the Domain
4.14 Example of the Frequency of Two Words: Post Facto and Mechanical
4.15 Connection Word List
4.16 Example of Occurrence of Words
5.1 Precisions of the Three Methods of Random 8 Documents
5.2 Total Decisions Detected by the Methods: A) Content-Based Method, B) Context-Based Method Merged with Content-Based Method, and C) Total Decisions Counted Manually for 29 Documents
5.3 BM25 Score and Sentence Weight (Query Word "মেকানিক্যাল")
5.4 Example of "Decision" Detection in a Single Document
5.5 Top-Ranked Sentences by TextRank
5.6 Gaussian Curve for Three Clusters
5.7 K-Means of the Two Clusters

CHAPTER I
Introduction

1.1 Introduction
The volume of data being created and warehoused is rising exponentially, due in great part to ongoing progress in computer technology [1]. Experts estimate that approximately 2.5 trillion PDFs are generated each year all over the world, contributing to every segment of the global economy [2]. Nevertheless, content coded in PDF is condensed to streams of printing commands that present a visual draft with text, images, tables, graphs, etc. [3]. As a result, a significant number of Bangla digital documents, such as reports, agendas, papers, journals, and proceedings, developed by officials of various government and non-government organizations, is on the rise. To
Unlocking the information embedded within this data introduces new challenges [1]. Various data mining techniques have been developed to address this problem, and they in turn face challenges of their own. Data mining is a procedure that takes data as input and outputs knowledge [1]; it is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data [4]. The first few steps of the process involve preparing the data: the relevant data must be selected from a potentially large and diverse set, any necessary preprocessing must be performed, and the data must be transformed into a representation suitable for the algorithm applied in the mining step [1].

The surge of new proceedings, resolutions, and agendas with important decisions in Bangla PDF makes it infeasible to list and analyze the data manually. For any kind of decision making in offices, research, or business, previous records are very significant, since they provide the rules, policies, and various decisions, solutions, and problems. Finding a particular piece of information in such a large data store is genuinely difficult; searching these digital papers is a nontrivial job. Searching digital Bangla papers is especially tiresome, as it usually ends with incorrect and very little information, because computational resources for Bangla are very limited; Bangla document analysis and knowledge extraction are therefore hard problems. Precisely, in this research work we propose a technique that discovers and extracts the agendas and decisions taken in official meetings of Khulna University of Engineering & Technology (KUET), and that finds the exact knowledge sought by the user in the digital Bangla minutes of meetings of KUET.

1.2 Motivation

While working with Bangla documents, we did not find any tool to analyze the text, and there was no way to search for information in a set of Bangla PDF files. Gathering knowledge manually from such pools is tedious and tiring. We were looking for decision-making statements on particular topics in the minutes of meetings of the academic council of KUET, but it was very difficult to find the exact knowledge because no proper system existed for the problem. This problem domain needs a dedicated problem representation technique and/or solution technique for the Bangla minutes of meetings of KUET. Therefore, we make an effort to solve the problem of assisting the user in extracting knowledge, according to his or her needs, from the specific domain of Bangla resolutions of the academic council of KUET. Motivated by [1] and [4], we use the process shown in Fig. 1.1 to solve our problem.

[Fig. 1.1: Knowledge Extraction Process. Target Data (PDF) -> Preprocessing (pre-processed PDF text) -> Transformation (sentence extraction with specific features) -> Data Mining (patterns and features) -> Interpretation (knowledge).]
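The stages of Fig. 1.1 can be made concrete with a short sketch. The following Python fragment is a minimal illustration of the pipeline, not our production code; the sentence delimiter, the substring-based matching, and the toy input are assumptions made for the example.

```python
import re

def preprocess(raw_text: str) -> str:
    # Pre-processing: drop obvious non-sentence debris (emails, URLs).
    raw_text = re.sub(r"\S+@\S+|https?://\S+", " ", raw_text)
    return re.sub(r"\s+", " ", raw_text).strip()

def transform(text: str) -> list:
    # Transformation: split into sentences on the Bangla full stop "।".
    return [s.strip() for s in text.split("।") if s.strip()]

def mine(sentences: list, query: str) -> list:
    # Data mining: keep sentences that contain the query term.
    return [s for s in sentences if query in s]

def interpret(hits: list) -> None:
    # Interpretation: present the matches to the user in document order.
    for i, s in enumerate(hits, 1):
        print(f"{i}. {s}")

raw = "সভায় একটি সিদ্ধান্ত গৃহীত হয়। অন্য একটি আলোচনা হয়।"  # toy input
interpret(mine(transform(preprocess(raw)), "সিদ্ধান্ত"))
```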
1.3 Problem Statement

Identifying and extracting informative entities such as mathematical expressions [5], [6], tables of contents [7], [8], and figures [9], [10] from documents has been studied widely. Bhatia et al. recommended a set of methods for detecting document-elements with captions, e.g., tables, figures, and pseudocodes [11], [12]. Knowledge extraction from Bangla documents, for example emotion or opinion detection from Bangla blogs and news, Bangla text extraction from natural images, Bangla sentiment analysis of micro-blogs, news portals, and product review portals, and number extraction, has been explored extensively [13], [14], [15]. Work on Bangla sentence extraction for text summarization [16] has also been reported. However, none of these procedures can extract the information held in Bangla official documents, so they do not serve our purpose in this specific domain; furthermore, they do not provide enough textual information for the user. We therefore propose a domain-specific composite approach that finds the agendas and their decisions chronologically in the resolutions, or minutes of meetings, of official Bangla documents, based on content-based features (query-based or phrase mapping, similarity calculation with the phrase, and sentence scoring) and context-based features (the surroundings of a sentence and its location), and that finds the knowledge requested by a single user query and displays the results sequentially by date using content- and semantics-based methods.

1.4 Objectives

Traditional approaches cannot proficiently extract Bangla knowledge, such as decisions and user-query results, from PDF files and categorize it accordingly. To cope with this situation, researchers have pursued keyword extraction and rule extraction for English and other languages. The main objectives of this research can therefore be summarized as follows:
- To find out the proper needs of domain-specific Bangla text content in PDF.
- To specify and extract query-, content-, context-, and semantics-based features for the agendas, decisions, and user queries of official Bangla documents.
- To create a knowledge base for official Bangla documents and categorize the dataset with keywords.
- To achieve exact and quick knowledge discovery from a surge of Bangla PDFs given a user query.

1.5 Scope

The proposed model and dataset are examined under the following scope:
- We selected the resolutions of the academic council of Khulna University of Engineering & Technology (KUET) as the domain for a case study; the data are stored as Bangla PDF.
- We experimented with 29 resolutions of the academic council's meetings of KUET for decision extraction and user-query extraction.
- The PDF must be produced from a Unicode file, with the font embedded in the file.
- Computation can be done independently on each converted PDF, and the dataset is very small by big-data standards.
- The rule base and knowledge base target a specific domain with a small dataset and do not extract information from tabular data.
All experiments were done on a Windows machine; we used Java and Python as programming languages to implement our algorithms, Weka [17] as an implementation tool, and NLTK and other Python packages. Naive Bayes and Gaussian (mixture) models are used to classify the results.

1.6 Contributions of the Thesis

This thesis makes the following vital contributions:
- We propose domain-specific composite approaches that find the agendas and their decisions chronologically in the resolutions of the academic council of KUET, based on content-based features (phrase mapping, similarity calculation with the phrase or query, and sentence scoring) and context-based features (the surroundings of a sentence and its location); the results are presented sequentially by date with the content-based method.
- We propose composite approaches that find the decisions in the decision pool chronologically from the resolutions of the academic council of KUET, based on query-based, content-based, semantics-based, and context-based features of the sentences.
- We also present techniques to categorize and classify the decisions in the documents using keyword extraction.

1.7 Organization of the Thesis

- Chapter II presents a literature review of similar domains and identifies some limitations of existing works.
- Chapter III presents the theoretical background of the work.
- Chapter IV proposes the model and describes the rules and algorithms with examples.
- Chapter V shows the experimental results of the proposed scheme, with discussion.
- Chapter VI exhibits future directions for the proposed model and outlines the conclusions.

CHAPTER II
Literature Review

2.1 Introduction

The intensive expansion of the internet and electronic publishing has made an enormous number of scientific documents accessible to users, but these documents are typically unreachable to those with visual deficiencies and often only partly compatible with software and hardware such as tablets and e-readers [5]. Several approaches have been proposed for extracting elements such as figures, tables, and other useful information from digital documents.

2.2 The Realization of Extracting Elements from Documents

The knowledge extraction paradigm is prevalent in most sciences and has drawn attention from the data mining research community for several years. Some of the related research is summarized below.

2.2.1 OCR-based Analysis of Mathematical Texts from PDF [5]

Document analysis of mathematical texts is a confounding problem for digital documents in regular formats such as PDF. One approach uses OCR for character identification together with a virtual link network for structural analysis; another extracts symbol information directly from the PDF file with a two-stage parser that recovers layout and expression structure. With reference to ground-truth data, [5] contrasts the efficacy and correctness of the two methods with respect to character identification and structural analysis of mathematical expressions. The OCR-based scheme [5] is applied in the Infty system [18] and uses that system's services for identifying mathematical texts
from scanned documents. That is, it first converts the PDF document into an image before performing layout analysis, segmentation, and character and mathematical-expression recognition. High recognition rates can be obtained with digital PDF documents because they are relatively free of noise and consequently less prone to the usual recognition mistakes. The structural elements Infty can detect are titles, headings, author information, headers, footnotes, page numbers, and mathematical components. Headers and footers are recognized by their smaller-than-typical size and their position at the top or bottom of the page.

2.2.2 Extraction of PDF Information [5]

The OCR method has the benefit that the engine is applied to a noise-free image created from the PDF, but it fails to exploit any of the information available inside the PDF document. The extraction method of [5] aims to do exactly this, by extracting information on characters, their fonts and sizes, and their precise locations in the document. This information is then used to reassemble the document, with particular emphasis on the mathematical expressions. These procedures are implemented in the dedicated PDF extraction tool Maxtract [5], which applies a linear-grammar approach for identifying mathematical expressions [19] and uses font and size information on characters for improved recognition [20]. For this assessment the tool was considerably extended to work not only on manually clipped mathematical expressions but automatically on complete PDF documents, with layout analysis and segmentation of mathematics and text.

First, all characters on a given page, with their precise placement, have to be extracted. Unfortunately, PDF documents do not contain the true bounding-box information for the characters they contain; instead, they identify the point where each character is rendered on the page and provide only a very rough bounding-box estimate per character. To obtain the precise bounds, the PDF document is rendered to a 600 dpi TIFF bitmap image, the bounding boxes of each glyph (i.e., connected component) on the page are extracted, and these are registered with the character information from the original PDF. For most characters this is a simple matter of translating and scaling the coordinates obtained from the PDF and matching them to the corresponding glyphs. The extraction yields an exact list of characters on the PDF page together with their bounding boxes, fonts, and sizes.

Layout analysis is then needed to recognize lines and columns. An analyzed PDF page contains a number of lines, each with an overall bounding box and a list of symbols with character and glyph information. Each extracted line is analyzed separately to rebuild its spatial layout. In a first parsing step, characters are clustered using both the extracted font and size information and spacing information calculated from their bounding boxes. Characters are clustered together if they are alpha characters or single digits and they (a) use the identical font, (b) have the identical font size, (c) have the same base y-coordinate, i.e., they share a baseline, and (d) the space
between the two adjacent edges of any adjacent pair is within threshold class 0. This yields single characters or collections of characters that form words or numbers. The two-dimensional layout of the characters is then linearized into a one-dimensional form using guidelines of mathematical expression arrangement. The rules are based on an original set specified by Anderson [21]. The grammar comprises 12 rules dealing with the different spatial relationships between sets of symbols, including scripts, fractions, limits, enclosed symbols, matrices, cases, accents, and symbols spread over several lines.

The separation of text and math lines is based both on the spatial position of the line with respect to the left and right margins of the page and on the number of words on the line, where a word is a sequence of alpha characters clustered by the preceding parsing step. A line is treated as a text line if it (a) comprises only a sequence of words, (b) comprises at least two successive words and the number of other expressions is not greater than the number of words, or (c) comprises more than three successive words irrespective of the number of other expressions. Everything else is treated as displayed mathematics. The one-dimensional linearized lines are parsed with a LALR parser, producing a parse tree that serves as an intermediate representation for subsequent translation into many output formats; the translation modules can exploit the structural information in the parse tree.

The evaluation of [5] counts the following quantities. Characters: the total number of lines and PDF characters extracted from the PDF file; this can include typical characters such as a, 1, or =, as well as characters that form part of a larger symbol (several symbols, especially large ones, are composed of multiple characters and lines). Symbols: the number of symbols recognized after mapping characters to glyphs; this is often fewer than the number of characters extracted, owing to multi-character symbols. Misrecognized: the number of symbols that cannot be converted to Unicode; these occur when character names are improper or missing from the font encoding of the PDF. Missing: the number of orphan characters left over after glyph matching. No character recognition errors arose using Maxtract: given an appropriate PDF file, Maxtract yields faultless character identification and can create a high-quality reconstruction of text and formulae in LaTeX, with parse trees of formulae carrying semantic information.
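The text/math line test just described can be paraphrased in code. This is a simplified sketch: it approximates a "word" as a whitespace-delimited token made only of letters, and it ignores the successiveness condition of rule (b).

```python
def is_text_line(line: str) -> bool:
    """Classify a linearized line as text (True) or math (False)."""
    tokens = line.split()
    words = [t for t in tokens if t.isalpha()]
    expressions = [t for t in tokens if not t.isalpha()]
    if not expressions:                                      # rule (a): only words
        return True
    if len(words) >= 2 and len(expressions) <= len(words):   # rule (b)
        return True
    if len(words) > 3:                                       # rule (c)
        return True
    return False                                             # otherwise: math

print(is_text_line("the sum converges rapidly"))   # True  (rule a)
print(is_text_line("x = y + 1"))                   # False (treated as math)
```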
2.2.3 Detection and Segmentation of Table of Contents [7, 8, 22]

Extracting structural information from the table of contents (TOC), by identifying and segmenting the TOC pages, helps build digital document libraries; this is done by [7], [8], and [22]. [7] presents fully automatic identification and segmentation of table-of-contents pages from scanned documents. TOC detection from scanned document pages is significant for users of a digital library as a directory to the contents of books, journals, reports, and so on. The TOC consists of text lines in an organized format. Identification of TOCs is based on discovering the page numbers associated with the names of sections, sub-sections, or articles/authors; the page number is taken to be the rightmost word of a text line [7]. For TOC-I, the right-aligned page numbers echo in the vertical projection as an isolated narrow hump at the rightmost section; for TOC-II, detection is done by examining the variation in the number of characters in the rightmost word of each line. Fig. 2.1 and Fig. 2.2 show the decision trees of the method [7]: Fig. 2.1 classifies a page as TOC-I, TOC-II, or a normal text block by testing for a right-aligned rightmost word, a narrow hump on the right side, and low variation in the number of characters in the last word of each line; Fig. 2.2 assigns the components of each TOC line to the title and page-number fields depending on the number of components, their relative sizes, and the gaps between the word halos. The technique [7] is based purely on the spatial distribution of the connected components, so a low-resolution image does not critically disturb segmentation performance: a 4-fold reduction of the resolution of the input images (300 dpi to 75 dpi) gave a 7-fold timing improvement with minimal degradation in segmentation performance [7].

Moreover, in [8], ToC identification is based on the following rules: 1) a ToC is usually in the first few pages of the document; 2) a ToC typically contains some regularities of numbering and indentation; 3) a ToC commonly contains ordered references correlated with titles or sections in the body pages. The last property can be broken into four sub-properties: 1) contiguity: a ToC consists of a series of contiguous references to other parts; 2) ordering: the references and the referred parts appear in the same order in the document; 3) no self-reference: all references refer outside the contiguous list of references; 4) distinctness: the mapping from the references of the ToC to the outside parts is injective, i.e., every reference refers to a distinct part. Their method [8] does not rely on visual features such as font size or layout, so detection is performed purely on text, which is much more efficient for large-scale extraction.

2.2.4 Extracting Metadata Information [8]

In [8], a hybrid approach is proposed for extracting the title and authors of a book, combining results from CiteSeer, a rule-based extractor, and an SVM-based extractor, leveraging web knowledge. For table-of-contents recognition, they proposed rules based
on multiple regularities of numbering and ordering. Furthermore, they studied bibliography extraction and citation parsing for a large dataset of books, and they used the multiple fields available in books to rank books in answer to search queries. The system can successfully extract metadata and contents from large collections of online books and offers proficient book search and retrieval facilities. The metadata include title, authors, ISBN, publication date, and copyright. Since ISBN, date, and copyright can be detected with strong rules, the main attention is on title and author extraction. They developed title and author extractors based on empirical rules derived from a small sample of books, presuming that title and authors always appear on the same page, i.e., the title page, and that the title page precedes the ToC, foreword, or preface. The features and alias rules are shown in Table 2.1 and Table 2.2.

Table 2.1. Features Used in Book Metadata Extraction

Feature group | Description
font size | Initial Font: the font size of the starting character; Average Font: the average font size of all the characters; Font Changes: the number of changes in font size
location | Start X, End X, Start Y, End Y: the coordinates of the line block on the page; Line Number: the (order) number of the line within the page, e.g., 2 indicates the second line; Page Number: the (order) number of the page
text | Bag-of-words: the top 200 words selected by DF rank over the whole dataset; 1 indicates the word occurs in the line
others | Number of Words: the total number of words in the line; Number of Digits: the total number of digit tokens in the line

2.2.5 Extracting Bibliography [8]

A bibliography typically has obvious indicators such as "References", "Bibliography", or "Sources". However, unlike papers, books may have a bibliography at the end of each chapter, so [8] examines the entire body of the book rather than only the last few pages. If a line contains only one of the three keywords and the lines that follow are ordered reference items, it is recognized as a bibliography block. They track the ordinal number at the beginning of each reference until no continuously increasing number is found within the following 30 lines. Thirty seems a large distance for references, but they did find references spanning nearly 10 lines, and they assumed that the distance between two bibliography blocks in two chapters would be much greater than 30. For citation parsing, venue names are normalized by generating aliases with the rules of Table 2.2.

Table 2.2. Rules for Generating Venue Aliases

Rule | Example of Venue Name
None | IEEE Transactions on Pattern Analysis and Machine Intelligence
Transactions -> Trans., Journal -> J | IEEE Trans. on Pattern Analysis and Machine Intelligence
Proceedings -> Proc | IEEE Trans. on Pattern Analysis and Machine Intelligence
Remove "of", "on", "in", "the" | IEEE Trans. Pattern Analysis and Machine Intelligence
Acronymization | IEEE Trans. PAMI
Pure acronymization | PAMI
Manual edit | IEEE Trans. Pattern Anal. Machine Intell.
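The alias rules of Table 2.2 amount to a few string transformations. A rough rendering follows; the acronymization step is a simplification that yields approximate forms, and the manual-edit rule is not modeled.

```python
def aliases(venue: str) -> set:
    """Generate alias forms of a venue name per Table 2.2 (simplified)."""
    out = {venue}
    v = (venue.replace("Transactions", "Trans.")
              .replace("Journal", "J")
              .replace("Proceedings", "Proc"))
    out.add(v)
    # Remove the connective words listed in the table.
    content = [w for w in v.split() if w.lower() not in {"of", "on", "in", "the"}]
    out.add(" ".join(content))
    # Pure acronymization: initials of the capitalized content words.
    acro = "".join(w[0] for w in content if w[0].isupper() and len(w) > 2)
    if acro:
        out.add(acro)
    return out

print(aliases("IEEE Transactions on Pattern Analysis and Machine Intelligence"))
# yields forms like 'IEEE Trans. Pattern Analysis and Machine Intelligence'
# and an approximate acronym such as 'ITPAMI'
```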
2.2.6 Extraction of Data Points and Text Blocks [10]

In [10], the authors outline how data and text can be extracted automatically from 2-D plots, removing a time-consuming manual procedure. Their information extraction algorithm recognizes the axes of the figures, extracts text blocks such as axis labels and legends, and finds the data points in the figure. It also extracts the units appearing in the axis labels and segments the legends to identify the different lines in the legend, the different symbols, and their associated text justifications. The algorithm also accomplishes the challenging task of separating overlapping text and data points. They present a suite of image analysis and machine learning algorithms that extract data, and metadata related to it, from figures and their captions. The tool built on these algorithms extracts data from 2-D plots and stores them in databases, so that this significant source of data on the web can be searched by search engines. Precisely, the tool extracts the x- and y-axis lines, the ranges of values on the axes, the labels, units, and ticks on the axes, the data points and lines in the plot, the legends, and the different types of data named in the legend.

2.2.7 Summarizing Figures, Tables, and Algorithms in Scientific Publications [11]

Extracting a synopsis for a document-element from a digital document involves distilling information related to the document-element from the rest of the document. Solving this problem precisely would be easy if the semantics of the text were understood automatically; however, state-of-the-art natural language processing and statistical text processing still fall short of fully understanding the semantics of text documents. Furthermore, good synopsis generation involves a judgment call about the level of detail that benefits an end user: a very large synopsis is comprehensive, but it does not meet the user's need to find information quickly. Their work comprises (a) a method for automatically extracting document-element-related information from digital documents, treating the problem as a special case of query-biased summarization in which the document-element itself is the query, and (b) a simple model for sentence selection that tries to strike a balance between the information content and the length of the synopsis; the top-ranked sentences selected by this model are included in the synopsis.

The process followed by [11] has three major parts.

1. Pre-processing. The steps are as follows. (a) Text extraction: they tried several tools for PDF-to-text conversion (PDFBox [23], PDFTextStream [24], XPDF [25], and TET [26]) and chose PDFTextStream, which preserves the sequence of text streams in the order they appear in the document, including documents in the double-column format common in scientific literature. (b) Document-element caption parsing: a CAPTION has four sub-parts. DOC_EL_TYPE specifies the type of the document element, namely figure, table, or algorithm. FIG_TYPE, TABLE_TYPE, and ALGO_TYPE refer to the variants of the words "Figure", "Table", and "Algorithm", respectively, as they occur in captions. The DOC_EL_TYPE non-terminal is followed
by an integer that gives the document-element number. The integer is followed by a DELIMITER that can be either ":" or ".". The last non-terminal, TEXT, gives a textual explanation of the element. Specifying a grammar enables a unified method for dealing with the different types of document-elements. Table 2.3 shows the grammar.

Table 2.3. A Grammar for Document-Element Captions

CAPTION -> DOC_EL_TYPE Integer DELIMITER TEXT
DOC_EL_TYPE -> FIG_TYPE | TABLE_TYPE | ALGO_TYPE
FIG_TYPE -> FIGURE | Figure | FIG. | Fig.
TABLE_TYPE -> TABLE | Table
ALGO_TYPE -> Algorithm | algorithm | Algo. | algo.
DELIMITER -> : | .
TEXT -> a string of characters

(c) Sentence segmentation: after extracting the caption sentences from the document text, they fragment the document text into its constituent sentences. Since the aim is to identify and extract sentences related to document-elements, correct sentence segmentation is very significant here; they consider the average line length and the word density. (d) Reference-sentence parsing: to identify reference sentences, they use a grammar similar to that used for caption parsing; in a reference sentence the delimiter is absent in most cases, and the integer tells which element the sentence refers to.

2. Feature extraction. The features are as follows. (a) Content-based features: these use information cues present in the caption; a score is assigned to each sentence based on its similarity to the caption. Like captions, reference sentences also contain important cues about the document-elements, and certain cue words and phrases are used frequently by authors when describing a document-element. (b) Context-based features: one binary feature has value 1 if a sentence is a reference sentence for the document-element and 0 otherwise; another binary feature has value 1 if a sentence belongs to the same paragraph as the reference sentence and 0 otherwise. The latter captures the fact that a sentence close to the reference sentence has a higher probability of being related to the document-element than a sentence located far away.

3. Classification. The methods used for identifying document-element-related sentences are (a) the Naive Bayes classifier and (b) support vector machines.
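The caption grammar of Table 2.3 translates naturally into a regular expression. A sketch of such a parser, with named groups for the element type, number, delimiter, and descriptive text:

```python
import re

# One alternation per non-terminal of Table 2.3.
CAPTION = re.compile(
    r"^(?P<type>FIGURE|Figure|FIG\.|Fig\.|TABLE|Table|"
    r"Algorithm|algorithm|Algo\.|algo\.)\s*"
    r"(?P<number>\d+)\s*"
    r"(?P<delim>[:.])\s*"
    r"(?P<text>.+)$"
)

m = CAPTION.match("Fig. 2: Decision tree for TOC segmentation")
if m:
    print(m.group("type"), m.group("number"), m.group("text"))
# -> Fig. 2 Decision tree for TOC segmentation
```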
2.2.8 Extracting Algorithms in Scholarly Big Data [12]

[12] proposed a set of hybrid procedures based on ensemble machine learning to discover pseudo-codes (PCs) and algorithmic procedures (APs) in scholarly documents. Specifically, three variants of a procedure for detecting PCs are given: an extension of the existing rule-based method of Bhatia et al. [27], a method based on ensemble machine learning, and a hybrid of the two. The methods for discovering APs include a rule-based method and a machine-learning-based method.

1. Extracting PCs. (a) Rule-based method: a PC caption must contain at least one algorithm keyword, namely pseudo-code, algorithm, or procedure; captions in which the algorithm keywords appear after prepositions are excluded, as these are not likely captions of PCs. (b) Machine-learning-based method: the features fall into four groups: font-style-based (FS), context-based (CX), content-based (CN), and structure-based (ST). (c) Combined method: a combination of the rule-based and machine-learning-based methods.

2. Extracting APs. (a) Rule-based method: AP indication sentences exhibit common properties: they usually end with "follows:", "steps:", "algorithm:", "following:", "follows.", or "below:", and they usually contain at least one algorithm keyword. (b) Machine-learning-based method: the features fall into two groups, content-based (CN) and context-based (CX). Using these features and classifiers, the algorithms are extracted from the scholarly data [12].

2.3 The Realization of Extracting Bangla Text, Image, Number and Knowledge

Several works address Bangla knowledge extraction; some related methods are described in the following sections.

2.3.1 Bangla Number Extraction and Recognition from Document Image [15]

The method of [15] extracts and identifies Bangla numbers in a document image. The system processes the text document image line by line. For each text line, the maximum and minimum widths of Bangla digits are assessed, and connected-component labeling is used to filter the characters in this width range. The width filtering outputs the Bangla digits, if any, together with some single characters. A feature vector is extracted for each character of the filter output and fed to a multi-layer perceptron (MLP) for identification. The back-propagation algorithm is used to train the neural network, which recognizes the digits with 96% accuracy; the network can also flag a character that is not a digit. To increase classification efficacy, separate recognition engines can be used for letters, digits, and other symbols. Numbers (series of digits) are composed by filtering the word out of the document image and recognizing each digit separately to produce the number as text.

2.3.2 Phrase-level Polarity Identification for Bangla [28]

In this work [28], opinion-polarity classification of news texts is carried out for Bangla using a support vector machine (SVM). The scheme recognizes the semantic orientation of an opinionated phrase as either positive or negative. Classifying text as subjective or objective is a precursor to determining the opinion orientation of evaluative text, since objective text is not evaluative by definition; a rule-based subjectivity classifier is used. The system is a hybrid approach that works with lexicon entries and linguistic syntactic features. The authors propose a comprehensive opinion mining system that can recognize subjective sentences within a document, together with an efficient feature-based automatic polarity detection algorithm for phrases. Corpus acquisition is a vital task for any Bangla NLP system; here a Bangla news corpus was used. News text divides into two main types: (1) news reports, which aim to present factual information objectively, and
(2) opinionated articles, which clearly present authors' and readers' views, evaluations, or judgments about specific events or persons. To identify features, they started with part-of-speech (POS) categories and continued with features such as chunks, functional words, the Bangla SentiWordNet [29], stemming clusters, a negative word list, and dependency-tree features. The feature-extraction pattern is crucial for any machine learning task, since proper identification of the features directly affects system performance. The functional-word, SentiWordNet (Bangla), and negative-word-list features are completely dictionary-based.

2.3.3 Bangla Text Extraction from Natural Scene Images [30]

In [30], a scheme based on connected-component analysis is proposed for extracting Devanagari and Bangla text from camera-captured scene images. A common feature of these two scripts is the presence of the headline, and the proposed scheme uses mathematical morphology operations for its extraction. In addition, some criteria are applied for robust sifting of text components from such scene images. The algorithm was tested on a repository of 100 scene images containing Devanagari and/or Bangla text. A global binarization method such as the well-known Otsu's method is typically unsuitable for camera-captured images, since the gray-value histogram of such an image is not bi-modal; binarizing such an image with a single threshold often loses textual information into the background.

2.3.4 Sentiment Analysis on Bangla and Romanized Bangla Text [31]

This study [31] delivers a significant textual dataset of both Bangla and Romanized Bangla texts, the first of its kind: post-processed, multiply validated, and ready for sentiment analysis implementation and experiments. The dataset was tested with deep recurrent models, specifically long short-term memory (LSTM) networks, using two loss functions, binary cross-entropy and categorical cross-entropy; some experimental pre-training was also conducted by using the data of one annotation to pre-train the other, and vice versa. The contributions are: (a) a dataset of 10,000 Bangla and Romanized Bangla text samples, each annotated by two adult Bangla speakers; (b) pre-processing of the data so that it is readily usable by researchers; (c) application of deep recurrent models to the Bangla and Romanized Bangla corpus; (d) pre-training on the dataset of one label set for the other (and vice versa) to see whether this improves results. The dataset contains three classes: positive, negative, and ambiguous. Data were collected from several micro-blog sites, such as Facebook, Twitter, and YouTube, and from online news portals, product review panels, etc. The model is based on recurrent neural networks (RNNs), more precisely LSTM networks, implemented with Keras's model-level library, which has all the features needed to develop the deep learning model.

2.3.5 Bangla Text Summarization by Sentence Extraction [16]

In this work [16], a simple, easy-to-implement method for Bangla single-document text summarization is followed, since refined summarization schemes require resources for deeper semantic analysis. They
explored the impact of the thematic-term feature and the position feature on Bangla text summarization. They compared the proposed method to the LEAD baseline defined for the single-document summarization task: LEAD takes the first K words of an input article as the summary, where K is a predefined summary length. They used a lightweight stemmer for Bengali that strips suffixes using a predefined suffix list on a longest-match basis, following an algorithm similar to that for Hindi [32]. After an input document is constructed and stemmed, it is broken into a collection of sentences, and the sentences are ranked based on the following features: thematic term, positional value, sentence length, and combined parameters for sentence ranking.

2.4 The Realization of Extracting Keywords [33]

Document-oriented methods provide context-free document features, enabling further analytic methods, such as those described in [34] and [35], that characterize variations within a text stream over time. These document-oriented methods suit corpora that change, such as pools of published technical abstracts that grow over time or streams of news articles. Moreover, by working on a single document, these approaches intrinsically scale to massive collections and can be applied in several contexts to improve IR systems and analysis tools. Previous work on document-oriented keyword extraction has combined natural language processing methods that recognize part-of-speech (POS) tags with supervised learning, machine-learning algorithms, or statistical tactics.

2.4.1 Rapid Automatic Keyword Extraction [33, 36, 37]

Rapid Automatic Keyword Extraction (RAKE) is an unsupervised, domain-independent, and language-independent technique for extracting keywords from individual documents. The authors give the details of the algorithm and its configuration parameters and present results on a benchmark dataset of technical abstracts, showing that RAKE is more computationally efficient than TextRank while achieving higher precision and comparable recall. They also describe a new method for producing stoplists, used to configure RAKE for particular domains and corpora; they apply RAKE to a corpus of news articles and describe metrics for evaluating the exclusivity, essentiality, and generality of extracted keywords, allowing a system to recognize keywords that are essential or general to documents in the absence of manual annotations. RAKE is grounded in the observation that keywords commonly comprise multiple words but rarely contain standard punctuation or stop words, such as the function words "and", "the", and "of", or other words with minimal lexical meaning. The input parameters for RAKE comprise a list of stop words (a stoplist), a set of phrase delimiters, and a set of word delimiters. RAKE uses the stop words and phrase delimiters to divide the document text into candidate keywords, which are sequences of content words as they occur in the text. Co-occurrences of words within these candidate keywords are meaningful and allow word co-occurrence to be detected without an arbitrarily sized sliding window. Word relations are thus measured in a manner that automatically adapts to the style and content of the text, enabling an adaptive and fine-grained measurement of word co-occurrence that is used to score candidate keywords.
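The candidate-splitting and scoring just described can be sketched compactly. The stoplist below is an illustrative stub, and the sketch uses the degree-to-frequency ratio, one of the scoring metrics discussed next.

```python
import re
from collections import defaultdict

STOP = {"and", "the", "of", "a", "in", "is", "are", "for", "to", "over"}

def rake(text: str) -> list:
    words = re.findall(r"[a-zA-Z]+", text.lower())
    # Split the word stream into candidate phrases at stop words.
    phrases, cur = [], []
    for w in words:
        if w in STOP:
            if cur:
                phrases.append(cur)
            cur = []
        else:
            cur.append(w)
    if cur:
        phrases.append(cur)
    # Degree counts co-occurrence within candidates; freq counts occurrences.
    freq, degree = defaultdict(int), defaultdict(int)
    for p in phrases:
        for w in p:
            freq[w] += 1
            degree[w] += len(p)        # includes the word itself
    score = {w: degree[w] / freq[w] for w in freq}
    ranked = [(" ".join(p), sum(score[w] for w in p)) for p in phrases]
    return sorted(ranked, key=lambda kv: -kv[1])

print(rake("Compatibility of systems of linear constraints over the "
           "set of natural numbers is studied")[:3])
```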
Several metrics for computing word scores were evaluated, based on the degree and frequency of word vertices in the co-occurrence graph: 1) word frequency, 2) word degree, and 3) the ratio of degree to frequency. They followed the evaluation approach described in [36], using only the testing set, because RAKE requires no training set. Multilingual Rapid Automatic Keyword Extraction (mRAKE) is a variant of RAKE by [37] whose features are: automatic keyword extraction from text written in any language; no need to know the language of the text beforehand; no need to supply a stopword list; and 26 languages currently available, with stopwords generated from the provided text for the rest.

2.4.2 Other Schemes [38, 39]

Several other schemes have been examined in the field of keyword handling, such as term frequency (TF) and keyword adjacency (KA), TextRank [36], TAKE [38], and SwiftRank [39]. We use TF, TextRank, and mRAKE in our system for keyword generation, so that we obtain all frequently used words in the text.

2.5 Discussion

All the models presented in this chapter have pros and cons, but no research has been done for Bangla official documents. In these circumstances, we propose a knowledge extraction model that solves our problem, together with a natural-language-based model for finding a specific piece of knowledge. The details of the proposed structure are presented in the fourth chapter, after the theoretical considerations discussed in the next chapter.

CHAPTER III
Theoretical Consideration

3.1 Introduction

This chapter reviews the theoretical analysis and presents the background knowledge required to implement the system and obtain the results. By using the connections between the parts of the data, we can extract the related information and knowledge [40], as depicted in Fig. 3.1. Several techniques used to extract information from text are described in the following sections.

[Fig. 3.1: Data, Information, Knowledge, Wisdom Chain.]

3.2 Word Density [11]

The document text contains many sparse lines corresponding to table data, equations, authors' names and affiliations, etc. Normally, when converting from PDF to text, the structure of table text is lost, and the mathematical symbols in equations are not correctly transformed to text form; hence these lines need to be removed. To recognize these sparse lines, [11] used a word density measure defined as

    d_i = L / (L + S)    (3.1)

where d_i is the word density of line i, L is the length of line i in words, and S is the number of spaces in line i. Note that the word density of a usual text line is larger than 0.5, because a usual line has S = L - 1. Only lines with word density less than 0.5 are filtered out of the document. The cleaned-up text is then sent to a sentence segmenter, which splits the document text into its constituent sentences and produces the sentence set S.
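A direct implementation of the filter of Eq. (3.1): a sparse, space-padded table line falls below the 0.5 threshold, while normal prose stays above it.

```python
def word_density(line: str) -> float:
    """Eq. (3.1): L words over L words plus S space characters."""
    L = len(line.split())
    S = line.count(" ")
    return L / (L + S) if (L + S) else 0.0

def dense_lines(text: str) -> list:
    """Keep only the lines that pass the 0.5 word-density threshold."""
    return [ln for ln in text.splitlines() if word_density(ln) > 0.5]

print(word_density("a normal line of prose"))   # 0.555... (kept)
print(word_density("3.2      0.71      12"))    # well below 0.5 (dropped)
```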
3.3 Similarity with Caption [11]

In [11], information cues present in the caption are used: a score is assigned to each sentence based on its similarity to the caption. After removing stopwords from the caption sentence and stemming with Porter's algorithm [41], the remaining keywords form a "query" that provides cues about the information covered by the document-element. This query is then used to assign similarity scores to all sentences in the document based on their similarity to the query. Okapi BM25 [42], [43] can be used as the similarity measure, since it has proved very effective in a wide variety of IR tasks. The inverse sentence frequency factor is similar in function to the inverse document frequency (IDF) used in information retrieval and down-weights common terms; the second term of Okapi BM25 [42] represents the frequency of each query term in the sentence, normalized by sentence length and scaled. After the scores of all sentences are computed, the highest-ranked sentences are picked.

3.4 Naive Bayes Classifier [11]

Naive Bayes classifiers have previously been used fruitfully to extract sentences for document summarization [44], [45]. The method is simple and fast and can be readily adapted for use in digital libraries holding many documents. It works as follows. Let S_d be the set of sentences related to the document-element d, and let S be the set of all sentences in the document D. Given the features F_1, F_2, ..., F_k of a sentence s in S, Bayes' rule computes the probability that s belongs to S_d as

    P(s in S_d | F_1, F_2, ..., F_k) = P(F_1, F_2, ..., F_k | s in S_d) P(s in S_d) / P(F_1, F_2, ..., F_k)    (3.2)

Assuming independent features, this can be written as

    P(s in S_d | F_1, F_2, ..., F_k) = [ prod_{j=1..k} P(F_j | s in S_d) ] P(s in S_d) / prod_{j=1..k} P(F_j)    (3.3)

The probabilities P(F_j | s in S_d) and P(F_j) are not known a priori, but they can be estimated from frequencies in the training set. This gives a simple Bayesian classification function that assigns a probability score to each sentence in the document; the top-scoring sentences are taken as related to document-elements. The scores of all sentences in the document are normalized to the range [0, 1] in [11]. Here P(s in S_d) is identical for all sentences in the document and is therefore a constant; since only the relative values of the sentence scores matter, not the absolute values, this constant is ignored by [11].
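The BM25 similarity of Section 3.3 can be implemented in a few lines. This sketch uses the common defaults k1 = 1.5 and b = 0.75 and plain whitespace tokenization, which are assumptions for the example and not necessarily the exact setup of [11].

```python
import math
from collections import Counter

def bm25_scores(sentences, query, k1=1.5, b=0.75):
    """Score each sentence against a caption-derived query with Okapi BM25."""
    docs = [s.lower().split() for s in sentences]
    avgdl = sum(len(d) for d in docs) / len(docs)
    N = len(docs)
    df = Counter(t for d in docs for t in set(d))   # sentence frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for q in query.lower().split():
            if q not in tf:
                continue
            idf = math.log(1 + (N - df[q] + 0.5) / (df[q] + 0.5))
            s += idf * tf[q] * (k1 + 1) / (
                tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

sents = ["the table shows accuracy results",
         "we thank the reviewers",
         "accuracy results for all models appear in the table"]
print(bm25_scores(sents, "table accuracy results"))
```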
3.5 Sentence Selection [11]

In general, the sentence selection problem can be outlined as follows [11]: let U_i be the utility measure of sentence s_i, expressing whether or not it is beneficial to select the sentence. It is defined as

    U_i = f(i) - g(i)    (3.4)

where f(i) is a function that favors the selection of s_i and g(i) is a function opposing the selection of s_i. Sentences with utility > 0 are included in the final set. When f(i) is the similarity between s_i and the query and g(i) measures the redundancy of s_i, U_i becomes similar to maximal marginal relevance [11]. Let the score of the i-th sentence be score_i, and let all sentences be ranked in decreasing order of their scores, so that i < j implies score_i >= score_j. [11] defined the utility measure U_i as

    U_i = score_i - (1 - e^(-K(i-1)))    (3.5)

3.6 Confusion Matrix [46, 47, 48, 49]

In machine learning, and specifically in statistical classification problems, a confusion matrix, also known as an error matrix, is a specific table layout that visualizes the performance of an algorithm. Each row of the matrix represents the occurrences of a predicted class, while each column represents the occurrences of an actual class, as shown in Fig. 3.2, where:

- condition positive (P) = the number of real positive cases in the data;
- condition negative (N) = the number of real negative cases in the data;
- true positive (TP) = a hit;
- true negative (TN) = a correct rejection;
- false positive (FP) = a false alarm, Type I error;
- false negative (FN) = a miss, Type II error.

[Fig. 3.2: Confusion Matrix.]

Additionally, precision and recall are used to examine a predictive model, computed over a validation or test dataset. Precision is computed over all the positive predictions of the model: it is the ratio of correct predictions to total predictions, i.e., it indicates how good the model is at what it predicted:

    Precision = TP / (TP + FP)    (3.6)

where TP + FP is the total predicted positive, as shown in Fig. 3.3; equivalently, Precision = true positive / total predicted positive.

[Fig. 3.3: Confusion Matrix for Total Predicted Positive.]

Recall is the ratio of the correct predictions to the total number of correct (positive) items in the set, i.e., the percentage of the positive items correctly predicted by the model; it indicates how good the model is at picking the correct items:

    Recall = TP / (TP + FN)    (3.7)

where TP + FN is the actual positive, as shown in Fig. 3.4; equivalently, Recall = true positive / actual positive.

[Fig. 3.4: Confusion Matrix for Actual Positive.]

Moreover, the F1 score is needed to balance precision and recall when there is an uneven class distribution (a large number of actual negatives). The F1 score is the harmonic mean of precision and recall, so it takes both false positives and false negatives into account [49]. F1 is typically more useful than accuracy, particularly for skewed class distributions: accuracy works well if false positives and false negatives have similar cost, but if their costs are very different, it is preferable to look at both precision and recall.

    F1 = 2 (Recall x Precision) / (Recall + Precision)    (3.8)
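Eqs. (3.6)-(3.8) computed on a small hypothetical count table:

```python
def precision(tp, fp):
    return tp / (tp + fp)          # Eq. (3.6)

def recall(tp, fn):
    return tp / (tp + fn)          # Eq. (3.7)

def f1(p, r):
    return 2 * p * r / (p + r)     # Eq. (3.8)

tp, fp, fn = 40, 10, 20            # hypothetical detection counts
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 2), round(r, 2), round(f1(p, r), 2))   # 0.8 0.67 0.73
```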
3.7 Graph-Based Centrality, PageRank, and TextRank [36, 50, 51]

The basic notion behind the PageRank algorithm is that the prominence of a node within a graph can be determined by taking into account global information recursively computed from the whole graph, with links to high-scoring nodes contributing more to the score of a node than links to low-scoring nodes. This prominence can be used as a measure of centrality. PageRank assigns each node in a directed graph a numerical score between 0 and 1, known as its PageRank score PR, defined as [36]

    PR(V_i) = (1 - d) + d x sum_{V_j in In(V_i)} PR(V_j) / |Out(V_j)|    (3.9)

where In(V_i) is the set of vertices that point to V_i, Out(V_j) is the set of vertices pointed to by V_j, and d is the damping factor, typically set to around 0.8 to 0.9 [52]. In the metaphor of a random surfer, the nodes visited most often are those with many links coming in from other commonly visited nodes, and the role of d is to reserve some probability for hopping to any node in the graph, thereby preventing the surfer from getting stuck in a disconnected part of the graph. Though originally suggested for ranking web pages, PageRank can be used more generally to determine the significance of an object in a network. For example, TextRank [36] and LexRank [53] use PageRank to rank sentences for extractive text summarization. The underlying hypothesis for computing the prominence of a sentence is that sentences similar to a large number of other important sentences are central; by ranking sentences according to their centrality, the highest-ranking sentences can be selected for the summary. In TextRank and LexRank, each sentence in a document (or set of documents) is represented by a node of a graph. However, unlike a web graph, whose edges are unweighted, the edges of a document graph are weighted with a value signifying the similarity between sentences. PageRank is easily adjusted to deal with weighted undirected edges:

    PR(V_i) = (1 - d) + d x sum_j [ w_ji / sum_k w_jk ] PR(V_j)    (3.10)

where w_ij is the similarity between V_i and V_j, and the summations run over all nodes of the graph. In [50] these weights are stored in a matrix W = {w_ij}, referred to as the "affinity matrix". TextRank and LexRank apply a single instance of PageRank to the pool of sentences. A significant feature of TextRank is that it requires neither deep linguistic knowledge nor domain- or language-specific annotated corpora, which makes it highly portable to other domains, genres, or languages.
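A minimal TextRank-style ranker following Eq. (3.10). The similarity measure (normalized word overlap) and the use of the networkx pagerank routine are implementation choices for this sketch, not necessarily those of [36].

```python
import networkx as nx

def textrank(sentences, d=0.85, top=2):
    """Rank sentences by weighted PageRank over an overlap-similarity graph."""
    sets = [set(s.lower().split()) for s in sentences]
    G = nx.Graph()
    G.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            w = len(sets[i] & sets[j]) / (1 + len(sets[i] | sets[j]))
            if w > 0:
                G.add_edge(i, j, weight=w)
    pr = nx.pagerank(G, alpha=d, weight="weight")
    return [sentences[i] for i in sorted(pr, key=pr.get, reverse=True)[:top]]

sents = ["the council approved the new syllabus",
         "the syllabus was discussed by the council",
         "lunch was served afterwards"]
print(textrank(sents))
```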
3.8 Mixture Models and the EM Algorithm [50]

A mixture model is expressed as a linear combination of C component densities p(x|m) in the form sum_m pi_m p(x|m), where the pi_m are called mixing coefficients and represent the prior probability that data point x was generated by component m of the mixture. Assuming the parameters of each component are denoted by a parameter vector theta_m, the problem is to determine the values of the components of this vector, which can be accomplished with the Expectation-Maximization (EM) algorithm. After random initialization of the parameter vectors theta_m, m = 1, ..., C, an expectation step (E-step) followed by a maximization step (M-step) is repeated until convergence. The E-step computes the cluster membership probabilities; for example, assuming spherical Gaussian mixture components, these probabilities are

    p(m|x_i) = pi_m p(x_i | mu_m, sigma_m) / sum_{m'=1..C} pi_m' p(x_i | mu_m', sigma_m'),  m = 1, ..., C    (3.11)

where mu_m and sigma_m are the current estimates of the mean and standard deviation, respectively, of component m. The denominator acts as a normalization factor, ensuring that 0 <= p(m|x_i) <= 1 and sum_{m=1..C} p(m|x_i) = 1. In the M-step, these probabilities are used to re-estimate the parameters; again for the spherical Gaussian case,

    mu_m = sum_i p(m|x_i) x_i / sum_i p(m|x_i),  m = 1, ..., C    (3.12)

    sigma_m^2 = sum_i p(m|x_i) ||x_i - mu_m||^2 / sum_i p(m|x_i),  m = 1, ..., C    (3.13)

    pi_m = (1/N) sum_i p(m|x_i),  m = 1, ..., C    (3.14)

The quantities p(x_i | mu_m, sigma_m) are called likelihoods; in the Gaussian case they are simply the value of the Gaussian with mean mu_m and variance sigma_m^2 evaluated at the point x_i.
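A bare-bones EM loop for a one-dimensional spherical Gaussian mixture, following Eqs. (3.11)-(3.14). The random initialization and the fixed iteration count are simplifications for the sketch.

```python
import numpy as np

def em_gmm(x, C=3, iters=50):
    """Fit a C-component 1-D Gaussian mixture to the samples in x."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, C)                          # random initial means
    sigma = np.full(C, x.std() + 1e-9)
    pi = np.full(C, 1.0 / C)
    for _ in range(iters):
        # E-step: membership probabilities p(m | x_i), Eq. (3.11).
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters, Eqs. (3.12)-(3.14).
        Nm = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nm
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nm) + 1e-9
        pi = Nm / len(x)
    return mu, sigma, pi

x = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(5, 1, 100)])
print(em_gmm(x, C=2))
```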
3.9 Discussion

This chapter presented the theoretical analyses behind the proposed structures. Without the theory of the models discussed above, it would be hard to follow our system, which is described in the next chapter.

CHAPTER IV
Methodology

4.1 Introduction

A large number of official Bangla files are produced by professionals, the judiciary, and academics, and it is very difficult to find previous or specific information in them. Our proposed system therefore detects the requested knowledge and shows the result to the user, with the following outcomes: it extracts information from official Bangla documents, letting the user find his or her desired information through a set of knowledge-based methods for Bangla text, and it automatically discovers and extracts the decisions and agendas from official documents, performing analysis and classification on a sample dataset to detect knowledge.

4.2 Realization of the Method for Proposed Bangla Knowledge Extraction

This chapter presents the process of extracting decisions, agendas, and the user's query result with keywords from Bangla PDFs. Fig. 4.1 demonstrates the high-level design of the proposed system. First, the Bangla documents are processed to identify each agenda with its decision. The extracted knowledge then provides the date and the meeting number of the document, and the results are presented chronologically. The knowledge extraction algorithm is presented in Fig. 4.2; it calls two sub-algorithms, Extraction() and Decision Extraction with Features(). Extraction() extracts all decisions and stores them in the decision pool; the user query is then answered from the decision pool by calling Decision Extraction with Features(). To identify the desired lines, we process each document to get the pure text, extract the knowledge from the text with the corresponding features, and arrange the extracted information by meeting date and meeting number. From the extracted decisions we extract keywords using different algorithms and select the most frequent keywords among their outputs. These keywords maintain a knowledge base with synonyms from the domain and are mapped against the user query; the user's findings are then shown as the result.

[Fig. 4.1: Design of Proposed Knowledge Extraction from Official Bangla Documents.]

We divide the task into three major parts: extraction of agenda and decision text, ordering the documents chronologically, and finding the user query in the extracted decision pool. The knowledge extraction algorithm for official Bangla documents is given below; the algorithms Extraction() and Decision Extraction with Features() are given in Fig. 4.2 and Fig. 4.4.

Fig. 4.2: Proposed Knowledge Extraction Algorithm

Algorithm: Knowledge Extraction()
1. Run the algorithm Extraction() and print the result.
2. Decision pool <- the set of extracted decision sentences.
3. Feed the decision pool to the algorithm Decision Extraction with Features().
4. Print the result of step 3 with the relevant information.
5. Classify the decision-pool sentences.

The design for extracting agendas and decisions and the corresponding algorithm are shown in Fig. 4.3 and Fig. 4.4, respectively. For each document, the system extracts the required sentences using the features described below. The system takes the PDF documents D_s, a set of Bangla documents, and a query q as input, and returns R_s, the set of Bangla documents with the detected results. The major parts of the proposed model are given in the following subsections.

[Fig. 4.3: Design of Agenda and Decision Extraction from Official Bangla Documents.]

Fig. 4.4: Proposed Algorithm for Agenda and Decision Extraction from Bangla Documents

Algorithm: Extraction()
Input: documents D_s = {D_1, D_2, D_3, ..., D_n}; query q
Output: R_s = {R_1, R_2, R_3, ..., R_n}
1. For each D in D_s:
   a. T <- extract and clean the text of D
   b. T <- map T against the knowledge base (broken-word repair)
   c. S_t <- sentence extraction on T, S_t = {s_1, s_2, ..., s_n}
   d. For each s in S_t, compute:
      i.   Frequency(w), Score(s), and Location(s), for the words w = {w_1, w_2, ..., w_n}
      ii.  R_1 <- match the agenda phrase (A) or q in s
      iii. R_2 <- BM25 score of s
      iv.  R_3 <- sentence weight of s with respect to q
      v.   R <- combine R_1, R_2, R_3 and remove duplicates, if any
   e. R_s = {R_1, R_2, ..., R_n}
2. Print R_s
3. Return R_s

Fig. 4.5 shows the system for keyword extraction and finding the user query in
Fig. 4.5 shows the system for keyword extraction and finding the user query in the documents, and Fig. 4.6 shows the corresponding algorithm. The system takes the documents Dsn, a set of Bangla decisions, together with a query Q, and returns Tsn, a set of detected Bangla texts. For each query, the system collects the required lines using the semantic features explained in the following subsections.

Fig. 4.5: Design of Keyword Extraction and Finding User Query from Documents

Fig. 4.6: Proposed Algorithm for User Query Extraction with Features from Decision Pool

Algorithm: Decision Extraction with features ()
Input decisions: Dsn = {Ds1, Ds2, Ds3, …, Dsn}
Query = Q
Output: Tsn = {Ts1, Ts2, Ts3, …, Tsn}
1. For each D ∈ Dsn
   a. Kw ← extract the keywords
   b. end
2. Ks ← rank the keywords
3. For each S ∈ Dsn
   a. create the feature set with the keywords
   b. T1 ← match the feature set
   c. T2 ← match Q
   d. T ← combine T1, T2 and remove duplicates, if any
   e. end
   f. Ts = {T1, T2, T3, …, Tn}
   g. Tsn = Ts
   h. end
4. Order (Tsn)
5. Return Tsn

4.3 Data Selection and Pre-processing

The system takes the PDF documents Dn, a set of Bangla documents, as input together with a query Q, and returns Tn, a set of Bangla documents with the detected results.

4.3.1 Selection of the Target Data
For the selection of the data, we collected the resolutions of the academic council of Khulna University of Engineering & Technology (KUET) as the target data for a case study; the data are written in Bangla doc files. These files must be written in Unicode format, otherwise the resulting PDF files cannot be read by our process, and the Unicode font must be embedded in the doc file before the PDF is produced. The files have a specific format with structured data, in which the different agendas and their decisions from the meetings of the academic council of KUET are presented. We experimented with 29 resolutions of the Academic Council's meetings of KUET for agenda and decision extraction and for user query extraction.

4.3.2 Pre-processing the Data
Inspired by Hassan [55] and Bhatia et al. [12], we used PDFBox [23] to extract the text and then cleaned the sentences by removing tables, web and email addresses, and references. The PDF must be produced from a Unicode file with the font embedded. After pre-processing, the data is transformed into a specific format from which the required data can be mined.
1) Because of the Bangla vowel marks, there are some broken words in the text. To handle this problem, we built our own knowledge base for this specific domain; for example, "เฆชเงเฆฐเฆเฆฃเฆคเฆพ" becomes "เฆชเงเฆฐเฆฃเฆฃเฆคเฆพ". Broken words found in a document are matched against the knowledge-base words and replaced with their correct forms. We built this knowledge base of 565 words from 12 documents of the target data; it works as a small database for retrieving the words from the files.
2) PDFBox prints the text in the order in which it appears in the document [14], [9]. Every line of the text is therefore given a sequence number, which records the location of its sentences.
3) The sentences are then segmented on the Bangla full stop (the dari, "।"), the different punctuation marks are removed, and the word frequencies are calculated.
4) Every sentence is given a score by summing the frequencies of the words (terms) it contains:

Sentence Score = Σ (word frequency in the sentence)   (4.1)

5) The sentence weight is then calculated with the query value [56] (a sketch follows):

Sentence Weight = query term value + Σ (word frequency in the sentence)   (4.2)
4.4 Feature and Patterns Specifications for Decision Extraction

In the PDF files there is a word, "আলোচ্যসূচী", which means "agenda"; each point is discussed under this word together with the meeting number and the topic number, and a discussion follows immediately after the "agenda". A decision may or may not appear after the discussion; when it does, it is written as "সিদ্ধান্ত" followed by the bisarga mark ("ঃ"), and it means "decision". For the rest of the paper we refer to "আলোচ্যসূচী" as "agenda" and to "সিদ্ধান্তঃ" as "decision". The aim is to fetch the "agenda", the "decision", and the words queried by the user, if they are discussed in the document. For this we used three kinds of features. A phrase or keyword can be matched directly against the words of the sentences; however, with such a direct search, which we call rule-based or exact-phrase-rule features, we are not able to find all the information we are looking for [11], [12]. Hence we used two further kinds of features. The features are discussed below.

4.4.1 Query-Based Features
1. The phrase "agenda" is searched directly to extract the agenda lines. A set of regular expressions is used in this process: all the lines that contain the phrase are fetched by the regular expression (regex). The same holds for decision extraction. In this way, however, we only find the lines that exactly match the phrase and miss the relevant information around them, which is why the content-based and context-based features described later are needed. In Example 4.1 (Fig. 4.7) there are two agendas; using only the rule-based features we get just the first line containing the agenda keyword, i.e., a single agenda line, and we obtain neither the following list of agendas nor the related information.
Fig. 4.7: Example 4.1
2. Similarly, the "decision" phrase, or a query word, is found using a regex over all the documents. Moreover, the author of a document sometimes writes "সিদ্ধান্তঃ" as "সিদ্ধান্ত"; a regex sketch of this step is given below.
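A minimal sketch of this rule-based step, assuming the glossed agenda/decision markers from the beginning of Section 4.4; the optional-bisarga pattern reflects the spelling variation just described:

```python
import re

# The decision marker may appear with or without the trailing bisarga,
# so the pattern makes it optional.
AGENDA_RE = re.compile("আলোচ্যসূচী")
DECISION_RE = re.compile("সিদ্ধান্ত[ঃ:]?")

def rule_based_lines(lines):
    """Fetch every line matching the agenda or decision pattern."""
    agendas = [ln for ln in lines if AGENDA_RE.search(ln)]
    decisions = [ln for ln in lines if DECISION_RE.search(ln)]
    return agendas, decisions
```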
Exact matching alone will not give a perfect result at this point, because a line may contain the phrase only as a reference and not as a decision, while many lines that state the decision of an "agenda" under the "decision" phrase do not contain the phrase themselves. That is why we also need to search with content-based and context-based features. For the user query, the same process as above is used to extract the information from the documents; the similarity of the extracted sentences is then measured by the content-based features.
3. To calculate the similarity of the sentences with the query, the "agenda", or the "decision", we compute similarity scores for all sentences in the document based on their similarity to the query [11]. We used Okapi BM25 [42], [43], [11], since it has been verified by experts as a very effective procedure in various information-retrieval methods [11]. If Q is the user query, the BM25 score of a sentence S in document D is calculated as

BM25(Q, S) = Σ_{t∈Q} log(N / s_ft) · ((k1 + 1) · tf_tS) / (k1 · ((1 − b) + b · (l_S / l_av)) + tf_tS) · ((k3 + 1) · tf_tQ) / (k3 + tf_tQ)   (4.3)

where
N = total number of sentences in a PDF,
s_ft = number of sentences that contain the term t of query Q,
tf_tS = frequency of term t in sentence S,
tf_tQ = frequency of term t in query Q,
l_S = length of sentence S,
l_av = average length of a sentence in the document,
and k1, k3, and b are constants fixed to 2, 2, and 0.75, respectively [11]. The term log(N / s_ft) in (4.3) is known as the inverse sentence frequency [11]. In the experiments we used a single query term, so the query contains exactly one term. We used BM25 to measure the similarity of the extracted sentences, together with the agreement between equations (4.2) and (4.3), to settle the context of a sentence; a runnable sketch of (4.3) is given at the end of this subsection.

4.4.2 Content-Based Features
We found that certain definite phrases are used frequently when a "decision" on an "agenda" is described. We listed 77 such phrases by manually analyzing 12 documents and trained our model with these phrases, given in Fig. 4.8. We noticed that the phrases are placed at the end of the line of a decision-making sentence. Such decision-making lines make some remark, such as permitting a request, ordering that something be done, or making a change in the system; that is, all the phrases are verbs that point to doing something in the future, or that approve, order, or suggest for a particular instance. We first tried to build a content-based feature from a single word or verb taken from these phrases, but many lines in the discussion sections of the documents contain a single word from these phrases without being decision-taking sentences. For this reason we used whole phrases instead of single verbs. For example, the verb "เฆนเฆฃเง" is used in many places in the documents, not only in the "decision" line, so we used the phrases "เงเฆพเฆธเฆค เฆเฆฐเฆพ เฆนเฆฃเง", "เฆเฆฎเฆพ เฆธเฆฟเฆฃเฆค เฆนเฆฃเง", "เฆชเงเฆฐเฆชเงเฆฐเฆฐเฆฃ เฆเฆฐเฆฃเฆค เฆนเฆฃเง", etc. instead of "เฆนเฆฃเง" alone.
Fig. 4.8: Decision Making Phrases
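The following sketch implements Eq. (4.3) directly, with k1 = k3 = 2 and b = 0.75 as fixed above; whitespace tokenization is an assumption:

```python
import math

def bm25(query, sentence, all_sentences, k1=2.0, k3=2.0, b=0.75):
    """Sentence-level Okapi BM25 per Eq. (4.3)."""
    if not all_sentences:
        return 0.0
    N = len(all_sentences)
    l_av = sum(len(s.split()) for s in all_sentences) / N or 1.0
    l_s = len(sentence.split())
    s_words = sentence.split()
    q_words = query.split()
    score = 0.0
    for t in set(q_words):
        s_ft = sum(1 for s in all_sentences if t in s.split())
        if s_ft == 0:
            continue                       # term absent from every sentence
        tf_ts = s_words.count(t)
        tf_tq = q_words.count(t)
        isf = math.log(N / s_ft)           # inverse sentence frequency
        part_s = ((k1 + 1) * tf_ts) / (k1 * ((1 - b) + b * (l_s / l_av)) + tf_ts)
        part_q = ((k3 + 1) * tf_tq) / (k3 + tf_tq)
        score += isf * part_s * part_q
    return score
```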
4.4.3 Context-Based Features
1. All of the above similarity and matching processes consider only the content, even the rule- and phrase-mapping methods, and some sentences cannot be found by these procedures. The documents contain many segments with a numbering system that are listed as "agenda" and "decision"; in these cases we need to identify the context of the sentences. To identify a nearby sentence that is contextually related to the content, we identify the location of the sentence from the sequence of the text. For instance, we found sentences that do not contain the phrase but are actually "agenda" items with relevant information, or continuations of the "agenda" at a different level. Here we find the position of the word within the sentence, and if it is not the first word of the sentence, the sentence is removed from the "agenda" list. If a numbering system follows immediately after the "agenda" word at the beginning of the sentence, we look for the rest of the numbering sequence as the continuation of the "agenda"; an "agenda" listed as a number cannot be fetched by the rule-based or content-based procedures. In Example 4.2 (Fig. 4.9), the third point of the agenda is not added to the list; we used the positions of the decision and the agenda in the text to extract the knowledge. The same holds for decision extraction. A sketch of this position-based step is given at the end of this subsection.
Fig. 4.9: Example 4.2
2. If the sentence containing the "decision" appears immediately before an "agenda", has some numbering sequence after it, and contains the content-based features, the sentence is added to the decision list. Nevertheless, many sentences contain "decision" only as a suggestion to the authority and are not decision-making sentences in our case. Moreover, if the "agenda" has a numbering order, there is more than one agenda for a topic. Thus, by finding the location, we extracted the sentences. After extracting the numbered decision list, we check whether the immediately preceding "decision" sentence contains specific phrases such as "เฆธเฆฟเฆฎเงเฆจเฆฐเงเฆช เฆธเฆธเฆฆเงเฆงเฆพเฆจเงเฆค เฆเงเฆนเงเฆค เฆนเง" or "เฆธเฆฟเฆฎเงเฆจเฆธ เฆธเฆฟเฆค เฆธเฆธเฆฆเงเฆงเฆพเฆจเงเฆค เฆเงเฆนเงเฆค เฆนเง"; if such a phrase is found, the sentence is listed as a "decision" sentence, and the extra sentences that do not satisfy the above criteria are excluded from the list. Example 4.3 (Fig. 4.10) shows that the position of the decision is very important for retrieving the information: it should appear before the agenda, as described in the feature above.
Fig. 4.10: Example 4.3
3. Additionally, quotation marks (" ") often surround the "decision" word in a line immediately before the actual "decision" sentence; such lines merely suggest a decision to the authority, and we exclude this type of "decision" sentence, as they are not the ultimate decision taken by the authority. If there is a duplicate sentence, it is also removed from the list. Example 4.4 (Fig. 4.11) shows a decision with quotation marks, which we excluded from the decision list.
Fig. 4.11: Example 4.4
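A sketch of the position-based continuation step from feature 1 above, assuming each line carries its sequence number implicitly as its list index; note that in Python 3 the regex class \d also matches Bangla digits (০–৯):

```python
import re

# In Python 3, \d also matches Bangla digits (০-৯).
NUMBERED = re.compile(r"^\s*\d+[.)]")

def numbered_continuation(lines, hit_idx):
    """Collect the numbered items that immediately follow the agenda or
    decision line at lines[hit_idx], treating them as its continuation."""
    block = [lines[hit_idx]]
    j = hit_idx + 1
    while j < len(lines) and NUMBERED.match(lines[j]):
        block.append(lines[j])
        j += 1
    return block
```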
4.5 Ordering the Documents Chronologically
All the files have a date at the beginning of the content together with the meeting number. We extracted the date and the meeting number and named each file accordingly. The files are then sorted by meeting number and presented with the extracted date and meeting-number information. In Example 4.5 (Fig. 4.12), the chronological information found in the text is stored with the file name; we stored the date of the meeting.
Fig. 4.12: Example 4.5

4.6 Processing of the Keywords from the Extracted Decision Pool
This section presents the process of keyword extraction from the Bangla minutes of the meetings of the academic council of KUET. Starting from the processed Bangla documents, we extract the keywords, assuming that all the Bangla words have been extracted correctly. Inspired by Bhatia [11], [12], the extracted set of decisions is used and the sentences are cleaned by removing stopwords; here the domain-specific stopwords are combined with the common stopwords. Keyword extraction is then carried out with several mature algorithms, described as follows.
1. There are some common stopwords in the text, such as "เฆเฆฅเงเฆพ", "เฆเฆฟเงเฆฏเฆพเงเง", "เฆเฆฃเฆฟเฆ", "เฆเฆฃเฆฟเฆฃเฆ", "เฆเฆจเงเฆคเฆค", "เฆเฆฟเฆฏ", etc. We used 387 common Bangla stopwords and added 80 domain-specific stopwords to the main stopword list; some examples are given in Fig. 4.13.
2. After stopword elimination from the decision list, the term frequency–inverse document frequency (tf–idf) [11], which reflects how important a word is in a document or collection, is used to extract the keywords. The term frequency for a document is

tf(t, d) = n(t, d) / Σ_{t′∈d} n(t′, d)   (4.4)

where d is the document, t is a term, n(t, d) is the number of times t occurs in d, and Σ_{t′∈d} n(t′, d) is the total number of words in d. The inverse document frequency is a measure of how much information the word provides [12]:

idf(t, D) = log(N / n_t)   (4.5)

where N = |D| is the total number of documents and n_t is the number of documents in which the term t appears. The tf–idf is then [12]

tf–idf(t, d, D) = tf(t, d) · idf(t, D)   (4.6)

Fig. 4.13: Stopwords for the Domain
3. Rapid Automatic Keyword Extraction (RAKE), a well-known natural-language-processing algorithm, can automatically extract keywords from documents [33]. We used Multilingual Rapid Automatic Keyword Extraction (mRake) [37], which is language independent: its stopwords are generated from the text itself, so the more text is supplied, the more specific the stopwords become [37]. TextRank, described in [36], [50], [52], creates a graph of the words of a document and of the relationships between them, and then finds the most important words based on importance scores calculated over the entire word graph; we used this algorithm to rank the sentences. A sketch of the tf–idf step follows.
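A sketch of the tf–idf ranking of Eqs. (4.4)–(4.6) over the decision pool; keeping each word's best per-document score and whitespace tokenization are simplifying assumptions:

```python
import math
from collections import Counter

def tf_idf_keywords(docs, top_k=20):
    """Rank words by tf-idf over the decision documents, Eqs. (4.4)-(4.6)."""
    N = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter()                       # n_t: documents containing each term
    for toks in tokenized:
        df.update(set(toks))
    best = Counter()
    for toks in tokenized:
        tf = Counter(toks)
        total = len(toks) or 1
        for t, n in tf.items():
            w = (n / total) * math.log(N / df[t])   # Eq. (4.6)
            best[t] = max(best[t], w)    # keep each word's best score (assumption)
    return [w for w, _ in best.most_common(top_k)]
```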
4. We then selected the 20 most common keywords from these techniques: the most frequently extracted keywords are listed with their tf–idf and RAKE scores, and the most frequent keywords across the documents are chosen. Two examples, "post-facto" and "ME", are given in Fig. 4.14.
Fig. 4.14: Example of the Frequency of Two Words: Post Facto and Mechanical

4.7 Feature and Pattern Extraction for Decisions with User Query
Next, the query knowledge is extracted from these decisions. To extract the desired knowledge, we process the decision store to obtain the important text and then, with the corresponding features, extract from the text the knowledge that is mapped to the user query. Fig. 4.5 shows the strategy of the proposed system for keyword extraction and finding the user query in the documents, and Fig. 4.6 presents the algorithm. The files contain many keywords, and we selected the most common, high-frequency keywords from the keyword list as the keyword knowledge base. When a user issues a query over these decision files, the system first looks into the keyword knowledge base, which carries three feature specifications; if the query is not in the knowledge base, the matching words are searched for directly. The three kinds of features, in which a phrase or keyword is matched against the words of the sentences, are described below; since the direct search alone cannot find all the information we are looking for [11], [12], we used the other features as well.

4.7.1 Content-Based Features
The phrase or user query is searched directly with a set of regular expressions, where all the lines containing the phrase are fetched by the regex. In this way, however, we only find the words that exactly match the phrase and miss relevant information when only this one feature is used. We therefore use a set of suffixes, called "pratyay", together with the query word from the keyword knowledge base. If the query word is "เฆเฆธเฆฎเฆเฆฟ", it is one of the 20 selected keywords, and for every selected keyword there is a set of suffixed forms, such as "เฆเฆธเฆฎเฆเฆฟเฆธเฆฎเงเฆน", "เฆเฆธเฆฎเฆเฆฟ-เฆเฆฐ", "เฆเฆธเฆฎเฆเฆฟเฆฃเฆค", etc.; a sketch of this expansion follows.
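A sketch of this suffix ("pratyay") expansion; the suffix list here is hypothetical, standing in for the per-keyword sets such as those shown above:

```python
# Hypothetical suffix list; the thesis keeps one such "pratyay" set per
# selected keyword, as in the committee example above.
PRATYAY = ["সমূহ", "-এর", "তে"]

def expand_query(keyword):
    """Return the keyword plus its suffixed (inflected) variants."""
    return [keyword] + [keyword + suffix for suffix in PRATYAY]
```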
4.7.2 Semantics Features
We found certain definite words with various kinds of synonyms that are frequently used to express the same meaning; they are described below.
1. In the decision files, many English words are written in Bangla script, such as "Sessional" as "เฆชเงเฆฐเฆธเฆถเฆฟเฆพ", alongside the English word itself. We therefore built a knowledge base mapping English words to their Bangla-script forms, so that if the user query is "เฆชเงเฆฐเฆธเฆถเฆฟเฆพ" the system can also find occurrences of "Sessional". In the 20-keyword knowledge base we listed words such as "เฆเฆธเฆกเฆฟเฆฟเฆฏเฆพเฆจเงเฆธ", "เฆเงเฆฐเฆพเฆเฆฃเงเงเฆฟ", "เฆชเงเฆฐเฆฎเฆเฆพเฆธเฆฟเฆเฆฏเฆพ", "เฆชเงเฆฐเฆฐเฆเฆเฆฃเงเฆถเฆฟ", "เฆเฆธเฆฎเฆเฆฟ", "เฆธเฆฅเฆเฆฐเง", etc.
2. Some English and Bangla words are used interchangeably, such as "เฆคเฆคเงเฆคเงเฆฌเงเง" with "theory", "เฆชเงเฆฐเฆเฆพเฆธ เฆฟ เฆชเงเฆฐเฆคเฆฏเฆพเฆนเฆพเฆฐ" with "course withdrawal", "committee" with "เฆเฆธเฆฎเฆเฆฟ", "เฆเฆฟเฆฟเฆพเฆคเงเฆคเฆฐ" with "Post Facto", "เฆชเงเฆฐเฆฐเฆเฆเฆฃเงเฆถเฆฟ" with "registration", and "เฆชเงเฆฐเฆฎเฆเฆพเฆธเฆฟเฆเฆฏเฆพ" with "Mechanical"; both directions occur throughout the documents, so we considered this as well.
3. Moreover, some words appear both in short form and in elaborated form, such as "เฆชเงเฆฐเฆฎเฆเฆพเฆธเฆฟเฆเฆฏเฆพ" with "ME" or "เฆเฆฎเฆ", and "CASR" with "เฆเฆเงเฆ เฆธเฆถเฆเงเฆทเฆพ เฆ เฆเฆฃเงเฆทเฆฃเฆพ เฆเฆธเฆฎเฆเฆฟ", and the two forms are used interchangeably. We listed the most common keywords of this type for searching.
4. Many words have Bangla synonyms: 'เฆธเฆฟเฆฎเงเฆจเฆธ เฆธเฆฟเฆค', 'เฆธเฆฟเฆฃเฆฎเงเฆจ', 'เฆธเฆฟเฆฃเฆฎเงเฆจเฆพเฆเงเฆค', 'เฆธเฆฟเฆฎเงเฆจเงเฆธเฆฃ เฆฟเฆค', and 'เฆธเฆฟเฆฎเงเฆจเฆฐเงเฆช' all carry the same meaning but are used interchangeably in the documents, and Bangla words are sometimes spelled differently, such as 'เฆเฆฟเฆฟเฆพเฆคเงเฆคเฆฐ' and 'เฆเฆฟเฆฟเฆพเฆฃเฆคเงเฆคเฆพเฆฐ'. We built such a knowledge base for the 20 keywords.
5. We also classified words of the same type that show that a decision is very significant and that emphasize its topic, such as 'เฆเฆฐเงเฆฐเง', 'เฆเฆธเฆฟเฆเฆคเฆฐ', 'เฆฆเงเฆฐเฆคเง', 'เฆธเฆคเฆเฆฟ', 'เฆเฆธเฆฐเฆฎเฆพเฆฟเฆพ', 'Compulsory', 'เฆธเงเฆฅเฆธเฆเฆค', 'เงเฆนเง', 'เฆเฆเงเฆเงเฆเฆพเฆฐเฆฟเฆพเฆฎเฆพ', 'เฆฏเฆคเฆถเงเฆเงเฆฐ', 'เฆเฆฃ เฆพเฆฐเฆญเฆพเฆฃเง', 'เฆถเงเฆเงเฆฐ', and 'เฆเงเฆฐเงเฆคเงเฆฌเฆชเงเฆฃ เฆฟ', and added them to the knowledge base.
So far, however, we have applied only semantic conditions and have not considered the surrounding clues of a sentence; hence we need context-based features to find the exact context of the knowledge. The Bangla WordNet [57] could be used for similar Bangla words, but, as the discussion above shows, many English words are written in Bangla script and pronounced as English, so Bangla-to-Bangla and English-to-English meaning extraction alone would not give an accurate result.

4.7.3 Context-Based Features
In the previous sections we did not apply natural-language processing and extracted the knowledge only from the location of the sentences. Here we consider the context of a sentence and its connection to the required sentences: some sentences refer to another sentence as a reference, or are connected to a definite topic. From all the decision files of our working domain, we found five types of connecting words, which are given in Fig. 4.15 and discussed below.
1. A list of connection words that indicate a specific topic described previously.
2. A list of connection words that indicate a person or persons after a sentence.
3. A list of conditionally connecting words used immediately after a sentence in the dataset.
4. A list of words that immediately speak about the future or the past with the sentence.
5. An explanation list that explains the previous line.
If the query word is found in a sentence, these words are searched for in the next two consecutive sentences; conversely, if the query words are found in such a connection sentence, the previous sentence is extracted together with the current one. These connection words are mostly the first word of the sentence, but according to the context they can appear anywhere in it, so we treat them as location independent within the sentence. A sketch of this step follows.
Fig. 4.15: Connection Word List
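A sketch of the connection-word step; the connector list is a hypothetical stand-in for the five lists of Fig. 4.15:

```python
# Hypothetical connector list standing in for the five lists of Fig. 4.15.
CONNECTORS = {"তবে", "কিন্তু", "এছাড়া", "সুতরাং"}

def with_context(sentences, idx, window=2):
    """Pull in contextually connected sentences around a query hit: scan the
    next `window` sentences for a connector, and if the hit itself opens
    with a connector, include the preceding sentence too."""
    out = [sentences[idx]]
    for j in range(idx + 1, min(idx + 1 + window, len(sentences))):
        if any(c in sentences[j].split() for c in CONNECTORS):
            out.append(sentences[j])
    words = sentences[idx].split()
    if words and words[0] in CONNECTORS and idx > 0:
        out.insert(0, sentences[idx - 1])
    return out
```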
4.7.4 Classify the Documents with Keywords
All the files with decisions are categorized under the 20 keywords and stored with the meeting date. If the user query relates to these keywords, the information stored in the files is used to extract the knowledge. From the full set of documents we extracted the decision-making lines, and from these lines we extracted the 20 most frequent keywords in terms of tf–idf and RAKE; all the feature extraction described above is then performed with these 20 keywords, and we categorized the documents into 20 different topics accordingly. An example of seven word sets with their occurrences is given in Fig. 4.16.
Fig. 4.16: Example of Occurrence of Words

4.8 Conclusions
In this chapter we elaborated our proposed models within a specific domain, the minutes of the meetings of the academic council of KUET. We demonstrated a system that automatically discovers and extracts the decisions and agendas from a pool of official Bangla digital documents and allows extraction, detection, and analysis of the documents in that domain. The user-query technique can also be applied to pure text files, and both strategies can be used on any kind of official minutes of meetings with specific features.

CHAPTER V
Results and Discussions

5.1 Experimental Setup
In this chapter we present the experimental results along with the analyses of the proposed schemes, evaluated in terms of precision and recall. Since the process is divided into two parts, "agenda" and "decision" detection and user-query detection, we experimented with both. All experiments were run on a Windows machine with an Intel Core i5 2.40 GHz processor and 4 GB of RAM; we used Java and Python as programming languages to implement our algorithms and Weka [17] as an implementation tool for classification. We used nltk, matplotlib, scipy, numpy, sklearn, textblob, and other Python packages, and Gaussian Naive Bayes classifiers to classify the results. To install pip, we downloaded get-pip.py and then ran "python get-pip.py" in the command-prompt window.

5.2 Performance Analysis of the Structure
The results of the proposed system are discussed in the following subsections.

5.2.1 Extraction of Agenda and Decisions Text Analysis
For the result analysis of agenda and decision text extraction, we experimented with 29 resolutions of the Academic Council's meetings of KUET, extracting the data using the methods described in Chapter IV. The Naive Bayes model [10] can manage millions of digital documents efficiently and has previously been used effectively by many researchers for sentence extraction and classification [44], [45], [11]. It is defined as follows: if F1, F2, …
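As a minimal illustration of the classification setup of Section 5.1, the sketch below uses scikit-learn's GaussianNB on hypothetical toy feature vectors, which stand in for the sentence features extracted in Chapter IV:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy vectors, e.g. [BM25 score, sentence weight, phrase-match flag]; the
# real features come from the extraction pipeline of Chapter IV.
X = np.array([[3.1, 12, 1], [0.2, 4, 0], [2.7, 9, 1], [0.1, 3, 0]])
y = np.array([1, 0, 1, 0])               # 1 = decision sentence, 0 = other

clf = GaussianNB().fit(X, y)
print(clf.predict([[2.5, 10, 1]]))       # expected: [1]
```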