A study of readability of texts in Bangla through machine learning approaches

Contents: Abstract; Introduction; Importance of text readability; Importance of text readability for educational purposes; The context of Bangla; Issue of usage variations and diglossia in Bangla; Difference between Bangla and English; Related works (Classical methods; Cognitively motivated methods; Methods involving statistical language modelling; Work done in Bangla); Data preparation (Text selection); Applying existing English readability measures to Bangla texts; User study (Participants; Procedure); Computational models for text readability prediction (Model development by regression; Classification using support vector machines (SVM); Support vector regression (SVR)); Readability prediction system; Conclusion and perspective; References
Bangla Toxic Comment Classification (Machine Learning and Deep Learning Approach)

Proceedings of SMART-2019, IEEE Conference ID: 46866, 8th International Conference on System Modeling & Advancement in Research Trends, 22nd-23rd November, 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India. Copyright © IEEE-2019, ISBN: 978-1-7281-3245-7

Abstract—Toxic comment classification is a popular classification problem nowadays. There have been many attempts in English, but such work is rare for the Bangla language. We built a classifier for Bangla, trying different approaches to find the classifier with the best accuracy while also optimizing for log loss and Hamming loss. As this is a multi-label problem, we used binary relevance methods to apply binary classifiers.

Keywords: Bangla Toxic Comment, Machine Learning, Deep Learning, Binary Relevance, MultinomialNB, Classifier Chain, GaussianNB, Label Powerset, MLkNN, BP-MLL Neural Network

I. Introduction
Toxic comments are comments that irritate people and spread hate in a community. To keep the environment clean, online conversation needs regulation. We were first introduced to the toxic comment classification problem at www.kaggle.com by Jigsaw/Conversation AI [8], who provided a huge dataset. Inspired by that, we decided to do the same for the Bangla language. The problem, however, was the dataset: we built one by taking comments from Facebook page posts. Our dataset has seven columns, one for the feature and six for the labels, which represent the six different forms of toxic comments.
The feature is the comment's text and the labels are toxic, severe toxic, obscene, threat, insult, and identity hate.

We mainly used the binary relevance method with MultinomialNB, which is well known for classification with discrete features; with SVM; and with GaussianNB, which is used especially when the features have continuous values. We also used a classifier chain with MultinomialNB; the classifier chain is a problem transformation method that turns a multi-label classification problem into one or more single-label classification problems so that existing single-label classification algorithms such as SVM and Naïve Bayes can be used. Finally, we used Label Powerset with MultinomialNB, another problem transformation method; MLkNN (Multi-Label k-Nearest Neighbor); and BP-MLL (Backpropagation for Multi-Label Learning).

We first separated our feature and labels and converted the labels into an array, then removed punctuation from the feature. We tokenized the comment text with CountVectorizer and removed Bangla stop words, which we collected from a GitHub repository [10]. We then split the dataset into the desired ratio and were ready for implementation. After implementing all the classifiers, we visualized the results and compared the different classifiers. The work flow is shown in Figure 1.

Fig. 1: Flow Chart of Work Procedure

II. Paper Review
Both deep learning and shallow approaches do a good job in this regard, as Betty van Aken et al. [1] showed. The class distribution of the dataset they used is shown in Table 1.

Table 1: Dataset of [1]
Class           Number of occurrences
Clean           201,081
Toxic            21,984
Obscene          12,140
Insult           11,304
Identity Hate     2,117
Severe Toxic      1,968
Threat              689

A comparison of precision, recall, F1-measure, and ROC AUC on the Wikipedia dataset is shown in Table 2.

A.N.M. JuBaer, Abu Sayem and Md. Ashikur Rahman
Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
E-mail: jubaer15-7850@diu.edu.bd, abu15-7682@diu.edu.bd, ashikur15-7723@diu.edu.bd

Authorized licensed use limited to: Western Sydney University. Downloaded on July 26, 2020 at 11:54:39 UTC from IEEE Xplore. Restrictions apply.

Table 2: Results on the Wikipedia Dataset
Model                                    P      R      F1     AUC
CNN (FastText)                           0.730  0.860  0.776  0.981
LSTM (GloVe)                             0.740  0.840  0.777  0.980
Bidirectional LSTM (GloVe)               0.740  0.840  0.777  0.981
Bidirectional GRU Attention (FastText)   0.740  0.870  0.783  0.983
Logistic Regression (char-ngrams)        0.740  0.840  0.776  0.975
Ensemble                                 0.740  0.880  0.791  0.983

There is an analysis of sarcasm detection by Debanjan Ghosh et al. [3] that is quite similar to our work. They mainly used SVM with discrete features (SVMbl) and Long Short-Term Memory networks. Their results are shown in Table 3.

Table 3: Results of Different Classifiers on Sarcasm Detection
Experiment            P(S)   R(S)   F1(S)  P(NS)  R(NS)  F1(NS)
SVMrbl                65.55  66.67  66.10  66.10  64.96  65.52
SVMc+rbl              63.32  61.97  62.63  62.77  64.10  63.50
LSTMr                 67.90  66.23  67.10  67.02  68.80  67.93
LSTMc + LSTMr         66.19  79.49  72.23  74.33  59.40  66.03
LSTMconditional       70.03  76.92  73.32  74.41  67.10  70.56
LSTMras               69.45  70.94  70.19  70.30  68.80  69.45
LSTMcas + LSTMras     66.90  82.05  73.70  76.80  59.40  66.99
LSTMcaw + LSTMraw+s   65.90  74.35  69.88  70.59  61.53  65.75

Another work related to the toxic comment classification problem is by Spiros V. Georgakopoulos et al. [4], who used a CNN to classify toxic text. Their encoding model is shown in Figure 2.

Fig. 2: Encoding Methodology

They mainly relied on a convolutional neural network for toxic comment classification; their CNN work flow is shown in Figure 3.

Fig. 3: CNN Work Flow

Their results for various classifiers [4] are shown in Table 4.

Table 4: Results of [4]
             Accuracy        Specificity     FDR
Model        Mean    Std     Mean    Std     Mean    Std
CNNfix       0.912   0.002   0.917   0.006   0.083   0.007
CNNrand      0.895   0.003   0.906   0.015   0.092   0.017
kNN          0.697   0.008   0.590   0.016   0.335   0.010
LDA          0.808   0.005   0.826   0.010   0.179   0.009
NB           0.719   0.005   0.776   0.012   0.250   0.010
SVM          0.811   0.007   0.841   0.012   0.167   0.012

III. Dataset
We took data from Facebook only, collecting comments from different pages. We then labeled every single comment according to its meaning. Our labels are toxic, severe_toxic, obscene, threat, insult, and identity_hate. Sample data labels are visualized in Table 5.

Table 5: Data Label Visualization
Index  toxic  severe_toxic  obscene  threat  insult  identity_hate
0      0      0             0        0       0       0
1      1      0             0        0       0       0
2      0      0             0        0       0       0
3      1      0             0        0       1       0
4      0      0             0        0       0       0

Some sample examples from our dataset are shown below.

“আমার দেখা একজন নিরহংকার প্েোরর ক্রিকেট থেকে বিদাে নিতে চলেছেন... যিনি ছে খেলেও হাসেন আবার উইকেট পাইলেও হাসেন,,,, কমবেশি সবারই একজন খুব ভাল প্েোরর ।। আজ তার আন্তজাতিকক ...# ODI এর শেষ ম্যাচ ভাল থাকবেন লিজেন্ড ।”
Example 1: 1st Sample Comment from the Dataset

“দেশ উন্েনেরর জোোরে ভাসতেছে”
Example 2: 2nd Sample Comment from the Dataset

“কুত্তার বাচ্াা”
Example 3: 3rd Sample Comment from the Dataset

A. Dataset Creation Methodology
From Example 1, we can see that the comment is not only non-toxic but clean, so we put zeros in all labels. Example 2 is a bit toxic, so we put a one in the toxic label and zeros in the others. The third one, Example 3, violates the toxic, severe toxic, and obscene rules at the same time, so we put ones in those labels and zeros in the others. By following this technique, we built our own dataset to work on. The number of comments in the dataset by length is shown in Figure 4.

Fig. 4: Length-based Comment Numbers

A histogram of comments by length is shown in Figure 5.

Fig. 5: Histogram of Comments (length-based)

IV. Work Procedure
In this part we discuss the methods used in our work flow. Binary relevance (BR) is a problem transformation technique that allows a single-label classifier to work on a multi-label problem: given an n-labeled problem, BR transforms it into n single-label problems, on which well-known single-label classifiers can then be run.

A. Binary Relevance Method with MultinomialNB Classifier
MultinomialNB is one of the three types of Naïve Bayes classifier. It is especially suited to classification with discrete features and works well with integer values. The basis of MultinomialNB is equation 1, the probability of term t given class c estimated from term counts T:

P(t|c) = T_ct / Σ_{t′ ∈ V} T_ct′  (1)

MultinomialNB takes the tokens of a given document and considers the frequency of each term or token. However, it also works with fractional values, such as tf-idf weights. We implemented BR MultinomialNB from scratch; the accuracy of our BR MultinomialNB classifier is 52.30%.

B. Binary Relevance with SVM Classifier
After converting our multi-label problem into multiple single-label problems, we fed them to a Support Vector Machine. SVM is a supervised learning method driven by constrained optimization: it considers the possible separating hyperplanes and picks the one that maximizes the separating margin. A hyperplane is always (n−1)-dimensional if the feature space is n-dimensional.
Any hyperplane can be written mathematically as equation 2:

β0 + β1·x1 + β2·x2 + … + βn·xn = 0  (2)

We implemented it with sklearn and got an accuracy of 30.76%, which did not satisfy us.

C. BR Method with MultinomialNB (from scikit-learn)
Again, we ran the same MultinomialNB, but this time from sklearn. The accuracy we got is 52.30%, the same as before.

D. BR Method with GaussianNB
GaussianNB is another of the three Naïve Bayes classifiers. It implements the Gaussian Naïve Bayes algorithm for classification and depends on Bayes' theorem, shown in equation 3:

P(X|Y) = P(Y|X) · P(X) / P(Y)  (3)

where X is the hypothesis and Y is the evidence. We got an accuracy of 49.23% from the BR method with GaussianNB.

E. Classifier Chain with MultinomialNB Classifier
The classifier chain is another problem transformation method. We used MultinomialNB on the single-label problems provided by the classifier chain. The accuracy is 52.30%.

F. Label Powerset with MultinomialNB Classifier
Label Powerset is a problem transformation approach that transforms a multi-label problem into a multi-class problem, after which we applied MultinomialNB. 58.46% is the highest accuracy we got with Label Powerset and the MultinomialNB classifier.

G. MLkNN with k = 2
MLkNN is a kNN classification method adapted for multi-label classification. It uses k-NN to find the nearest examples to a test instance and then uses Bayesian inference to select the assigned labels. We got an accuracy of 58.46%.

H. BP-MLL Neural Networks
BP-MLL is Backpropagation for Multi-Label Learning. We used Keras for this purpose. There are two layers, of 4 and 8 units, in this neural network. The accuracy we got using BP-MLL is 60.00%, the highest among all the classifiers.

V. Results
Having implemented the different classifiers, we can see that the accuracy of the BP-MLL neural network is the highest. This can be seen in Figure 6, which shows the accuracy of the different classifiers as a bar chart.

Fig. 6: Accuracy Bar Chart of Different Classifiers

From Figure 7, we can see that the Hamming loss of the BP-MLL neural network is the lowest; it therefore gives the most accurate predictions among the models.

Fig. 7: Hamming-loss Bar Chart of Classifiers

Again, from Figure 8, the log loss is lowest for the BP-MLL neural network, meaning its predictions are closest to the actual results; the divergence from the actual results is lowest here.

Fig. 8: Log-loss Bar Chart of Different Classifiers
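For reference, the two comparison metrics behind Figures 7 and 8 can be computed with scikit-learn. This is only an illustrative sketch: the label matrices below are invented stand-ins (4 comments × 6 labels), not the paper's actual predictions.

```python
# Toy illustration of Hamming loss and (per-label averaged) log loss.
import numpy as np
from sklearn.metrics import hamming_loss, log_loss

y_true = np.array([[0, 0, 0, 0, 0, 0],
                   [1, 0, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 0, 1, 0]])
y_pred = np.array([[0, 0, 0, 0, 0, 0],
                   [1, 0, 0, 0, 1, 0],
                   [1, 1, 0, 0, 0, 0],
                   [0, 0, 0, 0, 1, 0]])

# Hamming loss: fraction of individual label slots predicted wrongly
# (here 2 wrong slots out of 24).
print(hamming_loss(y_true, y_pred))

# Log loss needs probabilities; clip the hard 0/1 predictions away from
# 0 and 1, then average the per-label losses across the six columns.
y_prob = np.clip(y_pred.astype(float), 0.05, 0.95)
per_label = [log_loss(y_true[:, j], y_prob[:, j], labels=[0, 1])
             for j in range(6)]
print(np.mean(per_label))
```

A lower Hamming loss means fewer individual label mistakes, while a lower log loss means the predicted probabilities diverge less from the true labels, which is the sense in which the text compares the classifiers.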
So, based on the whole discussion, it is clear that the BP-MLL neural network works well on the Bangla toxic comment classification problem. Any system implemented with a BP-MLL neural network should give better results. Bangla toxic comments in various communities can thus be detected with a BP-MLL neural network, and a system can be developed that removes the detected toxic comments to improve relationships within the community.

VI. Future Scope
The accuracy can be improved in the future; stemming could make a dramatic difference here. Importantly, as this is the first attempt at such work in the Bangla language, we have already created a dataset for future research, so people interested in working on Bangla NLP can use it. This dataset can also become part of something bigger.

VII. Conclusion
In this work we showed different approaches to classifying Bangla toxic comments. Some of them worked well, and the neural network worked best of all the classifiers. In the Bangla language, NLP-related work is scarce. To prevent the spread of hate in the online Bangla community, this classifier can play a vital role. Any system implementing the BP-MLL neural network can perform well in detecting toxic comments in Bangla online conversation. People who are toxic in comments can be warned or, in the extreme, banned from the community. As a result, the community environment will be cleaner.

References
[1] B. van Aken, J. Risch, R. Krestel, and A. Löser, "Challenges for Toxic Comment Classification: An In-Depth Error Analysis," Beuth University of Applied Sciences, Berlin, Germany, 2018.
[2] J. Pavlopoulos, P. Malakasiotis, and I. Androutsopoulos, "Deep Learning for User Comment Moderation," 2017.
[3] D. Ghosh, A. R. Fabbri, and S. Muresan, "The Role of Conversation Context for Sarcasm Detection in Online Interactions," USA, 2017.
[4] S. V. Georgakopoulos, S. K. Tasoulis, A. G. Vrahatis, and V. P. Plagianakos, "Convolutional Neural Networks for Toxic Comment Classification," University of Thessaly, Lamia, Greece, 2018.
[5] W. Chen, X. Liu, D. Guo, and M. Lu, "Multi-label Text Classification Based on Sequence Model," Dalian Maritime University, Dalian, China, 2019.
[6] R. Alfaro and H. Allende, "Text Representation in Multi-label Classification: Two New Input Representations," Chile, 2011.
[7] Neethu M S and Rajasree R, "Sentiment Analysis in Twitter using Machine Learning Techniques," College of Engineering Trivandrum, India, 2013.
[8] "Toxic Comment Classification Challenge," Kaggle, Dec. 2017. https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge
[9] N. Baghel, "Toxic Comment Classification," Medium, Jun. 2018. https://medium.com/@nupurbaghel/toxic-comment-classification-f6e075c3487a
[10] stopwords-iso, "stopwords-iso/stopwords-bn," GitHub. https://github.com/stopwords-iso/stopwords-bn/blob/master/package.json
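The binary relevance pipeline from Section IV can be sketched in a few lines. This is a minimal toy example in the spirit of the paper, not its actual code: the comments and the two-label setup are invented, and a real run would use the Bangla dataset and all six labels.

```python
# Minimal binary-relevance sketch: one MultinomialNB per label over
# CountVectorizer token counts (toy English comments, two toy labels).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

comments = ["good game today", "you are an idiot",
            "great photo", "what an idiot comment"]
labels = np.array([[0, 0], [1, 1], [0, 0], [1, 1]])  # e.g. toxic, insult

X = CountVectorizer().fit_transform(comments)
# Binary relevance: train an independent binary classifier per label column.
models = [MultinomialNB().fit(X, labels[:, j]) for j in range(labels.shape[1])]
pred = np.column_stack([m.predict(X) for m in models])
print(pred)
```

The classifier chain and Label Powerset variants discussed above differ only in how the label columns are combined before the single-label classifiers are trained.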
Computer and Information Science; Vol. 11, No. 4; 2018
ISSN 1913-8989, E-ISSN 1913-8997
Published by Canadian Center of Science and Education

Topic Modelling in Bangla Language: An LDA Approach to Optimize Topics and News Classification

Mustakim Al Helal1 & Malek Mouhoub1
1 Department of Computer Science, University of Regina, Regina, Saskatchewan, Canada
Correspondence: Malek Mouhoub, Department of Computer Science, University of Regina, Regina, Saskatchewan, Canada S4S 0A2. E-mail: mouhoubm@uregina.ca

Received: September 20, 2018   Accepted: October 9, 2018   Online Published: October 31, 2018
doi:10.5539/cis.v11n4p77   URL: https://doi.org/10.5539/cis.v11n4p77

Abstract
Topic modeling is a powerful technique for unsupervised analysis of large document collections. Topic models have a wide range of applications, including tag recommendation, text categorization, keyword extraction, and similarity search, in text mining, information retrieval, and statistical language modeling. Research on topic modeling is gaining popularity day by day. Various efficient topic modeling techniques are available for English, as it is one of the most widely spoken languages in the world, but not for other languages. Bangla being the seventh most spoken native language in the world by population, it needs automation in many respects. This paper deals with finding the core topics of a Bangla news corpus and classifying news with similarity measures. The document models are built using LDA (Latent Dirichlet Allocation) with bigrams.
Keywords: topic modeling, classification, natural language processing

1. Introduction
During the last decade, the amount of data generated by people made history. Indeed, roughly 2.5 quintillion bytes of data are produced daily according to a study by DOMO, and ninety percent of the data in the world has been created in the last two years alone [1].
As the amount of available data increases tremendously, it becomes difficult to access and process the data we are looking for; retrieving the meaning underneath the data requires an efficient automated process. Topic modeling is a frequently used text-mining tool for discovering the hidden semantic structures in a text body. It provides a convenient way to analyze large amounts of unclassified text. A topic contains a cluster of words that frequently occur together. The topic modeling approach can connect words with similar meanings and distinguish between uses of words with multiple meanings [2]. Unlike for English, very few modeling tools have been developed for other languages. However, only 51.2% of internet content is in English [3], and this percentage might decrease over the coming years. Therefore, it is becoming essential to develop similar tools for other languages. Bangla has become one of the most well-known languages in the world since UNESCO announced, on November 17th, 1999, that February 21st would be observed annually as International Mother Language Day [4]. Over time, a good number of Bangla news portals, blogs, eBooks, web pages, search engines, etc. have appeared. Although the content is rich enough, research in Bangla is infrequent due to insufficient datasets and unorganized grammar rules, which are the core challenges of working with Bangla. Considering these challenges, we have created our own corpus and propose the first ever topic modeling tool for Bangla. We are confident that this tool will be very useful to many Bangla-speaking users.

Several research works have used LDA to categorize unclassified texts. The most recent of these is a generative LDA model designed to categorize text corpora in English [5]. Empirical results from applying the designed model were presented for text modelling, collaborative filtering, and text classification; in that work, each document consists of a mixture of topics. In this regard, we propose a model to extract topics from a news corpus in Bangla. In [5], Blei evaluated a topic model with perplexity. Traditionally, perplexity has often been used to evaluate extracted topics, but it has been found not to correlate with human annotations at times [6]. In [7], Blei worked with LDA to categorize research papers. That research considered several journals archived by JSTOR, an organization that indexes journals in different fields. The objective was to find similar articles for a scientist among millions of journal articles, conference papers, etc. This is also a kind of text categorization using LDA; finding the underlying meaning of the huge number of science journals was the main objective of that paper. In [7], the author also discussed an effective way to approximate the posterior with mean-field variational methods. A formal text mining approach was proposed in [8]. Here, the authors worked with Wikipedia and Twitter data, exploring a different perspective with each of the two datasets. From the Wikipedia data, a document topic model was obtained, aiming at topic-wise document search [8]. On the other hand, with the Twitter data a user topic model was explored to identify users' interests.
This idea can also be applied to newspaper data to explore news trends over a certain time. In that paper, similarity measures for the Wikipedia data were also calculated and demonstrated for different articles against a selected article. Another trend-finding work on topic modelling with LDA was done in [9], where the goal was to investigate research developments and current trends from a collection of scholarly articles. A whole picture of LDA over the past 20 years was therefore illustrated, and the paper is more of a survey on LDA applications. Despite these past efforts, little has been done with LDA for the Bangla language. Since Bangla has a completely different grammatical structure and stemming techniques, it is challenging to identify topics in this language. In [10], a text summarization technique was developed particularly for the Bangla language. However, it is a heuristic model, and LDA was not used for the summarization. Two different approaches, namely abstractive and extractive, were discussed in the paper; however, the reported research deals with the extractive method only. A set of Bangla text analysis rules were developed based on a heuristic approach [10]. It uses a sentence scoring method to achieve the summarization goal. Although this is one of the very few papers that discusses text summarization in the Bangla language, the power of topic modelling was not used in any way. Our goal is to apply LDA and see how it works for news categorization in Bangla.

2. Data Preprocessing
Collecting data for the Bengali language has always been a challenge due to the scarcity of resources and the unavailability of a public corpus. Although various research works have been going on in Bangla, none of their datasets were made available to the public. The dataset that we used is a news corpus. It was collected from one of the most widely read and popular Bangla newspapers, "The Daily Prothom Alo". We designed a crawler with the Python library Beautiful Soup. The data we collected comprise 7,134 news items in many different categories. The crawler scraped all the news from January 1, 2018 to March 31, 2018; The Daily Prothom Alo has an online archive that was crawled each day over the mentioned period. The news data are collected in a CSV file, which is illustrated in Figure 1.

Figure 1. The news corpus: CSV file

Once the dataset is ready, we start the preprocessing, which consists of tokenization, removal of stop words, and creation of bigrams. Each word in the news is tokenized and put into an array. Consider an example of Bangla word tokenization for a given sentence: each word in the sentence is tokenized, and from the implementation point of view these tokenized words are then appended to a Python list. Like English, the Bengali language has a lot of stop words. These are connecting words, such as prepositions and conjunctions. However, since Bangla was brought into the NLP world quite recently, there is still no fully established list of stop words. Consequently, we used the stop words list from [11] and enhanced it with our program; in this regard, a list of 438 stop words is used. For the task of topic modelling in Bangla, bigram creation is an important part. A bigram is a sequence of two adjacent tokens occurring frequently in the corpus [12]. A probability is calculated for these words occurring one after another; if it exceeds a threshold, the word pair is combined and put into the dictionary as a new token. Basically, bigrams are n-grams where n = 2. The conditional probability of Wn given Wn−1 is calculated for bigrams as follows:

P(Wn | Wn−1) = count(Wn−1 Wn) / count(Wn−1)  (1)

3. Proposed Model
With our news corpus ready for training, we formalize the proposed model for training the Bangla news corpus to get the best result in terms of topic modelling. In this section, we describe the proposed model step by step. Our main goal in this research is to find a way to extract the topics from our news corpus. A methodology is proposed to find the right topic a news item belongs to; this way, each news item can be classified into its right category. The proposed model is illustrated in Figure 2.

Figure 2. The proposed model

This is the basic structure of how the model works. Once we have the dictionary ready, with the preprocessing already performed on it, we apply the LDA algorithm. We have trained on 7,134 news items. The dictionary is already set up in the preprocessing phase, so this whole dictionary goes into the model and it extracts a number of topics. However, LDA does not know how many topics it has to extract. We propose a coherence-based [13] method to determine the optimal number of topics; from that experiment, we feed the right number of topics as a hyperparameter into our training model. One problem with LDA is that it can get overfitted if too many topics are extracted, so finding the right number of topics is very important. Before we train our model and run the LDA algorithm, we run a coherence model with roughly 200 topics just to explore the graph. Since it is not possible to have 200 topics with about 7,134 news items, we simply set that value to check the gradual movement of coherence across different topic counts, and found that it peaks at around 47 topics. So, we took that number and fed it into the algorithm. This way, the model neither underfits nor overfits. Once the model is trained on our corpus, we evaluate it through experiments. We have performed cosine similarity checks between different news items. Some news items were similar while others were different, so it was expected that similar news items would have a higher similarity score and news items about different agendas a lower one. We achieved those scores for the trained LDA model.
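The pairwise similarity check can be sketched with scikit-learn: fit LDA, take each document's topic distribution, and compare pairs with cosine similarity. This is an illustrative sketch on a tiny invented corpus, not the paper's news corpus or its gensim-based setup.

```python
# Toy LDA cosine-similarity check between document topic vectors.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["sports match win goal", "match goal team win",
        "phone software update", "software app phone release"]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)  # per-document topic distributions

# Compare a related pair (docs 0 and 1) and an unrelated pair (docs 0 and 2).
sim_related = float(cosine_similarity(theta[0:1], theta[1:2])[0, 0])
sim_unrelated = float(cosine_similarity(theta[0:1], theta[2:3])[0, 0])
print(sim_related, sim_unrelated)
```

On a real corpus, related document pairs should score noticeably higher than unrelated ones, which is exactly the behavior reported for the trained LDA model above.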
However, cosine similarity can also be achieved from Doc2Vec model. That is why we used the Doc2Vec model just to compare the cosine similarity score between LDA and Doc2Vec and gain an insight on how both models work in terms of cosine similarity. A comparative evaluation with other variations of LDA has also been performed and reported in the next Section. 4. Experimentation The goal of the first experiment that we perform is to understand the number of topics that we need to infer from the trained model. LDA itself cannot understand the optimal number of topics. We performed an experiment to understand the optimal number of topics. This number depends on the dataset and the main research goal. Our purpose is to infer topics from an online newspaper with about 7,134 news instances. When too many topics are inferred from a LDA model, it may get over fitted which is not expected</s>
at all. On the other hand, extracting too few topics does not make sense either. A coherence-based value is used to determine the right number of topics. We evaluated the model for up to 200 topics, with the aggregated coherence value for each topic count shown in Figure 3. As can be seen, the coherence reaches its peak at 47 topics, which we take as the optimal number.

cis.ccsenet.org Computer and Information Science Vol. 11, No. 4; 2018

4.1 Similarity Measure

Since a trained LDA model already groups topics in terms of their keywords, we conducted an experiment to explore cosine similarity with our trained LDA model. Each time, we feed in a pair of documents and inspect the cosine similarity value. However, similarity can also be measured with a Doc2Vec model, so we compare the similarity scores from both LDA and Doc2Vec. These scores are shown in Table 1. Each time, a pair of documents is fed into both models. For example, doc 1 and doc 2 are two highly related documents: both report on the Myanmar Rohingya issue in Bangladesh. A human interpreter would judge these two articles as a highly related pair. LDA gives this pair a 95.15% cosine similarity, close to what a human interpreter would assign. Doc2Vec, on the other hand, performed poorly, giving only a 68.54% similarity. Documents 1916 and 1060 are both about technology; here too LDA performs better than Doc2Vec. Now let us see how these models handle dissimilar news. Two dissimilar articles are expected to have a low cosine similarity score. We therefore performed the similarity check for documents 5 and 9, where one is about sports and the other about foreign affairs. LDA gave only a 19.07% match, whereas Doc2Vec reported a score of 50.01%.
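The similarity check described above can be sketched as follows. This is a minimal stdlib-only sketch, assuming each document is represented by its topic-probability vector from the trained LDA model (Doc2Vec would supply a dense document vector instead); the vectors below are illustrative, not taken from the corpus.

```python
import math


def cosine_similarity(u, v):
    """Cosine of the angle between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


# Two documents concentrated on the same topic score near 1 ...
similar = cosine_similarity([0.8, 0.1, 0.1], [0.7, 0.2, 0.1])
# ... while documents dominated by different topics score much lower.
different = cosine_similarity([0.8, 0.1, 0.1], [0.1, 0.1, 0.8])
assert similar > different
```

The same function is applied to both representations, which is what makes the LDA vs. Doc2Vec scores in Table 1 directly comparable.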
This is a winning situation for LDA in all respects.

Figure 3. Coherence based on number of topics

Table 1. Cosine similarity scores between different models

Document Pairs        LDA       Doc2Vec
(doc5, doc9)          19.07%    50.91%
(doc5, doc6)          71.63%    72.55%
(doc271, doc272)      68.68%    60.61%
(doc1, doc2)          97.15%    68.54%
(doc1, doc513)        72.45%    30.31%
(doc1916, doc1060)    80.99%    37.91%

4.2 Classifying News

We conducted a document classification experiment based on topics. Having a trained LDA model and the extracted topics, we wanted to go further with the first news classification for the Bangla language using LDA. We therefore propose a method for classifying news with our LDA model. First, we extract a document vs. topic matrix in which each term is tagged with the probability of belonging to a given topic. Let us illustrate this idea with a simple example. Assume we have a document D = “Dogs and cats are cute”, which after preprocessing becomes Dpreprocessed = [“dogs”, “cats”, “cute”].
As human interpreters, we can easily understand that this is a document about the topic Animal. Suppose we have two topics, k1 and k2. The word vs. topic probability matrix for this document then looks like the following:

        k1   k2
dogs    p1   p2
cats    p3   p4
cute    p5   p6

where p1, p2, ..., p6 are the probabilities and the rows correspond to word indexes 0, 1 and 2 from our example sentence. For each word w_i, we thus have a probability vector (p_{i1}, ..., p_{im}) over the topics. (2) In our proposed method, we take the mean over the words for each topic. To make the example more generic, assume we have n terms and m topics, giving an n x m matrix of probabilities p_{ij}. The mean for each topic j is then calculated as

    mean_j = (1/n) * sum_{i=1}^{n} p_{ij},   j = 1, ..., m

Finally, the document is assigned to the topic with the largest mean probability value.

4.3 Topic Extraction

Figures 4 and 6 report some topics extracted from the newspaper, in English translation. Each topic consists of its 10 most probable related words. Each word has a corresponding probability, and for each topic the probabilities of its words sum to one. Some words may not fit their topic, but most are relevant. A few topics are somewhat mixed and their meaning can be ambiguous, but most make sense and can be matched to a category of the newspaper. However, the topics are not tagged automatically: since LDA only provides groups of words/terms (the topics), the tags are assigned manually by inspecting the word grouping of each topic. Each word comes with a probability, listed in descending order.

4.4 Document Classification

In this section, we visualize how the proposed model works for document classification tasks. We feed the model some random news articles and graphically explore how well it predicts their topics. This is basically a topic vs. document distribution.
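The classification step of Section 4.2 can be sketched directly: average the per-word topic probabilities and take the argmax. This is a minimal sketch; the matrix values below are invented for illustration, not taken from the trained model.

```python
def classify_document(word_topic_matrix):
    """Return the index of the topic with the largest mean probability
    over all words of the document."""
    n_words = len(word_topic_matrix)
    n_topics = len(word_topic_matrix[0])
    means = [sum(row[j] for row in word_topic_matrix) / n_words
             for j in range(n_topics)]
    return max(range(n_topics), key=lambda j: means[j])


# Toy example: 3 words ("dogs", "cats", "cute") over 2 topics (k1, k2).
matrix = [
    [0.9, 0.1],  # dogs
    [0.8, 0.2],  # cats
    [0.6, 0.4],  # cute
]
print(classify_document(matrix))  # 0, i.e. topic k1 ("Animal")
```

With these toy probabilities the mean for k1 (0.77) dominates the mean for k2 (0.23), so the document is labeled with the first topic, matching the human judgment of "Animal".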
In Figure 5, we feed in a news article about a movie. Topic 36 is the most relevant topic for this article, outperforming all other topics in probability score. Since the article is about a movie, the model gives a relevant answer: a topic consisting of movie-related words, with Cinema as the most probable word. This topic is illustrated in Figure 4.

Figure 4. Word clusters for topic: Movie

In the second experiment, we feed in a news article about Donald Trump discussing the USA and immigration. As reported in Figure 7, it is most likely to belong to the topic in which the word “Trump” itself has the highest probability, followed by other immigration- and USA-related terms. This experiment shows that we can successfully classify Bangla news with this
model. The topic for the Trump news is illustrated in Figure 6.

5. Conclusion and Future Work

We have demonstrated how topic modelling can be extended to the Bangla language on a large scale. With Bangla having gained a strong online presence over the past few years, there is much more to do with it in topic modelling and other NLP tasks. Online Bengali libraries can use this tool as a recommender system. This work can also be extended to trending-topic detection, which is yet to be explored for Bangla news and media text data. Trending topics could play a vital role in predicting corruption rates across different districts in Bangladesh. Moreover, the approach can be stretched to public sentiment analysis for prediction over diverse aspects of print and news media. In this research we did not use LSI (a related topic modelling technique), which can be considered in future work. Different similarity measures can also be explored for document classification.

Figure 5. Document topic distribution for movie news
Figure 6. Word clusters for topic: Trump
Figure 7. Document topic distribution for Trump news

References

Abujar, S., et al. (2017). A heuristic approach of text summarization for Bengali documentation. In 8th International Conference on Computing, Communication and Networking Technologies (8th ICCCNT). IEEE.
Alghamdi, R., & Khalid, A. (2015). A survey of topic modeling in text mining. International Journal of Advanced Computer Science and Applications (IJACSA), 6(1).
Blei, D. M. (2012). Probabilistic topic models. Communications of the ACM, 55(4), 77-84.
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.
Domo.com. (2018). Press release - How much data does the world generate every minute? Retrieved June 23, 2018, from https://www.domo.com/news/press/how-much-data-does-the-world-generate-every-minute
GitHub. (2018).
stopwords-iso/stopwords-bn. Retrieved June 23, 2018, from https://github.com/stopwords-iso/stopwords-bn
Mahmood, A. (2018). Literature survey on topic modeling. Retrieved June 23, 2018, from https://www.eecis.udel.edu/ vijay/fall13/snlp/lit-survey/TopicModeling-ASM.pdf
Markroxor.github.io. (2018). Gensim news classification. Retrieved June 23, 2018, from https://markroxor.github.io/gensim/static/notebooks/gensim news classification.html
Newman, D., et al. (2010). Automatic evaluation of topic coherence. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.
Tong, Z., & Haiyi, Z. (2016). A text mining research based on LDA topic modelling. Jodrey School of Computer Science, Acadia University, Wolfville, NS, Canada, 10, 201-210.
Wallach, H. M. (2006). Topic modeling: beyond bag-of-words. In Proceedings of the 23rd International Conference on Machine Learning. ACM.
Wikipedia contributors. (2018). International Mother Language Day. Wikipedia, The Free Encyclopedia, 1 Apr. 2018. Web. 13 Apr. 2018.
Wikipedia contributors. (2018). Languages used on the Internet. Wikipedia, The Free Encyclopedia, 13 Apr. 2018. Web. 13 Apr. 2018.

Copyrights

Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/)
<s>false /AllowTransparency false /AutoPositionEPSFiles true /AutoRotatePages /None /Binding /Left /CalGrayProfile (Dot Gain 20%) /CalRGBProfile (sRGB IEC61966-2.1) /CalCMYKProfile (U.S. Web Coated \050SWOP\051 v2) /sRGBProfile (sRGB IEC61966-2.1) /CannotEmbedFontPolicy /Error /CompatibilityLevel 1.4 /CompressObjects /Tags /CompressPages true /ConvertImagesToIndexed true /PassThroughJPEGImages true /CreateJobTicket false /DefaultRenderingIntent /Default /DetectBlends true /DetectCurves 0.0000 /ColorConversionStrategy /CMYK /DoThumbnails false /EmbedAllFonts true /EmbedOpenType false /ParseICCProfilesInComments true /EmbedJobOptions true /DSCReportingLevel 0 /EmitDSCWarnings false /EndPage -1 /ImageMemory 1048576 /LockDistillerParams false /MaxSubsetPct 100 /Optimize true /OPM 1 /ParseDSCComments true /ParseDSCCommentsForDocInfo true /PreserveCopyPage true /PreserveDICMYKValues true /PreserveEPSInfo true /PreserveFlatness true /PreserveHalftoneInfo false /PreserveOPIComments true /PreserveOverprintSettings true /StartPage 1 /SubsetFonts true /TransferFunctionInfo /Apply /UCRandBGInfo /Preserve /UsePrologue false /ColorSettingsFile () /AlwaysEmbed [ true /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true /ColorImageMinResolution 300 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 300 /ColorImageDepth -1 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 1.50000 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] /ColorImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 /AntiAliasGrayImages false /CropGrayImages true /GrayImageMinResolution 300 /GrayImageMinResolutionPolicy 
/OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 300 /GrayImageDepth -1 /GrayImageMinDownsampleDepth 2 /GrayImageDownsampleThreshold 1.50000 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] /GrayImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 1200 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 1200 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.50000 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile () /PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA 
<FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064406440637062806270639062900200641064A00200627064406450637062706280639002006300627062A0020062F0631062C0627062A002006270644062C0648062F0629002006270644063906270644064A0629061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E0635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /BGR <FEFF04180437043f043e043b043704320430043904420435002004420435043704380020043d0430044104420440043e0439043a0438002c00200437043000200434043000200441044a0437043404300432043004420435002000410064006f00620065002000500044004600200434043e043a0443043c0435043d04420438002c0020043c0430043a04410438043c0430043b043d043e0020043f044004380433043e04340435043d04380020043704300020043204380441043e043a043e043a0430044704350441044204320435043d0020043f04350447043004420020043704300020043f044004350434043f0435044704300442043d04300020043f043e04340433043e0442043e0432043a0430002e002000200421044a04370434043004340435043d043804420435002000500044004600200434043e043a0443043c0435043d044204380020043c043e0433043004420020043404300020044104350020043e0442043204300440044f0442002004410020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200441043b0435043404320430044904380020043204350440044104380438002e> /CHS 
<FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e9ad88d2891cf76845370524d53705237300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc9ad854c18cea76845370524d5370523786557406300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002c0020006b00740065007200e90020007300650020006e0065006a006c00e90070006500200068006f006400ed002000700072006f0020006b00760061006c00690074006e00ed0020007400690073006b00200061002000700072006500700072006500730073002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN 
<FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000620065006400730074002000650067006e006500720020007300690067002000740069006c002000700072006500700072006500730073002d007500640073006b007200690076006e0069006e00670020006100660020006800f8006a0020006b00760061006c0069007400650074002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200076006f006e002000640065006e0065006e002000530069006500200068006f006300680077006500720074006900670065002000500072006500700072006500730073002d0044007200750063006b0065002000650072007a0065007500670065006e0020006d00f60063006800740065006e002e002000450072007300740065006c006c007400650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000410064006f00620065002000520065006100640065007200200035002e00300020006f0064006500720020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP 
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f00730020005000440046002000640065002000410064006f0062006500200061006400650063007500610064006f00730020007000610072006100200069006d0070007200650073006900f3006e0020007000720065002d0065006400690074006f007200690061006c00200064006500200061006c00740061002000630061006c0069006400610064002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /ETI <FEFF004b00610073007500740061006700650020006e0065006900640020007300e4007400740065006900640020006b00760061006c006900740065006500740073006500200074007200fc006b006900650065006c007300650020007000720069006e00740069006d0069007300650020006a0061006f006b007300200073006f00620069006c0069006b0065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740069006400650020006c006f006f006d006900730065006b0073002e00200020004c006f006f0064007500640020005000440046002d0064006f006b0075006d0065006e00740065002000730061006100740065002000610076006100640061002000700072006f006700720061006d006d006900640065006700610020004100630072006f0062006100740020006e0069006e0067002000410064006f00620065002000520065006100640065007200200035002e00300020006a00610020007500750065006d006100740065002000760065007200730069006f006f006e00690064006500670061002e000d000a> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f00620065002000500044004600200070006f0075007200200075006e00650020007100750061006c0069007400e90020006400270069006d007000720065007300730069006f006e00200070007200e9007000720065007300730065002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003c003bf03c5002003b503af03bd03b103b9002003ba03b103c42019002003b503be03bf03c703ae03bd002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003c003c103bf002d03b503ba03c403c503c003c903c403b903ba03ad03c2002003b503c103b303b103c303af03b503c2002003c503c803b703bb03ae03c2002003c003bf03b903cc03c403b703c403b103c2002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005D405DE05D505EA05D005DE05D905DD002005DC05D405D305E405E105EA002005E705D305DD002D05D305E405D505E1002005D005D905DB05D505EA05D905EA002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D005DE05D905DD002005DC002D005000440046002F0058002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata najpogodnijih za visokokvalitetni ispis prije tiskanja koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF004b0069007600e1006c00f30020006d0069006e0151007300e9006701710020006e0079006f006d00640061006900200065006c0151006b00e90073007a00ed007401510020006e0079006f006d00740061007400e100730068006f007a0020006c006500670069006e006b00e1006200620020006d0065006700660065006c0065006c0151002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c0020006b00e90073007a00ed0074006800650074002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA <FEFF005500740069006c0069007a007a006100720065002000710075006500730074006500200069006d0070006f007300740061007a0069006f006e00690020007000650072002000630072006500610072006500200064006f00630075006d0065006e00740069002000410064006f00620065002000500044004600200070006900f900200061006400610074007400690020006100200075006e00610020007000720065007300740061006d0070006100200064006900200061006c007400610020007100750061006c0069007400e0002e0020004900200064006f00630075006d0065006e007400690020005000440046002000630072006500610074006900200070006f00730073006f006e006f0020006500730073006500720065002000610070006500720074006900200063006f006e0020004100630072006f00620061007400200065002000410064006f00620065002000520065006100640065007200200035002e003000200065002000760065007200730069006f006e006900200073007500630063006500730073006900760065002e> /JPN 
<FEFF9ad854c18cea306a30d730ea30d730ec30b951fa529b7528002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a306b306f30d530a930f330c8306e57cb30818fbc307f304c5fc59808306730593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020ace0d488c9c80020c2dcd5d80020c778c1c4c5d00020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /LTH <FEFF004e006100750064006f006b0069007400650020016100690075006f007300200070006100720061006d006500740072007500730020006e006f0072011700640061006d00690020006b0075007200740069002000410064006f00620065002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b00750072006900650020006c0061006200690061007500730069006100690020007000720069007400610069006b007900740069002000610075006b01610074006f00730020006b006f006b007900620117007300200070006100720065006e006700740069006e00690061006d00200073007000610075007300640069006e0069006d00750069002e0020002000530075006b0075007200740069002000500044004600200064006f006b0075006d0065006e007400610069002000670061006c006900200062016b007400690020006100740069006400610072006f006d00690020004100630072006f006200610074002000690072002000410064006f00620065002000520065006100640065007200200035002e0030002000610072002000760117006c00650073006e0117006d00690073002000760065007200730069006a006f006d00690073002e> /LVI 
<FEFF0049007a006d0061006e0074006f006a00690065007400200161006f00730020006900650073007400610074012b006a0075006d00750073002c0020006c0061006900200076006500690064006f00740075002000410064006f00620065002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b006100730020006900720020012b00700061016100690020007000690065006d01130072006f00740069002000610075006700730074006100730020006b00760061006c0069007401010074006500730020007000690072006d007300690065007300700069006501610061006e006100730020006400720075006b00610069002e00200049007a0076006500690064006f006a006900650074002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b006f002000760061007200200061007400760113007200740020006100720020004100630072006f00620061007400200075006e002000410064006f00620065002000520065006100640065007200200035002e0030002c0020006b0101002000610072012b00200074006f0020006a00610075006e0101006b0101006d002000760065007200730069006a0101006d002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken die zijn geoptimaliseerd voor prepress-afdrukken van hoge kwaliteit. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d00200065007200200062006500730074002000650067006e0065007400200066006f00720020006600f80072007400720079006b006b0073007500740073006b00720069006600740020006100760020006800f800790020006b00760061006c0069007400650074002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002000730065006e006500720065002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f002000770079006400720075006b00f30077002000770020007700790073006f006b00690065006a0020006a0061006b006f015b00630069002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f0062006500200050004400460020006d00610069007300200061006400650071007500610064006f00730020007000610072006100200070007200e9002d0069006d0070007200650073007300f50065007300200064006500200061006c007400610020007100750061006c00690064006100640065002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e0074007200750020007400690070010300720069007200650061002000700072006500700072006500730073002000640065002000630061006c006900740061007400650020007300750070006500720069006f006100720103002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043c0430043a04410438043c0430043b044c043d043e0020043f043e04340445043e0434044f04490438044500200434043b044f00200432044b0441043e043a043e043a0430044704350441044204320435043d043d043e0433043e00200434043e043f0435044704300442043d043e0433043e00200432044b0432043e04340430002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SKY <FEFF0054006900650074006f0020006e006100730074006100760065006e0069006100200070006f0075017e0069007400650020006e00610020007600790074007600e100720061006e0069006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b0074006f007200e90020007300610020006e0061006a006c0065007001610069006500200068006f0064006900610020006e00610020006b00760061006c00690074006e00fa00200074006c0061010d00200061002000700072006500700072006500730073002e00200056007900740076006f00720065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f00740076006f00720069016500200076002000700072006f006700720061006d006f006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076016100ed00630068002e> /SLV 
<FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b006900200073006f0020006e0061006a007000720069006d00650072006e0065006a016100690020007a00610020006b0061006b006f0076006f00730074006e006f0020007400690073006b0061006e006a00650020007300200070007200690070007200610076006f0020006e00610020007400690073006b002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO <FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f00740020006c00e400680069006e006e00e4002000760061006100740069007600610061006e0020007000610069006e006100740075006b00730065006e002000760061006c006d0069007300740065006c00750074007900f6006800f6006e00200073006f00700069007600690061002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE 
<FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d002000e400720020006c00e4006d0070006c0069006700610020006600f60072002000700072006500700072006500730073002d007500740073006b00720069006600740020006d006500640020006800f600670020006b00760061006c0069007400650074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF005900fc006b00730065006b0020006b0061006c006900740065006c0069002000f6006e002000790061007a006401310072006d00610020006200610073006b013100730131006e006100200065006e0020006900790069002000750079006100620069006c006500630065006b002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /UKR 
<FEFF04120438043a043e0440043804410442043e043204430439044204350020044604560020043f043004400430043c043504420440043800200434043b044f0020044104420432043e04400435043d043d044f00200434043e043a0443043c0435043d044204560432002000410064006f006200650020005000440046002c0020044f043a04560020043d04300439043a04400430044904350020043f045604340445043e0434044f0442044c00200434043b044f0020043204380441043e043a043e044f043a04560441043d043e0433043e0020043f0435044004350434043404400443043a043e0432043e0433043e0020043404400443043a0443002e00200020042104420432043e04400435043d045600200434043e043a0443043c0435043d0442043800200050004400460020043c043e0436043d04300020043204560434043a0440043804420438002004430020004100630072006f006200610074002004420430002000410064006f00620065002000520065006100640065007200200035002e0030002004300431043e0020043f04560437043d04560448043e04570020043204350440044104560457002e> /ENU (Use these settings to create Adobe PDF documents best suited for high-quality prepress printing. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.) /Namespace [ (Adobe) (Common) (1.0) /OtherNamespaces [ /AsReaderSpreads false /CropImagesToFrames true /ErrorControl /WarnAndContinue /FlattenerIgnoreSpreadOverrides false /IncludeGuidesGrids false /IncludeNonPrinting false /IncludeSlug false /Namespace [ (Adobe) (InDesign) (4.0) /OmitPlacedBitmaps false /OmitPlacedEPS false /OmitPlacedPDF false /SimulateOverprint</s>
<s>/Legacy /AddBleedMarks false /AddColorBars false /AddCropMarks false /AddPageInfo false /AddRegMarks false /ConvertColors /ConvertToCMYK /DestinationProfileName () /DestinationProfileSelector /DocumentCMYK /Downsample16BitImages true /FlattenerPreset << /PresetSelector /MediumResolution /FormElements false /GenerateStructure false /IncludeBookmarks false /IncludeHyperlinks false /IncludeInteractive false /IncludeLayers false /IncludeProfiles false /MultimediaHandling /UseObjectSettings /Namespace [ (Adobe) (CreativeSuite) (2.0) /PDFXOutputIntentProfileSelector /DocumentCMYK /PreserveEditing true /UntaggedCMYKHandling /LeaveUntagged /UntaggedRGBHandling /UseDocumentProfile /UseDocumentBleed false>> setdistillerparams /HWResolution [2400 2400] /PageSize [612.000 792.000]>> setpagedevice</s>
Bangla Interrogative Sentence Identification from Transliterated Bangla Sentences

International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018

Md Montaser Hamid, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (montaserhamid13@gmail.com)
Tanvir Alam, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (tanviralam997@gmail.com)
Sabir Ismail, Computer Science and Engineering, Stony Brook University, New York, United States (sabir.ismail@stonybrook.edu)
Md Forhad Rabbi, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (frabbi-cse@sust.edu)

Abstract—In this paper, we propose a method to identify Bangla interrogative sentences from transliterated Bangla sentences. A huge number of Bangla interrogative sentences are generated across the internet, and they are mostly written in transliterated form. Identifying interrogative sentences in transliterated Bangla poses great challenges: question marks at the end of interrogative sentences are frequently omitted, especially on social media, and people often form interrogative sentences without using Bangla question words when writing in transliterated form. To address the problem, we investigate a rule-based approach, a supervised learning approach, and a deep learning approach. In the rule-based approach, we design a set of rules based on grammar and data analysis. For the supervised learning approach, we use machine learning techniques such as Support Vector Machine, k-Nearest Neighbors, Multilayer Perceptron, and Logistic Regression, achieving accuracies of 91.43%, 75.98%, 92.11%, and 91.68%, respectively. For the deep learning approach, we implement a Convolutional Neural Network.
This approach provides a decent result, with an accuracy of 84.64%, and demonstrates the potential of the Convolutional Neural Network as a model for Bangla natural language processing.

Keywords—Interrogative sentence identification, transliterated Bangla, Convolutional Neural Network

I. INTRODUCTION

A transliterated form of a language is one in which the alphabet of one language is used to represent another language [1]. For example, "Ami bari jabo" is a transliterated Bangla sentence written in the Latin alphabet. Transliteration is widely used across many languages: the Latin alphabet is used extensively in western and eastern European languages, and also in Turkey, Vietnam, Somalia, and for the east African Swahili language [2].

We write Bangla in transliterated form in various settings. Transliterated Bangla sentences are used most heavily on social media such as Facebook, Instagram, and Twitter. Another setting where this form is used extensively is chat applications such as Messenger, WhatsApp, and Viber. The form is also used in online blogs and web portals.

In all of these settings, a considerable number of interrogative sentences is generated every day, and identifying them is important for several reasons. In data analytics, this identification can play a significant role: service providers can study client behavior, expectations, demands, and queries from the interrogative sentences clients post as Facebook statuses, comments, tweets, online blog posts, live chats, and so on. The identification is also important for developing smart assistant applications, medical applications, question-answering applications, user-interactive applications, chatbot programs, 24/7 question-answering services, and more.

In transliterated Bangla, interrogative sentences come in various forms.
In many cases these sentences differ to a great extent from the traditional grammatical format of interrogative sentences. Many people do not use a question mark at the end of interrogative sentences, and this common practice makes identification very hard: almost 30% of online questions contain no question mark at the end [3]. Moreover, a question mark at the end of a sentence does not necessarily indicate a question. For instance, "Tomar eto boro spordha?" expresses exclamation rather than questioning. People also often omit Bangla question words (ke, kokhon, kar, etc.) when writing interrogative sentences, as in "Apni jacchen tahole?" or "Apni shotti chole jaben?". Finally, there are sentences where the presence of a question word does not indicate a question; the question word may instead act as a linker. Examples of such sentences taken from our dataset are given below:

• Ami kokhon bari jabo ta ekhono janina
• ki kori tate karo kichu ashe jay na
• Tini janena tara ki karone asheni
• khela koto tarikhe hobe ta ekhono jana jay ni

In this paper, for the identification of interrogative sentences, we apply a rule-based approach, a supervised learning approach, and a deep learning approach. In the rule-based approach, we follow grammatical rules together with rules derived from observing and analyzing the dataset. The drawback of this approach is that, due to the variation in sentences, no set of rules can identify interrogative sentences accurately and efficiently. For better accuracy and efficiency, we use supervised learning, employing the Support Vector Machine (SVM), Logistic Regression, Multilayer Perceptron (MLP), and k-Nearest Neighbors (k-NN) algorithms for classification and identification. Lastly, we use a Convolutional Neural Network (CNN) for the deep learning approach.

978-1-5386-8207-4/18/$31.00 ©2018 IEEE

II. RELATED WORKS

Though text classification and identification are common topics for the English language, related work on this topic is sparse for the Bangla language.
In English, work on detecting and retrieving questions has been based on content generated from blogs, web portals, Twitter, and emails. A good number of studies address question detection in Community Question Answering (CQA), and deep learning for sentence classification is also highly practiced. Question and query detection are mostly used and analyzed for building online question answering services; the most famous are Apple's Siri, Microsoft's Cortana, IBM Watson, Wolfram Alpha, and the Google search engine. These services detect and extract answers following IR (Information Retrieval)-based, knowledge-based, and hybrid approaches [4].

The rule-based approach is the most time-worn way of detecting questions. Efron et al. [5] implemented a rule-based approach that detects and analyzes questions asked in a microblogging environment such as Twitter, building a taxonomy of questions from large collections of questions taken from Twitter.

Wang et al. [6] applied a learning-based approach using lexical and syntactic features to detect questions retrieved from Community-Based Question Answering services (CQA). Sequential patterns of the sentences are mined to detect questions: each sentence is decomposed into a stream of tokens, and the Part-of-Speech (POS) tags of all tokens of a sentence are used in sequential pattern extraction. A Support Vector Machine (SVM) is used to classify questions and non-questions. Li et al. [7] used another learning-based approach to identify questions on Twitter, mining frequent question patterns with the PrefixSpan algorithm and again employing an SVM to distinguish the questions.

For designing question answering systems, deep learning is a very promising and efficient direction. With Recurrent Neural Networks (RNN), it is possible to analyze longer text [8]; RNN models and end-to-end memory networks have been used to design question answering systems. A Convolutional Neural Network with a single convolution layer built on word2vec static vectors can serve as a state-of-the-art, language-independent sentence and question classifier [9], outperforming most classifiers with the help of hyperparameter tuning.

Razzaghi et al. [10] employed the machine learning techniques SVM and Naive Bayes to detect Frequently Asked Questions (FAQ), using Information Gain (IG), Chi-Squared Attribute Evaluation (Chi), and CfsSubset (Cfs) for feature selection; syntactic features, question words, semantic features, and bag-of-words form the feature set. Banerjee et al. [11] used multiple models, including Naive Bayes, Kernel Naive Bayes, rule induction, and decision tree classifiers, to classify Bangla questions, achieving an accuracy of 91.65%. Wang et al. [12] used Naive Bayes and Support Vector Machines for sentiment classification, employing different variants, methods, and features beyond the traditional approaches to improve sentiment and topic classification. Yin et al. [13] compared CNN and RNN on different natural language processing tasks and described the basis for selecting between these two neural networks. Kalchbrenner et al. [14] built a dynamic CNN for language-independent sentence modelling without any dependence on a parse tree, testing the model on tasks such as sentiment prediction and question classification. Liu et al. [15] designed multi-task systems that combine multi-task learning concepts with RNNs, using RNNs in multi-task frameworks to classify texts.

III. METHODOLOGY

In the transliterated form of Bangla sentences, many things must be taken into consideration. Concrete grammatical rules or hand-picked rules will not be good enough to obtain high accuracy and efficiency for identification, because the variation and informality in the sentences are extremely high; a learning-based approach is therefore necessary to address the problem to a greater extent. For this purpose, we use supervised learning and deep learning.

A. Rule Based Approach

The rule-based approach is the conventional way to address this kind of problem. Combining grammar with an in-depth analysis of the dataset, we designed a set of rules for identification, performing some feature extraction to design the rules; the analysis behind their design is discussed in section IV. The position of Bangla question words is of great importance in our rule-based approach, so we compiled a Bangla question word list containing 20 words, listed in Figure 1.

Ki, Keno, Kivabe, Kothay, Koto, Kar, Kon, Kobe, Kisher, Kokhon, Ke, Kemon, Koy, Ke Ke, Kake, Kara, Kader, Koi, Koyta, Kotha.

Fig. 1. Bangla Question Words

As the question mark is not very significant and often gets omitted, we exclude the presence of a question mark from the rules. The rules are as follows:

• Rule 1: A Bangla question word is present as the first or last word of a sentence.
• Rule 2: A Bangla question word is present as the second word of the sentence, and the first word is the subject or object of the sentence.
• Rule 3: A Bangla question word is present just before the last word of the sentence.
• Rule 4: The word "Naki" is considered a question word, and a sentence containing it follows the previous rules.

The findings and results of this approach gave us the insight that designing and implementing a rule-based approach is not pragmatic. We therefore moved away from this approach and focused on the learning-based approaches. The findings are discussed in section V.

B. Supervised Learning Approach

As mentioned earlier, the identification cannot be done accurately and efficiently by following only a set of hand-picked rules; for better performance a learning-based approach is mandatory. We employed the following supervised machine learning techniques to identify interrogative and non-interrogative sentences:

• Support Vector Machine (SVM)
• Logistic Regression (LR)
• Multilayer Perceptron (MLP)
• k-Nearest Neighbors (k-NN)

These machine learning algorithms are very common for text classification and identification problems in Natural Language Processing (NLP). Our main challenges were to employ them on a dataset of transliterated Bangla sentences and to extract the ideal features; we used lexical features from the datasets for all of the machine learning techniques. The results of this approach are described in section V.

C. Deep Learning Approach

Using deep learning for classification and identification is a modern and dynamic approach. We used a Convolutional Neural Network (CNN), attempting to replicate the CNN model described in [9] and following the tutorial in [16] for the implementation.

The words must be embedded for the CNN. The first layer of the network embeds the words into low-dimensional vectors; this layer is learned, and the embedded words act as a lookup table. The second layer performs convolutions over the embedded words.
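To make the data flow concrete, the core computation of such a network — embedding lookup, convolution over word windows, and max-over-time pooling into a fixed-length feature vector — can be sketched in plain NumPy. This is a toy forward pass with random weights, not the trained model; the vocabulary size and input sentence are invented, while the hyperparameters follow those reported in this section (128-dimensional embeddings, filter widths 3, 4, and 5 with 128 filters each, sentences padded to 59 words, two output classes).

```python
import numpy as np

# Toy forward pass with RANDOM weights -- illustrative only.
rng = np.random.default_rng(0)

vocab_size, embed_dim, seq_len = 50, 128, 59
num_filters, filter_sizes = 128, (3, 4, 5)

# Embedding lookup table: token ids -> (seq_len, embed_dim) matrix
embedding = rng.normal(size=(vocab_size, embed_dim))
tokens = rng.integers(0, vocab_size, size=seq_len)   # one padded sentence
x = embedding[tokens]

# One convolution + max-over-time pooling per filter width
pooled = []
for h in filter_sizes:
    W = rng.normal(size=(num_filters, h * embed_dim))
    # slide a window of h consecutive words over the sentence
    windows = np.stack([x[i:i + h].ravel() for i in range(seq_len - h + 1)])
    feature_maps = np.maximum(windows @ W.T, 0.0)    # ReLU, shape (positions, num_filters)
    pooled.append(feature_maps.max(axis=0))          # max over all window positions
features = np.concatenate(pooled)                    # 3 * 128 = 384-dim sentence vector

# Softmax output over the two classes (interrogative / non-interrogative);
# dropout is omitted since this is a single inference pass, not training.
W_out = 0.01 * rng.normal(size=(2, features.size))
logits = W_out @ features
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(features.shape)   # (384,)
```

In the actual system the embedding table, convolution filters, and output weights are learned jointly, with dropout applied to the pooled feature vector during training.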
The result of this convolution layer is max-pooled to obtain the feature vector, with dropout regularization applied.

In the implemented network, the embedding dimension is 128. The filter sizes are 3, 4, and 5, meaning that a convolutional filter covers 3, 4, or 5 words respectively, and there are 128 filters per filter size. All sentences are padded to the same length of 59 words. The batch size is 64 and the dropout rate is 0.5. The output layer of the network has two classes (interrogative and non-interrogative), and the hyperparameters of the CNN are not tuned on the test set.

IV. DATA ANALYSIS

A. Making Corpora

The dataset we worked on was formed by extracting 44,538 comments from cricket-based Facebook public groups using a web application [17].

1) Making of Primary Corpus for Rule-Based Approach: The comments extracted from Facebook groups contain various types of sentences. We excluded the comments written in the English language and in the Bangla alphabet to form a dataset named the Primary Corpus. From this corpus, we separated out the sentences with a question mark to form the Interrogative Mega Corpus; the rest of the primary corpus is named the Other Mega Corpus. From the Interrogative Mega Corpus, we took the unique, distinct sentences, with no similarity to one another, to form the Interrogative Corpus. We used these corpora for the rule-based approach. Information about the corpora is reported in TABLE I.

TABLE I
MAKING CORPORA

Corpus                      Number of Sentences    Total Number of Words
Primary Corpus              145,009                429,883
Interrogative Mega Corpus   4,624                  23,785
Other Mega Corpus           25,259                 121,224
Interrogative Corpus        700                    3,073

2) Making Corpus for Learning Based Approaches: For supervised learning, we scrutinized the primary corpus and omitted a large number of sentences that were very raw in nature and had anomalous content. We manually picked out the interrogative and non-interrogative transliterated Bangla sentences from the primary corpus to form our Cricket Domain Corpus. Implementing the SVM, k-NN, MLP, and logistic regression classifiers on this corpus, we obtained good results. In this experiment, the test dataset and the training dataset have the same type of content, mainly cricket-related comments, and from the results we realized that they were influenced by the common domain of the training and test datasets. We therefore tried to introduce data from another domain. For another project, we had collected data for designing a chatbot for university admission tests; this dataset contains the queries, questions, and corresponding answers regarding university admission tests that are frequently asked by applicants, and it follows a proper, standard form of transliteration. Taking the queries and questions as interrogative sentences and the answers as non-interrogative sentences, we formed a new dataset named the University Admission Corpus. Adding this corpus to the Cricket Domain Corpus gave us our Mixed Domain Corpus. Information about these three corpora is reported in TABLE II.

TABLE II
ALL CORPORA

Corpus                        Total Sentences    Interrogative Sentences    Non-Interrogative Sentences
Cricket Domain Corpus         8797               1704                       7093
University Admission Corpus   2993               2434                       559
Mixed Domain Corpus           11790              4138                       7652

B. Analyzing Corpora

1) Interrogative Corpus Analysis: This corpus is the most important one for our experiments in the rule-based approach. In this corpus, the average number of words per sentence is 4.7, the average word length is 4, and the average number of letters per sentence is 19.

We also calculated the positions of the question words of Figure 1 in the Interrogative Corpus; the counts are reported in TABLE III. The word "Naki" is used as a question word in 24 sentences. It is to be noted that 149 of the 700 sentences in this corpus do not contain any Bangla question word.

TABLE III
QUESTION WORD POSITION

Position of Bangla Question Word    Number of Sentences    Percentage of Sentences with the Question Word
1st word                            112                    16%
Last word                           171                    24.43%
2nd word                            167                    23.86%
3rd word                            37                     5.29%
4th word                            4                      0.57%
5th word                            4                      0.57%
6th word                            2                      0.29%
7th word                            1                      0.14%
8th word                            1                      0.14%
Just before the last word           52                     7.43%

2) Other Mega Corpus Analysis: This corpus contains all the sentences without a question mark. As for the Interrogative Corpus, we calculated that the average number of words per sentence is 5, the average word length is 4, and the average number of letters per sentence is 11.

V. EXPERIMENTS & RESULTS

The three approaches produced distinct results; the findings of our experiments are discussed in this section.

A. Evaluation of Rule Based Approach

We tested the Interrogative Corpus against the set of rules prescribed in section III.A. The main basis of this approach is the position of the Bangla question words in the sentence, as reported in TABLE III. Excluding the occurrences of question words as the last word of the sentence, we find that the mean position of the Bangla question words in a sentence is 1.90. We first tested the corpus according to rule 1, then gradually integrated the other rules and observed their combined effect on accuracy. The evaluation is given in TABLE IV.

TABLE IV
RULE BASED APPROACH EVALUATION

Method                              Accuracy %
Rule 1                              40.42
Rule 1 + Rule 2                     64.29
Rule 1 + Rule 2 + Rule 3            71.71
Rule 1 + Rule 2 + Rule 3 + Rule 4   75.14

At this level of accuracy, on a known, small dataset, the rule-based approach is not satisfactory, as the set of rules cannot be made concrete given the diversity and variety of the sentences. From TABLE III, the most likely position of a Bangla question word is the last word of the sentence. Yet in the sentence "Tader jete bollam kothay ar tara gelo kothay", the question word "Kothay" is in the last position while the sentence is not interrogative at all; according to our rules, this sentence would nevertheless be regarded as interrogative. From this, we conclude that the rule-based approach is not pragmatic and that a learning-based approach is needed for identification and classification.

B. Evaluation of Supervised Learning Approach

We analyzed the Cricket Domain Corpus and the Mixed Domain Corpus with the SVM, k-NN, MLP, and logistic regression classifiers. We first used the Cricket Domain Corpus, where the training and test sets come from the same domain, and then the Mixed Domain Corpus, where the training and test sets differ in subject domain.

1) Result of Cricket Domain Corpus: To evaluate the Cricket Domain Corpus, we took 30% of the data as the test set and the rest as the training set.
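As a toy illustration of this supervised setup, the following sketch builds bag-of-words lexical features and trains one of the four classifiers used here, logistic regression, by plain gradient descent. The handful of transliterated sentences and their labels are invented for the example (1 = interrogative); the actual experiments run on the corpora described above.

```python
import numpy as np

# INVENTED toy training data (1 = interrogative, 0 = non-interrogative).
train = [("tumi kothay jabe", 1), ("khela kokhon shuru hobe", 1),
         ("apni ki koren", 1), ("ami bari jabo", 0),
         ("amra shobai valo achi", 0), ("se ajke asbe na", 0)]

vocab = sorted({w for s, _ in train for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

def featurize(sent):
    """Bag-of-words count vector over the training vocabulary."""
    v = np.zeros(len(vocab))
    for w in sent.split():
        if w in idx:
            v[idx[w]] += 1.0
    return v

X = np.array([featurize(s) for s, _ in train])
y = np.array([label for _, label in train], dtype=float)

# Batch gradient descent on the logistic loss
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(interrogative)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

# "kothay", "kokhon", and "jabe" occur only in interrogative training
# sentences, so an unseen sentence built from them lands in that class.
score = featurize("tumi kokhon jabe") @ w + b
pred = score > 0.0
print(bool(pred))   # True
```

The experiments here additionally use SVM, k-NN, and MLP; any off-the-shelf implementations of those classifiers can consume the same kind of lexical feature matrix.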
The distribution of the training and test datasets is represented in TABLE V.

TABLE V
DISTRIBUTION OF THE CRICKET DOMAIN CORPUS FOR SUPERVISED LEARNING

Dataset            Interrogative Sentences    Non-Interrogative Sentences
Training Dataset   1193                       4965
Test Dataset       511                        2128

Though the domain is the same, the corpus has huge variation, as it is taken from real people. We labeled the test and training sets and then measured the accuracy of the SVM, k-NN, MLP, and logistic regression classifiers; the training time was very small. The accuracy levels we observed are described in TABLE VI.

2) Result of the Mixed Domain Corpus: The Mixed Domain Corpus contains data from two domains: cricket, and hand-picked data on university admission test related queries. We employed the same four classifiers on this corpus, this time with the admission test dataset as the training set and the cricket dataset as the test set. The results found on this corpus are stated in TABLE VI.

TABLE VI
RESULT OF SUPERVISED LEARNING APPROACH

Classifier            Accuracy for Cricket Domain Corpus    Accuracy for Mixed Domain Corpus
SVM                   90.36%                                82.64%
k-NN                  80.66%                                80.59%
MLP                   90.36%                                74.66%
Logistic Regression   90.18%                                79.79%

3) Assessment of the results: We observe that the accuracy on the Cricket Domain Corpus differs from the accuracy on the Mixed Domain Corpus. This happens because of the change of training domain in the Mixed Domain Corpus: as the training dataset (the admission dataset) contains formal sentences in a standard form of transliteration, it cannot handle all the real-time variation in the sentences of the Cricket Domain Corpus, so the accuracy level drops. With a test dataset in standard form our classification model would work smoothly, but such a test dataset would ignore the real-time variation of the sentences.

C. Evaluation of the Deep Learning Approach

To implement the CNN, we used our Mixed Domain Corpus, as it contains the largest number of sentences. 10% of the data is taken as the test dataset, another 10% as the validation set, and the remaining 80% as the training set. The distribution of the test, validation, and training sets is described in TABLE VII.

Interrogative and Non-Interrogative are the two output classes of our network. On the validation dataset we observed an accuracy of 85.77%; on the test dataset, the accuracy of the CNN is 84.64%. Using pretrained word2vec vectors for the embedding matrices and a training dataset with more sentences could further improve the accuracy of this approach.

D. Classifying Sentences Using Our CNN Model

Using the CNN, we successfully classified the following sentences.
I indicates the interrogative class and NI indicates the non-interrogative class.

• ajke khela kokhon hobe bolte parben – I
• apnar desher bari kothay – I
• apni koto din dhore ei kaj korchen – I
• amra shobai besh valo achi – NI
• apni ki koren ta diye amader kichu ashe jay na – NI
• shobaike diye ki ar shobkichu korano jay – NI
• apnar naam ta bole jabe ki – NI
• tader bashay kothay sheta ki tumi jano – I

We can even classify sentences like "ajka kala kokhan" and "ame ajka kala dekba", which are extreme cases of transliterated Bangla relative to the popular standard way of spelling. From these classification results, we can say that the CNN can identify transliterated Bangla sentences efficiently.

TABLE VII
DISTRIBUTION OF THE MIXED DOMAIN CORPUS FOR CNN

Dataset              Interrogative Sentences    Non-Interrogative Sentences
Training Dataset     3312                       6122
Validation Dataset   413                        765
Test Dataset         413                        765

E. Evaluation of All Approaches

All three of our approaches show significant and insightful results. To compare the accuracy of the supervised learning approach with that of the deep learning approach, we tested all four classifiers of the supervised learning approach on the test dataset of the deep learning approach, training the classifiers on the merged validation and training datasets of the Mixed Domain Corpus used for the CNN. This experiment gives a proper comparison between the approaches, as the test dataset is the same for all of them. We excluded the rule-based approach from this experiment as it has no future prospects.

TABLE VIII
ACCURACY COMPARISON BETWEEN THE SUPERVISED LEARNING AND THE DEEP LEARNING APPROACHES

Approach              Accuracy for the Test Dataset
SVM                   91.43%
k-NN                  75.98%
MLP                   92.11%
Logistic Regression   91.68%
CNN                   84.64%
The result of this experiment is shownin Table VIII.The results of the supervised learning and the deep learningapproaches dignifies the scope of machine learning techniquesand the deep neural network as the potential solution foridentifying transliterated Bangla interrogative sentences.VI. CONCLUSIONIn this paper, we</s>
have discussed the challenges and difficulties in designing a system to identify Bangla interrogative sentences among transliterated Bangla sentences. Since little work has been done in this area, the problem remains complex and vast. We have introduced three approaches to this problem. The results of the experiments demonstrate that the rule-based approach is not suitable for the identification task. The supervised learning approach has given us insightful results, showing that the applied classifiers can identify the sentences with decent accuracy; however, this approach is also sensitive to the domain of the training and test datasets. With the deep learning approach, we have demonstrated the efficiency of a Convolutional Neural Network (CNN) in identifying and classifying transliterated Bangla interrogative sentences. We are currently working on employing a Recurrent Neural Network (RNN), another state-of-the-art neural network model, for the identification task. We are also developing a more diversified dataset with varied and challenging examples of both interrogative and other forms of transliterated sentences. We look forward to designing a question-answering system for transliterated Bangla using the insights of this paper. The datasets and the code are available at: https://goo.gl/wa1PqY
PACLIC 28

Readability of Bangla News Articles for Children

Zahurul Islam and Rashedur Rahman
AG Texttechnology, Institut für Informatik, Goethe-Universität Frankfurt
zahurul@em.uni-frankfurt.de, kamol.sustcse@gmail.com

Abstract

Many newspapers publish articles for children. Journalists use their experience and intuition to write them, and they might not be aware of the readability of the articles they write. There is no evaluation tool or method available to determine how appropriate these articles are for the target readers. In this paper, we evaluate the difficulty of Bangla news articles that are written for children.

1 Introduction

News is the communication of selected information on current events (Shirky, 2009). This communication is shared through various mediums such as print, online and broadcasting. A newspaper is a printed publication that contains news and other informative articles; many newspapers are also published online. Due to the rapid growth of internet use, more people read news online nowadays than before. Newspapers try to target certain audiences through different topics and stories. Children are also in their target audience: this group is their future readership.

Nowadays children also read news online. One third of children in developed countries such as the Netherlands, the United Kingdom and Belgium browse the internet for news (De Cock, 2012; De Cock and Hautekiet, 2012). Another study by Livingstone et al. (2010) showed that one fourth of British children between the ages of nine and nineteen look for news on the internet. The ratio could be similar in other developed countries where most citizens have access to the internet.

The number of internet users is also increasing in developing countries such as Bangladesh and India. According to the English Wikipedia, more than thirty-three million people in Bangladesh use the internet, and many of them read news online.
Also, the Alexa index shows that three Bangla news sites are in the list of the ten most visited websites from Bangladesh.

All newspapers contain a variety of sections based on different news topics, some of which are specific to children. News for children varies linguistically and cognitively from news for adults. This characteristic is similar to websites dedicated to children: De Cock and Hautekiet (2012) observed that children have difficulties navigating these websites, and the readability of the texts is one of the reasons. There is no specific guideline for writing texts for this target group; journalists use their experience and intuition while writing. However, a text that is very easy to understand for an adult reader could be very difficult for a child. This difficulty may motivate child readers to skip the newspaper in the future.

The readability of a text relates to how easily human readers can process and understand it. Many text-related factors influence readability. These factors include very simple features such as typeface, font size and text vocabulary, as well as complex features like grammatical conciseness, clarity, underlying semantics and lack of ambiguity. Nielsen (2010) recommended a font size of 14 for young children and 12 for adults.

1 http://en.wikipedia.org/wiki/Internet_in_Bangladesh
2 http://en.wikipedia.org/wiki/Alexa_Internet

Copyright 2014 by Zahurul Islam and Rashedur Rahman. 28th Pacific Asia Conference on Language, Information and Computation, pages 309–317.

Readability classification is the task of mapping text onto a scale of readability levels. We explore the task of automatically classifying documents based on their readability levels. As an input, this function operates on
various statistics relating to different text features.

In this paper, we train a readability classification model using a corpus compiled from textbooks, with features inherited from our previous works (Islam et al., 2012; 2014) and from Sinha et al. (2012). We then use the model to classify Bangla news articles for children from several well-known news sources in Bangladesh and West Bengal.

The paper is organized as follows: Section 2 discusses related work. Section 3 describes the cognitive model of children in terms of readability, followed by an introduction of the training corpus and the news articles in Section 4. The features used for classification are described in Section 5, and our experiments and results in Section 6 are followed by a discussion in Section 7. Finally, we present our conclusions in Section 8.

2 Related Work

Most text readability research uses texts written for adult readers; only a few related works focus on texts for children. De Belder and Moens (2010) performed a study that transforms a complex text into a simpler one so that the target text becomes easier for children to understand. They focused on two types of simplification: lexical and syntactic. Two traditional readability formulas, Flesch-Kincaid (Kincaid et al., 1975) and Dale-Chall (Dale and Chall, 1948; Dale and Chall, 1995), were used to measure reading difficulty. De Cock and Hautekiet (2012) performed a usability study to analyze websites for children. The study uses texts from different websites published in English and Dutch. The usability experiment shows that children's prior knowledge plays an important role in reading and understanding texts. They used Flesch-Kincaid (Kincaid et al., 1975) to determine the difficulty level of English texts and a variation of the same formula for Dutch texts.

Both of the related works mentioned above use traditional readability formulas to measure text difficulty.
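Both works above rely on the Flesch-Kincaid grade level (Kincaid et al., 1975). For reference, the formula can be sketched as follows; the vowel-group syllable counter is our own rough heuristic for English and is not part of the original definition.

```python
import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: 0.39 * (words/sentences)
    + 11.8 * (syllables/words) - 15.59 (Kincaid et al., 1975)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Crude heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * n_syllables / len(words) - 15.59)
```

Short, monosyllabic sentences score near (or below) grade zero, while long sentences with many polysyllabic words score much higher. As the discussion of drawbacks below makes clear, such surface formulas do not transfer directly to languages like Bangla.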
However these traditional formulas have sig-nificant drawbacks. These formulas assume thattexts do not contain noise and the sentences are al-ways well-formed. However this is not the case al-ways. Traditional formulas require significant sam-ple sizes of text, they become unreliable for a textthat contains less than 300 words (Kidwell et al.,2011). Si and Callan (2001), Peterson and Osten-dorf (2009) and Feng et al. (2009) show that thesetraditional formulas are not reliable. These formu-las are easy to implement, but have a basic inabil-ity to model the semantic of vocabulary usage in acontext. The most important limitation is that thesemeasures are based only on surface characteristicsof texts and ignore deeper properties. They ignoreimportant factors such as comprehensibility, syntac-tical complexity, discourse coherence, syntactic am-biguity, rhetorical organizations and propositionaldensity of texts. Longer sentences are not alwayssyntactically complex and counting the number ofsyllables of a single word does not show word dif-ficulty. That is why, the validity of these traditionalformulas for text comprehensibility is often suspect.Two recent works on Bangla texts use two of thesetraditional formulas. Das and Roychudhury (2004;2006) show that readability measures proposed byKincaid et al. (1975) and Gunning (1952) work wellfor Bangla. However, the measures were tested onlyfor seven documents, mostly novels.Since there are not many linguistic tools availablefor Bangla, researchers are exploring language in-dependent and surface features to measure difficultyof Bangla</s>
texts. Recently, in our previous works, we proposed a readability classifier for Bangla using information-theoretic features (Islam et al., 2012; Islam et al., 2014). We achieved an F-score of 86.46% by combining these features with some lexical features. Sinha et al. (2012) proposed two readability models that are similar to classical readability measures for English. They conducted a user experiment to identify important structural parameters of Bangla texts. These measures are based on the average word length (WL), the number of poly-syllabic words and the number of consonant-conjuncts. According to their experimental results, consonant-conjuncts play an important role in the readability of texts.

From the beginning of research on text readability, researchers proposed different measures for English (Dale and Chall, 1948; Dale and Chall, 1995; Gunning, 1952; Kincaid et al., 1975; Senter and Smith, 1967; McLaughlin, 1969). Many commercial readability tools use traditional measures. Fitzsimmons et al. (2010) stated that the SMOG (McLaughlin, 1969) readability measure should be preferred for assessing the readability of texts on health care.

Due to recent achievements in linguistic data processing, different linguistic features are now in the focus of readability studies. Islam et al. (2012) summarize related work regarding language model-based features (Collins-Thompson and Callan, 2004; Schwarm and Ostendorf, 2005; Aluisio et al., 2010; Kate et al., 2010; Eickhoff et al., 2011), POS-related features (Pitler and Nenkova, 2008; Feng et al., 2009; Aluisio et al., 2010; Feng et al., 2010), syntactic features (Pitler and Nenkova, 2008; Barzilay and Lapata, 2008; Heilman et al., 2007; Heilman et al., 2008; Islam and Mehler, 2013), and semantic features (Feng et al., 2009; Islam and Mehler, 2013).
Recently, Hancke et al.(2012) found that morphological features influencethe readability of German texts.Due to unavailability of linguistic resources forBangla, we did not explore any of the linguisticallymotivated features. We have inherited features fromIslam et al. (2012; 2014) and Sinha et al. (2012),these features achieve reasonable classification ac-curacy.Children’s reading skills is influenced by theircognitive ability. The following section describeschildren’s cognitive model and text readability.3 Text Readability and ChildrenChildren start building their cognitive skills from anearly age. They use their cognitive skills to per-form different tasks in different environments. Kali(2009) stated that children refine their motor skillsand start to be involved in different social gameswhen they are 5 to 6 years of age. From age of 6 to 8,children start to expand their vision beyond their im-mediate surroundings. Children from 8 to 12 yearsof age acquire the ability to present different entitiesof the world using concepts and abstract represen-tations. Children become more interested in socialinteractions in their teenage years.Children learn to recognize alphabets prior theydeveloped motor skills. This lead to develop theirreading skills. Reading skills require two processes:word decoding and comprehension. Word decod-ing is a process of identifying a pattern of alpha-bets. Children must have the knowledge about theseand their patterns. For example: it is impossibleto recognise any word from any language withoutknowledge of alphabets of that language. A pat-tern of alphabets carry a semantic in their cognitiveknowledge.Comprehension is a process of extracting mean-ing from a sequence of words. The sequence ofwords follow an order. It could be impossible forchildren to understand a sentence where the order ofthe words is random.</s>
Therefore, word order plays an important role in text comprehension. Reading is different from understanding a picture: it extracts meaning from words that are separated by white spaces. The comprehension process is also influenced by the memory system.

The human cognitive system contains three different memories: sensory memory, working memory and long-term memory (Rayner et al., 2012). The sensory store holds raw, un-analyzed information very briefly, ongoing cognitive processes take place in working memory, and long-term memory is the permanent storehouse of knowledge about the world (Kail, 2009). Older children are sometimes better readers because they can simply retrieve a word from memory while reading, whereas younger children might have to sound out a novel word's spelling, although they too are able to retrieve some familiar words. Children derive the meaning of a sentence by combining words into propositions and then combining the propositions to obtain the final meaning. Some children might struggle to recognize words, which makes them unable to establish links between words; children without this problem are able to recognize words and derive meaning from a whole sentence. Generally, older children are better readers due to their working memory capacity: they can store more of a sentence in memory because they are able to identify the propositions in it (De Beni and Palladino, 2000). Older children are able to comprehend more than younger children because of their recognition ability and larger working memory (Kail, 2009). They also know more about the world and are skilled at using appropriate reading strategies.

In summary, children become skilled readers as their working memories develop over time, allowing them to extract propositions and combine them to understand the meaning of a sentence.

4 Data

The goal of this study is to assess the difficulty of news articles that are aimed at children.
The reading abil-ity of children is very different than adult readers.The preceding section describes cognitive develop-ments of children in terms of readability. A childrenwho is 10 years old will have different reading ca-pability than a children who is 15 years of old. Thatis why, a corpus that is categorized by the ages ofchildren would be an ideal resource as training cor-pus. Duarte and Weber (2011) proposed differentcategories of children based on their ages. The cate-gorized list is relevant with our study. However, ourcategorized list is still different than their one. Thecorpus is categorized as following age ranges:• early elementary: 7� 9 years old• readers: 10� 11 years old• old children: 12� 13 years old• teenagers: 14� 15 years old• old teenagers: 16� 18 years old• adults: above 18 years oldIn this paper, we train a model using support vec-tor machine (SVM). This technique requires a train-ing corpus. We compile the training corpus fromtextbooks that have been using for teaching in dif-ferent school levels in Bangladesh. The followingsubsections describe the training corpus and chil-dren news articles.4.1 Training CorpusThe training corpus targets top four age groups de-scribed above. Textbooks from grade two to gradeten are considered as sources for corpus compila-tion. Generally, in Bangladesh children start goingto schools when they are 6 years of old and finish thegrade ten when they are fifteen (Arends-Kuenningand Amin, 2004). In our previous studies, Islam etClasses Docs Avg. DL</s>
<s>Avg. SL Avg. WLVery easy 234 88.28 7.46 5.27Easy 113 150.46 9.09 5.27Medium 201 197.08 10.35 5.47Difficult 113 251.30 12.19 5.66Table 1: The Training Corpus.al. (2012; 2014), we compile the corpus from thesame source. However, the latest version is morecleaned and contains more documents. It containstexts from 54 textbooks. Table 1 shows the statisticsof average document length (DL), average sentencelength (SL) and average word length (WL). Text-books were written using ASCII encoding which re-quired to be converted into Unicode. The classifica-tion distinguishes four readability classes: very easy,easy, medium and difficult. Documents of (school)grade two, three and four are included into the classvery easy. Class easy covers texts of grade five andsix. Texts of grade seven and eight were subsumedunder the class medium. Finally, all texts of gradenine and ten are belong to the class difficult.4.2 News ArticlesThe goal of this paper is observing children news ar-ticles in Bangla on the basis of difficulty levels. Asan Indo-Aryan language Banga is spoken in South-east Asia, specifically in present day Bangladesh andthe Indian states of West Bengal, Assam, Tripuraand Andaman and on the Nicobar Islands. Withnearly 250 million speakers (Karim et al., 2013),Bangla is spoken by a large speech community.However, due to lack of linguistic resources Banglais considered as a low-resourced language.We collected children news articles from fourpopular news sites from Bangladesh and one fromWest Bengal. The sites are: Banglanews243, Bd-news244, Kaler kantho5, Prothom alo6 and Ichch-hamoti7. Banglanews24, Bdnews24 and Ichch-hamoti publish online only. In contrast, Kalerkan-tho and Prothomalo publish as printed newspapersand online. These newspapers publish weekly fea-tured articles for children. 
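The grade-to-class bucketing of Section 4.1 and the Table 1 statistics (DL, SL, WL) can be sketched as follows. The whitespace tokenizer and the period-based sentence splitter are simplifying assumptions of this sketch, not the paper's actual pre-processing (real Bangla text ends sentences with the danda character, "।").

```python
# Grades 2-10 mapped to the four readability classes of Section 4.1.
GRADE_TO_CLASS = {
    2: "very easy", 3: "very easy", 4: "very easy",
    5: "easy", 6: "easy",
    7: "medium", 8: "medium",
    9: "difficult", 10: "difficult",
}

def corpus_stats(documents, sentence_end="."):
    """Average document length (words), sentence length (words) and
    word length (characters), mirroring the columns of Table 1."""
    words = [w for doc in documents for w in doc.split()]
    sentences = [s for doc in documents
                 for s in doc.split(sentence_end) if s.strip()]
    avg_dl = len(words) / len(documents)
    avg_sl = len(words) / len(sentences)
    avg_wl = sum(len(w.strip(".,;")) for w in words) / len(words)
    return avg_dl, avg_sl, avg_wl
```

For Bangla input one would pass `sentence_end="।"` and a Unicode-aware tokenizer; the structure of the computation stays the same.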
We have collected 50 fea-3www.banglanews24.com4www.bangla.bdnews24.com5www.kalerkantho.com6www.prothomalo.com7http://www.ichchhamoti.in/PACLIC 28!313tured articles from each of the sites and pre-processin similar way as the training corpus. However, thenews articles are already written in Unicode andcover different topics ranges from family, society,science and history to sports. Table 2 shows differ-ent statistics of news articles.News sites Average DL Average. SL Average WLBanglanews24 50.14 9.48 5.04Bdnews24 62.66 9.82 4.91Kaler kantho 53.08 8.90 4.89Prothom alo 47.92 9.15 4.89Ichchhamoti 105.50 11.86 4.66Table 2: Statistics of news articles.5 Feature SelectionA limited number of related works available thatdeal texts from Bangla. All of them are lim-ited into traditional readability formulas, lexical andinformation-theoretic features. Any of features donot require any linguistic pre-processing. The fol-lowing subsections describe feature selection in de-tail.5.1 Lexical FeaturesWe inherited a list of lexical features from our pre-vious study Islam et al. (2014). Lexical features arevery cheap to compute and shown useful for differ-ent text categorizing tasks. Average SL and aver-age WL are two of most used features for readabil-ity classification. Recently, Learning (2001) showedthat these are the two most reliable measures thataffect readability of texts. The average SL is a quan-titative measure of syntactic complexity. In mostcases, the syntax of a longer sentence is difficult thanthe syntax of a shorter sentence. However, childrenof a lower grade level are not aware of syntax. Along word that contains many syllables is morpho-logically complex and leads to comprehension prob-lems (Harly, 2008). Generally, most of the frequentwords are shorter in length. These frequent wordsare more</s>
likely to be processed with a fair degree of automaticity. This automaticity increases reading speed and frees memory for higher-level meaning building (Crossley et al., 2008).

Our previous study, Islam et al. (2014), also listed different type-token ratio (TTR) formulas. The TTR indicates the lexical density of a text; a higher value reflects a more diversified vocabulary. This diversification causes difficulties for children: in a diversified text, synonyms may be used to represent similar concepts, and children have difficulty detecting the relationship between synonyms (Temnikova, 2012).

5.2 Information-Theoretic Features

Nowadays, researchers are exploring uncertainty-based features from the field of information theory to measure complexity in natural languages (Febres et al., 2014). Information theory studies the statistical laws of how information can be optimally coded (Cover and Thomas, 2006). The entropy rate plays an important role in human communication in general (Genzel and Charniak, 2002; Levy and Jaeger, 2007): the rate of information transmission per second in a human speech conversation is roughly constant, that is, speakers transmit a constant number of bits per second, maintaining a constant entropy rate. The entropy of a random variable is related to the difficulty of correctly guessing its value. In our previous studies, Islam et al. (2012; 2014) and Islam and Mehler (2013) used different information-theoretic features for text readability classification. Our hypothesis was that the higher the entropy, the less readable the text along the feature represented by the corresponding random variable. We inherited seven information-theoretic features from our previous studies.

5.3 Readability Models for Bangla

Recently, Sinha et al. (2012) proposed a few computational models that are similar to the traditional English readability formulas. A user study was performed to evaluate their performance.
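Two of the inherited feature families, the type-token ratios of Section 5.1 and word entropy from Section 5.2, can be sketched with textbook definitions; these are illustrative and not necessarily the exact variants used in the experiments.

```python
import math
from collections import Counter

def type_token_ratio(tokens):
    """Plain TTR: vocabulary size over token count (lexical density)."""
    return len(set(tokens)) / len(tokens)

def root_ttr(tokens):
    """Root TTR (Guiraud): types over the square root of tokens,
    one common length-corrected TTR variant."""
    return len(set(tokens)) / math.sqrt(len(tokens))

def word_entropy(tokens):
    """Shannon entropy (bits) of the word distribution. The hypothesis
    in Section 5.2: the higher the entropy, the less readable the text."""
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(tokens).values())
```

A text that repeats a small vocabulary has low TTR and low entropy; a text with many distinct, evenly used words scores high on both.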
We also in-herited two of their best performing models:Model3 = �5.23+1.43⇤AWL+ .01⇤PSW (1)Model4 = 1.15+ .02⇤JUK� .01⇤PSW30 (2)In their models, they use structural parame-ters such as average WL, number of jukta-akshars(JUK) or consonant-conjuncts, number of polysyl-labic words (PSW). The PSW30 shows that normal-ized value of PSW over 30 sentences.PACLIC 28!314Features Accuracy F-ScoreModel 3 56.61% 49.13%Model 4 56.38% 52.51%Together 66.27% 65.67%Table 3: Performance of Bangla readability models pro-posed by Sinha et al. (Sinha et al., 2012).In this paper, we use 20 features to generate fea-ture vectors for the classifier. The following sec-tion describes our experiments and results on train-ing corpus and news articles.6 Experiments and ResultsIn order to find the best performing training model,we use 20 features from Islam et al. (2012; 2014)and Sinha et al. (2012). Note that hundred data setswere randomly generated where 80% of the corpuswas used for training and remaining 20% for evalua-tion. The weighted average of Accuracy and F-scoreis computed by considering results of all data sets.We use the SMO (Platt, 1998; Keerthi et al., 2001)classifier model implemented in WEKA (Hall et al.,2009) together with the Pearson VII function-baseduniversal kernel PUK (Üstün et al., 2006).6.1 Training ModelThe traditional readability formulas that were pro-posed for English texts do not work for Bangla texts(Islam et al., 2012; Islam et al., 2014; Sinha et al.,2012). That is why, we did not explore any of thetraditional formulas.At first we build a classifier using two readabilitymodels from Sinha</s>
et al. (2012). The outputs of these models are used as input for the readability classifier. Table 3 shows the evaluation results. The classification accuracy is a little over 66%. Our previous study, Islam et al. (2014), found better classification accuracy using these features; however, the corpus is slightly different. The latest version of the corpus contains more documents for the easy readability class, and the classifier mostly misclassifies documents from this class, labeling many of them as very easy. Misclassification of documents from other readability classes is also observed.

Table 4 shows the performance of the features proposed in our previous study, Islam et al. (2014).

Features                                      Accuracy   F-Score
Average SL                                    61.53%     55.21%
TTR (sentence)                                47.32%     41.31%
TTR (document)                                53.84%     52.61%
Average DW (sentence)                         54.69%     55.28%
Number DW (document)                          62.56%     60.12%
Avg. WL                                       44.63%     40.82%
Corrected TTR                                 59.38%     54.31%
Köhler TTR                                    54.61%     49.61%
Log TTR                                       47.49%     43.30%
Root TTR                                      60.76%     52.49%
Deviation TTR                                 52.32%     47.83%
Word prob.                                    60.76%     54.49%
Character prob.                               50.00%     47.13%
WL prob.                                      51.58%     46.40%
WF prob.                                      52.30%     47.80%
CF prob.                                      60.76%     52.18%
SL and WL prob.                               62.30%     59.74%
SL and DW prob.                               66.92%     63.09%
18 features proposed by Islam et al. (2014)   85.60%     84.46%

Table 4: Performance of features proposed by Islam et al. (2014).

The classification accuracy also drops. The classifier again struggles to classify documents from the easy readability class correctly. However, the information-transmission-based features (i.e., SL and WL prob. and SL and DW prob.) are the best performing features. Therefore, a text with a higher average SL becomes more difficult when it contains more difficult words or longer words.

The classification F-score rises to 87.87% when we combine the features from Islam et al. (2014) and Sinha et al. (2012).

6.2 News Articles Classification

In total, 250 children's news articles were collected as candidate news articles for classification.
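The classifier itself is WEKA's SMO with the PUK kernel, which we do not reimplement here. The sketch below only shows the two inherited Sinha et al. (2012) models (Eqs. 1 and 2) as feature functions, together with the evaluation protocol of 100 random 80/20 splits; `train_fn` and `eval_fn` are hypothetical hooks standing in for the real classifier and scorer.

```python
import random

def sinha_model3(awl, psw):
    """Eq. 1: average word length (AWL) and polysyllabic words (PSW)."""
    return -5.23 + 1.43 * awl + 0.01 * psw

def sinha_model4(juk, psw30):
    """Eq. 2: consonant-conjuncts (JUK) and PSW normalized over
    30 sentences (PSW30)."""
    return 1.15 + 0.02 * juk - 0.01 * psw30

def repeated_holdout(data, labels, train_fn, eval_fn,
                     runs=100, train_frac=0.8):
    """The paper's protocol: 100 random 80/20 splits, scores averaged."""
    scores, idx = [], list(range(len(data)))
    for _ in range(runs):
        random.shuffle(idx)
        cut = int(train_frac * len(idx))
        train, test = idx[:cut], idx[cut:]
        model = train_fn([data[i] for i in train],
                         [labels[i] for i in train])
        scores.append(eval_fn(model,
                              [data[i] for i in test],
                              [labels[i] for i in test]))
    return sum(scores) / len(scores)
```

Averaging over many random splits, rather than a single holdout, reduces the variance of the reported accuracy and F-score on a corpus of this size.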
We considerthe whole training corpus in order to build a train-ing model. The training model is used to classifythe candidate news articles. Among all articles, 160articles are labeled as very easy and 18 articles aseasy. Only 2 articles are labeled as difficult and re-maining 60 articles are labeled as medium. Figure 1shows classification results. More than 60% of newsarticles from newspapers are classified as very easy.However, the amount drops below 20% for the ar-ticles from Icchamoti children magazine. Also arti-cles labeled as difficult belong to this magazine. Theevaluation shows that, among all of the newspapers,news from Banglanews24 are more suitable for chil-dren. Most of articles from that site belong to veryPACLIC 28!315Very easy Easy Medium Di�cult100Readability ClassesBanglanews24 Bdnews24 Icchamoti Kaler kantho Prothom aloFigure 1: Classification of Bangla news articles for chil-dren.easy and easy readability class.Apart from the classification of children news ar-ticles we are also interested in behavior of differentfeatures in classified articles. The following sectiondescribes from interesting observation we notice.7 ObservationArticles from Ichchhamoti has the lowest averageWL. But, have higher values for average DW andaverage SL. Two of the articles from this site are la-beled as difficult. This labeling could be influencedby average DW and average SL. Documents fromtraining corpus have higher average WL.Among the lexical features different TTRs havebeen considered to measure text difficulty (Islam etal., 2014). An article with a higher TTR value sup-posed to be difficult that</s>
an article with a lower TTR value (see Section 5.1). However, we observed different behavior of the TTR formulas. Figure 2 shows the behavior of the different TTR formulas in the classified articles. The average TTR value of articles from the very easy readability class is higher than the average TTR value of articles from the higher difficulty classes. Article length could be the reason for this irregularity: articles from the higher difficulty classes are longer and contain more words.

[Figure 2: Observation of different TTR formulas in classified news articles.]

We also observed that some articles with a lower average SL are labeled as medium, while some articles with a higher average SL are labeled as very easy or easy. We randomly chose such articles and examined their average SL: the average SL of the articles labeled medium is 7.40, and the average SL of the articles labeled easy or very easy is 12.08. However, the articles labeled medium have a higher average word entropy than the articles labeled easy or very easy. This shows that different types of features should be considered together to build a readability classifier.

8 Conclusion

In this paper, our goal was to examine the difficulty levels of news articles targeting children. We therefore built a readability classifier that is able to classify the corresponding news articles into different difficulty levels. Children's news articles are cognitively and linguistically different from articles for adult readers. A readability classifier trained on a textbook corpus is able to classify these articles, although linguistically motivated features could additionally capture linguistic properties of news articles. Lexical features and features related to information density also have good predictive power for identifying text difficulty. The classification results show that the candidate articles are appropriate for children. This study also validates that the features of our previous study, Islam et al. (2014), and the features proposed by
(Sinha et al., 2012) are useful for Banglatext readability analysis.There are many languages in the world which lacka readability measurement tool. A readability clas-sifier for these language could be built by using thefeatures proposed in our previous study Islam et al.PACLIC 28!316(2014).9 AcknowledgmentsWe like to thank Prof. Dr. Alexander Mehlerfor arranging money to travel the conference. Wealso like to thank the anonymous reviewers fortheir helpful comments. This work is funded bythe LOEWE Digital-Humanities project at Goethe-University Frankfurt.ReferencesRa Aluisio, Lucia Specia, Caroline Gasperin, and Car-olina Scarton. 2010. Readability assessment for textsimplification. In NAACL-HLT 2010: The 5th Work-shop on Innovative Use of NLP for Building Educa-tional Applications.Mary Arends-Kuenning and Sajeda Amin. 2004. Schoolincentive programs and childrens activities: Thecase of bangladesh. Comparative Education Review,48(3):295–317.Regina Barzilay and Mirella Lapata. 2008. Modelinglocal coherence: An entity-based approach. Computa-tional Linguistics, 21(3):285–301.Kevyn Collins-Thompson and James P Callan. 2004. Alanguage modeling approach to predicting reading dif-ficulty. In HLT-NAACL.Thomas M. Cover and Joy A. Thomas. 2006. Elementsof Information Theory. Wiley-Interscience, Hoboken.Scott A Crossley, Jerry Greenfield, and Danielle S McNa-mara. 2008. Assessing text readability using cogni-tively based indices. Tesol Quarterly, 42(3):475–493.Edgar Dale and Jeanne S. Chall. 1948. A formula forpredicting readability. Educational Research Bulletin,27(1):11–20+28.Edgar Dale and Jeanne S. Chall. 1995. ReadabilityRevisited: The New Dale-Chall Readability formula.Brookline Books.Sreerupa Das and Rajkumar Roychoudhury. 2004. Test-ing level of readability in bangla novels of</s>
Bankim Chandra Chattopadhyay w.r.t. the density of polysyllabic words. Indian Journal of Linguistics, 22:41–51.

Sreerupa Das and Rajkumar Roychoudhury. 2006. Readability modeling and comparison of one and two parametric fit: A case study in Bangla. Journal of Quantitative Linguistics, 13(1).

Jan De Belder and Marie-Francine Moens. 2010. Text simplification for children. In Proceedings of the SIGIR Workshop on Accessible Search Systems, pages 19–26.

Rossana De Beni and Paola Palladino. 2000. Intrusion errors in working memory tasks: Are they related to reading comprehension ability? Learning and Individual Differences, 12(2):131–143.

Rozane De Cock and Eva Hautekiet. 2012. Children's news online: Website analysis and usability study results (the United Kingdom, Belgium, and the Netherlands). Journalism and Mass Communication, 2(12):1095–1105.

Rozane De Cock. 2012. Children and online news: A suboptimal relationship. Quantitative and qualitative research in Flanders. E-youth: Balancing between opportunities and risks.

Sergio Duarte Torres and Ingmar Weber. 2011. What and how children search on the web. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pages 393–402. ACM.

Carsten Eickhoff, Pavel Serdyukov, and Arjen P. de Vries. 2011. A combined topical/non-topical approach to identifying web sites for children. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining.

Gerardo Febres, Klaus Jaffé, and Carlos Gershenson. 2014. Complexity measurement of natural and artificial languages. Complexity.

Lijun Feng, Noémie Elhadad, and Matt Huenerfauth. 2009. Cognitively motivated features for readability assessment. In Proceedings of the 12th Conference of the European Chapter of the ACL.

Lijun Feng, Martin Jansche, Matt Huenerfauth, and Noémie Elhadad. 2010. A comparison of features for automatic readability assessment.
In The 23rd International Conference on Computational Linguistics (COLING).
P. R. Fitzsimmons, B. D. Michael, J. L. Hulley, and G. O. Scott. 2010. A readability assessment of online Parkinson's disease information. The Journal of the Royal College of Physicians of Edinburgh, 40:292–296.
Dmitriy Genzel and Eugene Charniak. 2002. Entropy rate constancy in text. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002).
Robert Gunning. 1952. The Technique of Clear Writing. McGraw-Hill; Fourth Printing Edition.
Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. ACM SIGKDD Explorations, 11(1):10–18.
Julia Hancke, Sowmya Vajjala, and Detmar Meurers. 2012. Readability classification for German using lexical, syntactic and morphological features. In 24th International Conference on Computational Linguistics (COLING), Mumbai, India.
Trevor A. Harley. 2008. The Psychology of Language. Psychology Press, Taylor and Francis Group.
Michael Heilman, Kevyn Collins-Thompson, and Maxine Eskenazi. 2007. Combining lexical and grammatical features to improve readability measures for first and second language text. In Proceedings of the Human Language Technology Conference.
Michael Heilman, Kevyn Collins-Thompson, and Maxine Eskenazi. 2008. An analysis of statistical models and features for reading difficulty prediction. In Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications (EANL).
Zahurul Islam and Alexander Mehler. 2013. Automatic readability classification of crowd-sourced data based on linguistic and information-theoretic features. In 14th International Conference on Intelligent Text Processing and Computational Linguistics.
Zahurul Islam, Alexander Mehler, and Rashedur Rahman. 2012. Text readability classification of textbooks of a low-resource language.
In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation.
Zahurul Islam, Md Rashedur Rahman, and Alexander Mehler. 2014. Readability classification of Bangla texts. In Computational Linguistics and Intelligent Text Processing, pages 507–518. Springer.
Robert V. Kail. 2009. Children and Their Development. Pearson Education.
M. A. Karim, M. Kaykobad,
and M. Murshed. 2013. Technical Challenges and Design Issues in Bangla Language Processing. IGI Global.
Rohit J. Kate, Xiaoqiang Luo, Siddharth Patwardhan, Martin Franz, Radu Florian, Raymond J. Mooney, Salim Roukos, and Chris Welty. 2010. Learning to predict readability using diverse linguistic features. In 23rd International Conference on Computational Linguistics (COLING 2010).
S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. 2001. Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation, 13(3):637–649.
Paul Kidwell, Guy Lebanon, and Kevyn Collins-Thompson. 2011. Statistical estimation of word acquisition with application to readability prediction. Journal of the American Statistical Association, 106(493):21–30.
J. Kincaid, R. Fishburne, R. Rogers, and B. Chissom. 1975. Derivation of new readability formulas for Navy enlisted personnel. Technical report, US Navy, Branch Report 8-75, Chief of Naval Training.
Renaissance Learning. 2001. The ATOS readability formula for books and how it compares to other formulas. Madison, WI: School Renaissance Institute.
Roger Levy and T. Florian Jaeger. 2007. Speakers optimize information density through syntactic reduction. Advances in Neural Information Processing Systems, pages 849–856.
Sonia Livingstone, Leslie Haddon, Anke Görzig, and Kjartan Ólafsson. 2010. Risks and safety for children on the internet: The UK report. Politics, 6(2010):1.
G. Harry McLaughlin. 1969. SMOG grading – a new readability formula. Journal of Reading, 12(8):639–646.
Jakob Nielsen. 2010. Children's websites: Usability issues in designing for kids. Jakob Nielsen's Alertbox.
Sarah E. Petersen and Mari Ostendorf. 2009. A machine learning approach to reading level assessment. Computer Speech and Language, 23(1):89–106.
Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
John C.
Platt. 1998. Fast training of support vector machines using sequential minimal optimization. MIT Press.
Keith Rayner, Alexander Pollatsek, Jane Ashby, and Charles Clifton Jr. 2012. Psychology of Reading. Psychology Press.
Sarah E. Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005).
R. J. Senter and E. A. Smith. 1967. Automated readability index. Technical report, Wright-Patterson Air Force Base.
Clay Shirky. 2009. Here Comes Everybody: How Change Happens When People Come Together. Penguin UK.
Luo Si and Jamie Callan. 2001. A statistical model for scientific readability. In Tenth International Conference on Information and Knowledge Management.
Manjira Sinha, Sakshi Sharma, Tirthankar Dasgupta, and Anupam Basu. 2012. New readability measures for Bangla and Hindi texts. In COLING (Posters), pages 1141–1150.
Irina Temnikova. 2012. Text Complexity and Text Simplification in the Crisis Management Domain. Ph.D. thesis, University of Wolverhampton.
B. Üstün, W. J. Melssen, and L. M. C. Buydens. 2006. Facilitating the application of support vector regression by using a universal Pearson VII function based kernel. Chemometrics and Intelligent Laboratory Systems, 81(1):29–40.
Daffodil International University Library
Digital Institutional Repository
Computer Science and Engineering Undergraduate Project Report
2018-05
Bangla News Classification Using Machine Learning
Ahmad, Mostak
Daffodil International University
http://hdl.handle.net/20.500.11948/2636

BANGLA NEWS CLASSIFICATION USING MACHINE LEARNING

Mostak Ahmad, ID: 142-15-3800
Fayjun Nahar Mishu, ID: 142-15-3665
S. M. Shakib Limon, ID: 142-15-3842

This report is presented in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering.

Supervised by: Md. Riazur Rahman, Senior Lecturer, Department of CSE, Daffodil International University
Co-supervised by: Ahmed Al Marouf, Lecturer, Department of CSE, Daffodil International University

DAFFODIL INTERNATIONAL UNIVERSITY, DHAKA, BANGLADESH, MAY 2018

©Daffodil International University

DECLARATION

We hereby declare that this project has been done by us under the supervision of Md. Riazur Rahman, Senior Lecturer, Department of CSE, Daffodil International University. We also declare that neither this project nor any part of it has been submitted elsewhere for the award of any degree or diploma.

Supervised by: Md. Riazur Rahman, Senior Lecturer, Department of CSE, Daffodil International University
Submitted by: Mostak Ahmad (ID: 142-15-3800), Fayjun Nahar Mishu (ID: 142-15-3665), and S. M. Shakib Limon (ID: 142-15-3842), Department of CSE, Daffodil International University

ACKNOWLEDGEMENT

First, we express our heartiest thanks and gratefulness to almighty Allah, whose divine blessing made it possible for us to complete this final year project successfully. We are truly grateful to, and wish to express our profound indebtedness to, Md.
Riazur Rahman, Senior Lecturer, Department of CSE, Daffodil International University, Dhaka. Our supervisor's deep knowledge and keen interest in active learning model design influenced us to carry out this project. His endless patience, scholarly guidance, continual encouragement, constant and energetic supervision, constructive criticism, valuable advice, and his reading and correcting of many inferior drafts at every stage made it possible to complete this project. We would also like to express our heartiest gratitude to Dr. Syed Akhter Hossain, Head, Department of CSE, for his kind help in finishing our project, and to the other faculty members and the staff of the CSE department of Daffodil International University. We thank all of our course mates at Daffodil International University who took part in discussions while completing the course work. Finally, we must acknowledge with due respect the constant support and patience of our parents.

ABSTRACT

What is the distance between the countries of the earth, from south to north and from east to west? If you answer this question from the perspective of the present time, you will see that, actually, there is no distance at all. At present, people get all sorts of news happening around the world instantly, within a couple of seconds, and this has become possible only because of virtual news portals. It is true that online news portals publish news live, but it is disappointing that users do not like all sorts of the news published in a news portal. At that time,
it became necessary to build a platform that can identify each user's news preferences and publish news accordingly. Classifying news by user preference requires analyzing the news text. A lot of such work has been done for English news by now, but there is very limited work on Bangla news, even though Bangla is one of the eight most widely spoken languages in the world. This inspired us to do a research project on this topic. In our project we work with Bangla news collected from the Prothom Alo newspaper. Starting from preprocessing of the news text, we carry out all the procedures needed to classify the news text using a machine learning classifier, the Naive Bayes classifier. Finally, we develop a user interface that takes a news text and shows the class of that news.

TABLE OF CONTENTS

CONTENTS PAGE
Board of examiners ii
Declaration iii
Acknowledgements iv
Abstract v
List of Figures viii
List of Tables ix

CHAPTER 1: INTRODUCTION 1-4
1.1 Introduction 1
1.2 Objectives 2
1.3 Motivation 2
1.4 Rationale of the Study 3
1.5 Research Questions 3
1.6 Expected Output 3
1.7 Report Layout 4

CHAPTER 2: BACKGROUND 5-9
2.1 Introduction 5
2.2 Related Works 5
2.3 Research Summary 9
2.4 Challenges

CHAPTER 3: RESEARCH METHODOLOGY 10-15
3.1 Introduction 10
3.2 Research Subject and Instrumentation 10
3.3 Data Collection Procedure 10
3.4 Data Pre-Processing 10
3.5 Work Flow of Identifying News Category 11
3.6 Implementation Requirements 15

CHAPTER 4: EXPERIMENTAL RESULTS AND DISCUSSION 16-29
4.1 Introduction 16
4.2 Raw Data 16
4.3 Cleaning Raw Data 16
4.4 Creating Input File 17
4.5 Excluded Words Removal 17
4.6 Features Selection and Extraction 18
4.7 Building Model and Fit Dataset for Classifier 18
4.8 Expected Result 19
4.9 Accuracy of Model 20
4.10 Summary 29

CHAPTER 5: SUMMARY, CONCLUSION, RECOMMENDATION AND IMPLICATION FOR FUTURE RESEARCH 30
5.1 Summary of the Study 30
5.2 Conclusions 30
5.3 Recommendations 30
5.4 Implication for Further Study 30
REFERENCES 31-32
APPENDIX
PLAGIARISM REPORT SCREENSHOT

LIST OF FIGURES

FIGURES PAGE NO
Figure 2.2.1: List of predefined categories. 7
Figure 2.2.2: Classification procedure N-gram. 8
Figure 3.5.1: Shows the excluded Bangla words. 11
Figure 3.5.2: Proposed working flow chart for classification. 13
Figure 3.5.3: Classification process flowchart. 14
Figure 4.2.1: Experimental raw data. 16
Figure 4.2.2: Tab-separated Bangla text. 17
Figure 4.5.1: Bangla text with excluded words removed. 18
Figure 4.7.1: Dataset chart ratio. 18
Figure 4.8.1: Graphical user interface. 19
Figure 4.8.2: Experimental output of a Bangla news class. 19
Figure 4.8.3: Experimental output of another Bangla news class. 20
Figure 4.9.1: Error rate of K value. 23

LIST OF TABLES

TABLES PAGE NO
Table 2.2.1: Different n-grams for the word "বাংলা" (spaces are shown with "_").
Table 4.9.1: Confusion matrix for Naive Bayes.
Table 4.9.2: Naive Bayes classified news types.
Table 4.9.3: Precision, recall, F1-score for Naive Bayes.
Table 4.9.4:
Confusion matrix for K-Nearest Neighbors.
Table 4.9.5: K-Nearest Neighbors classified news types.
Table 4.9.6: Precision, recall, F1-score for K-Nearest Neighbors.
Table 4.9.7: Confusion matrix for Decision Tree.
Table 4.9.8: Decision Tree classified news types.
Table 4.9.9: Precision, recall, F1-score for Decision Tree.
Table 4.9.10: Confusion matrix for Random Forest.
Table 4.9.11: Random Forest classified news types.
Table 4.9.12: Precision, recall, F1-score for Random Forest.
Table 4.9.13: Confusion matrix for Support Vector Machine.
Table 4.9.14: Support Vector Machine classified news types.
Table 4.9.15: Precision, recall, F1-score for Support Vector Machine.
Table 4.9.16: Comparison of precision across all classifiers.
Table 4.9.17: Comparison of recall across all classifiers.
Table 4.9.18: Comparison of F1-score across all classifiers.
Table 4.9.19: Comparison of algorithm accuracy.

CHAPTER 1
INTRODUCTION

1.1 Introduction
Today we live in a world with hardly any borders between nations. An event may happen thousands of miles away, yet in the present world it takes less than a second for the news to spread everywhere. We can read thousands of news items from anywhere on the planet thanks to the web, computers, and modern technology. News portals are responsible for spreading news quickly through the web, and Bangla news portals are not falling behind: many Bangla news portals now exist online, always alert to new events in our surroundings and keen to publish instant, hot, and exclusive news. Some Bangladeshi news portals are:
- Daily Prothom Alo
- Ittefaq
- Samakal
- Dainik Amader Shomoy
- Daily Naya Diganta
- Jai Jai Din
- Bangladesh Pratidin, etc.

These portals continuously publish up-to-date news, and our project deals with this Bangla news. We developed a very simple website that can identify the category of a news item given by a user. Before developing the website, we studied the theoretical concepts behind it: by studying various papers related to this work, we designed methods that use machine learning approaches to classify news articles.

1.2 Objectives
- To study how to classify or categorize Bangla news using classifier algorithms.
- To develop a platform able to detect the category of a given Bangla news item.
- To visualize some analysis of Bangla news classification as classified by the algorithms.

1.3 Motivation
We see that news portals publish all kinds of news, but not all people prefer all kinds of news. Some people prefer to read sports news more than political news. Some people get a kick out
of reading political news more than other news, and some enjoy entertainment news; it really depends on individual choice. Sometimes it becomes quite boring to see news that the user does not actually prefer. A news portal becomes most useful when it shows news according to the specific user's choice, and for that the first task is to identify the news category. We find plenty of work on news classification in English, but very little on Bangla. If Bangla news classification receives more research attention, it can be used in many real applications. Besides this, the present world is focusing heavily on recommender systems: users expect that the better items will be recommended to them by the system. A system capable of recommendation must be able to make decisions by itself, and to do that it needs data mining capability. All of this made us interested in this kind of research. Our work is fully based on machine learning methods and includes several data mining techniques as well.

1.4 Rationale of the Study
There is no doubt that there is a great deal of work on Natural Language Processing (NLP) for English, and these approaches are used in many automated systems as well as robotic systems. NLP for Bangla, however, is very rare. To develop more automated applications, or to make machine learning approaches more effective for Bangla, there is no alternative to working with Bangla text. This made us interested in Bangla news classification. In the present time, we see that text editors are much more intelligent.
These have features like auto-correction, grammar checking, and auto-suggestion, all outcomes of Natural Language Processing. Such features are mostly available for English and are very rare for Bangla text. This, too, led us to work with Bangla news and Bangla text in general.

1.5 Research Questions
- Can we collect raw data of Bangla news?
- Can we pre-process the raw data for use with machine learning approaches?
- Can the Multinomial Naive Bayes classifier algorithm be used on the pre-processed data?
- Can the machine learning process correctly identify the category of a given Bangla text?

1.6 Expected Output
The expected result of this research-based project is to build an algorithm, or a complete efficient method, that classifies a given Bangla news item with respect to the model built from the trained dataset.

1.7 Report Layout
The report will be
organized as follows. Chapter 1 provides a summary of this research-based project; introductory discussion is the key term of this first chapter, together with what motivated us to do such a project and, most importantly, the rationale of the study. The research questions and the expected outcome are discussed in the last sections of the chapter. Chapter 2 covers what has already been done in this domain, then shows the scope arising from the limitations of that work, and finally explains the main obstacles and challenges of this research. Chapter 3 is the theoretical discussion of this research work: it elaborates the statistical methods used, shows the procedural approach of the machine learning classifier (Multinomial Naive Bayes), and, in its last section, presents the confusion matrix analysis used to validate the model and show the accuracy of the classifier. Chapter 4 presents the outcome of the research and the project, with some experimental screenshots to make the project concrete. Chapter 5 concludes the report, gives recommendations, and closes by showing the limitations of our work, which can be future scope for others who want to work in this field.

CHAPTER 2
BACKGROUND

2.1 Introduction
This chapter reviews the related work already done by several researchers in this field.
Besides giving a clear explanation of that work, it indicates what its limitations were and, in conclusion, describes the scope of our research as well as its challenges.

2.2 Related Works
It is a matter of sorrow that very little work in this field has been done so far, although such work is increasing day by day. There are plenty of resources for the English language [5], as much work has already been done there. Recently, not only Bangla but also other languages such as Chinese [17], Indonesian [7,8], Hindi [4,9], Urdu [10], Arabic [3], and English-Hindi [6] have been included in Natural Language Processing related work, and their resources are being enriched with each new research effort. Some works related to our research are described briefly below.

Analysis of N-gram based text categorization for Bangla in a newspaper corpus
The objective of any classification task is to build a set of models using preprocessed datasets; these datasets are the core of such a project. The datasets are divided into two parts, a training dataset and
a testing dataset, and these two subsets are then used to build the model. The intention of building such a model is to predict the class of unseen documents. A research team from BRAC University worked on such a topic, basing their work on N-gram based classification [1]. Text categorization means automatically assigning documents to some predefined categories or groups based on their content. Their main focus was to examine whether n-gram based classification can be applied to Bangla, and they also analyzed the performance of their approach.

What is an N-gram?
When something is based on N-grams, the first question naturally raised is what an N-gram actually is. Briefly, an N-gram is simply a sub-sequence: a sub-sequence of n items from a given sequence. There are applications based on the N-gram concept in computational linguistics, where such models are used for predicting words or characters in various applications. As an example, the word বাংলা would be composed of the character-level n-grams shown below.

Table 2.2.1: Different n-grams for the word "বাংলা" (spaces are shown with "_").

Thus, we can summarize the idea of an n-gram as a sequence of characters of length n extracted from a document. Choosing the value of n matters: the best value of n depends on the particular corpus of documents. To generate the n-gram vector for a document, a window of fixed character length is first moved through the text; it then slides forward by a fixed number of characters.
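The sliding-window procedure just described can be sketched in a few lines of Python. The helper name `char_ngrams` and the underscore padding are illustrative choices following the convention of Table 2.2.1, not code from the cited BRAC University study:

```python
def char_ngrams(word, n):
    """Character n-grams of `word`, padded with '_' so that word
    boundaries are represented, as in Table 2.2.1."""
    padded = "_" + word + "_"
    # slide a window of length n forward one character at a time
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# bigrams and trigrams for a short Latin-script stand-in word;
# a Bangla word such as "বাংলা" works the same way
print(char_ngrams("abc", 2))  # ['_a', 'ab', 'bc', 'c_']
print(char_ngrams("abc", 3))  # ['_ab', 'abc', 'bc_']
```

The n-gram profile of a document is then just the multiset of these windows over all of its words.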
Why N-gram Based Text Categorization?
It commonly appears that human languages have some words that occur much more frequently than others. To understand this concept better, Zipf's Law is a good example; it can be stated as follows: "The nth most common word in a human language text occurs with a frequency inversely proportional to n." That is, if f is the frequency of a word and r is the rank of the word in the list ordered by frequency, Zipf's Law states

f = k / r

The implication of this law is that there is always a group of words in a text which, in terms of frequency of use, dominates most of the other words of the language.

Test Data
For their experiment, they first selected 25 test documents at random. These 25 documents
were taken from each of the six categories, defined from a one-year Prothom Alo news corpus, so the total number of test cases was 150. The list of predefined categories and their content sources is as follows.

Figure 2.2.1: List of predefined categories.

The procedure of their workflow is as follows:

Figure 2.2.2: Classification procedure N-gram.

Observation: In their experiment, they found that character-level trigrams perform better than any other n-grams. They attributed this to the trigram's ability to hold more information for modeling the language.

A machine learning approach for authorship attribution for Bengali blogs
This research work describes an authorship attribution system for Bengali blog texts. The authors presented a new Bengali blog corpus containing almost 3,000 entries written by three authors. They offered a classification-based framework whose approach relied on lexical features: character bigrams and trigrams, word n-grams, and stop words. They achieved over 99% accurate results on their dataset using Multi-Layer Perceptrons (MLP) among the four classifiers [2]. They concluded that MLP can produce very good results for big datasets, and also claimed that lexical n-gram based features can be the best choice for any authorship attribution system.

2.3 Research Summary
From the discussion above of various research works by different teams, it appears that research on Bangla text has recently been increasing day by day. Some good outcomes already support this statement. Although enough resources are not yet present, the hope is that this field is becoming more resourceful with each passing day.
2.4 Challenges
The main challenge of this work is dealing with the datasets. Cleaning the dataset needs efficient approaches, but there are not enough recognized approaches to do it. Another challenge is the lack of resources on this topic.

CHAPTER 3
RESEARCH METHODOLOGY

3.1 Introduction
This chapter deals mainly with the theoretical side of the research work and gives a clear understanding of the concepts involved. First, the research subject and instrumentation are explained briefly. Since data are the heart of any data mining or machine learning process, the data collection process is then described. The chapter closes with an explanation of the project's statistical theories and a clear account of the implementation requirements.

3.2 Research Subject and Instrumentation
By research subject we mean the research area that is studied for clear understanding; the research subject is also responsible for giving the right knowledge of various research parameters. Instrumentation, on the other hand, refers to the instruments or tools used
by the researchers.

3.3 Data Collection Procedure
For research in any particular field, the first and most immediate need is data. Data are, in fact, considered the heart of the machine learning process, and for our study there was no alternative to collecting them, which became our most challenging task. We collected our data from the most popular Bangla news portal of Bangladesh, Prothom Alo. Our Bangla news was gathered from this site into a corpus: almost 4 years of news, stored in text file format.

3.4 Data Pre-Processing
When we deal with raw data, success largely depends on how the data are pre-processed: the more efficiently the data are pre-processed, the more accurate the outcome will be. In a word, it is the first test for this kind of research work. Our raw data contain some HTML tag names, so they must be removed from the documents; removing all HTML tag names from the news was our first responsibility. We then clean the unnecessary spaces from each document and remove all newlines to arrange each item on a single line, so that after compiling any news file, each line is treated as one news item. Lastly, each individual news item is assigned a number identifying its category. We use the integers 0-8 for the nine news categories: 0 Politics, 1 Crime, 2 Sports, 3 Entertainment, 4 Business, 5 Life Style, 6 Accident, 7 National, 8 International. Finally, after assigning a specific number to every news item, we generate a tsv (tab-separated values) file; this tsv file is our pre-processed data together with its class. In this way, every categorized news item is assigned its particular number.
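A minimal sketch of this pre-processing step — stripping HTML tag names, squeezing whitespace onto one line, and attaching the category label as a tab-separated line. The function names and the label map's English spellings are our own illustrative choices; only the 0-8 category numbering comes from the text above:

```python
import re

# Hypothetical label map; only the 0-8 numbering is from the report.
LABELS = {"politics": 0, "crime": 1, "sports": 2, "entertainment": 3,
          "business": 4, "lifestyle": 5, "accident": 6, "national": 7,
          "international": 8}

def clean_article(raw):
    """Strip HTML tag names and squeeze spaces/newlines into one line."""
    text = re.sub(r"<[^>]+>", " ", raw)       # remove HTML tags
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

def to_tsv_line(raw, category):
    """One article becomes one line of the TSV file: text <TAB> label."""
    return f"{clean_article(raw)}\t{LABELS[category]}"

print(to_tsv_line("<p>Some  news\ntext</p>", "sports"))  # -> 'Some news text\t2'
```

Writing one such line per article, category by category, yields the per-category TSV files described here.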
Each news class results in its own tsv file. We then use another python file named join.py to join all of the tsv files into a single tsv file: all of the categorized news tsv files are placed in one folder, and given just the name of that folder, the script merges every file with the tsv extension into one file.

3.5 Work Flow of Identifying News Category

Removing Excluded Words
We have made a list of Bangla words that are effectively meaningless for identifying a news category; we call these excluded words. We store all selected excluded words in a text file named excluded_word_list_out.txt. When the program runs, it first removes all of the excluded words from our input file.

Figure 3.5.1: Shows the excluded Bangla words.

Split and Join: To remove the excluded words from the dataset, the whole dataset is first split. This process splits each news item into words, so after the splitting process, the
whole news item becomes a collection of words only. Every word is then checked against the excluded word list; any word from the dataset that matches the excluded word list is removed from the dataset. After all remaining words have been checked, the joining process starts. The joining process is very simple: it just joins the words back into each news item.

Features Extraction: This phase is the main part of the news classification; it decides how the classification will be done. We use word counts as our extracted features. There is a built-in method for this in sklearn, which we simply import and use for our feature extraction.

Building Model: After successful feature extraction, we are ready to build our model, which is accomplished by training our machine. We split our dataset 3:1 - three portions are used as the training dataset and the remaining portion for testing. That is, 75% of the data is used for training and the remaining 25% is treated as testing data.

Classifier Fitting: At this stage, our machine is ready to fit the classifiers. We use several classifiers - Naive Bayes, Decision Tree, K-Nearest Neighbors, Support Vector Machine, and Random Forest - to classify our news text. Sklearn has these classifiers built in; we just import and fit them.

Predict the Category: This is the final stage of our news classification approach. Here the model is ready for testing on Bangla text input: given an input text, the model can classify it using any of the above classifiers.

Flow Chart:

Figure 3.5.2: Proposed working flow chart for classification.
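The workflow of these stages (word-count features, a 3:1 split, and an sklearn classifier) can be sketched roughly as follows. The toy texts, the labels, and the use of `CountVectorizer` for the "word count" features are illustrative assumptions; only Multinomial Naive Bayes, the 75/25 split, and the category numbers come from the report:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Tiny invented stand-in corpus; the report's real data are Prothom Alo
# news items read from the merged TSV file.
texts = ["team wins the cricket match", "market prices rise again",
         "goal scored in the final", "stocks fall sharply today",
         "player injured before the game", "central bank cuts rates"]
labels = [2, 4, 2, 4, 2, 4]  # 2 = Sports, 4 = Business (report's numbering)

# word-count feature extraction ("Features Extraction" above)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# 3:1 split - 75% training, 25% testing, as in "Building Model"
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

# fit one of the classifiers named above and predict a new item
clf = MultinomialNB()
clf.fit(X_train, y_train)
pred = clf.predict(vectorizer.transform(["match in the final"]))
print(pred[0])
```

Swapping `MultinomialNB` for `DecisionTreeClassifier`, `KNeighborsClassifier`, `SVC`, or `RandomForestClassifier` reproduces the other runs compared in Chapter 4.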
Figure 3.5.3: Classification process flowchart.

3.6 Implementation Requirements
After proper analysis of all the necessary statistical and theoretical concepts and methods, a list of requirements was generated for a work on Bangla news classification. The probable necessary things are:
Hardware/Software Requirements
 Operating System (Windows 7 or above)
 Hard Disk (minimum 4 GB)
 RAM (more than 1 GB)
 Web Browser (preferably Chrome)
Developing Tools
 Python Environment
 Spyder (Anaconda3)
 Django 1.11 (for UI)
 Notepad++
 Bootstrap

CHAPTER 4
EXPERIMENTAL RESULTS AND DISCUSSION

4.1 Introduction
This chapter mainly focuses on a descriptive analysis of the data used in the research as well as on the experimental results of our project.

4.2 Raw Data
Our raw data come from the most renowned news portal of Bangladesh, Prothom Alo. We collected the data using a corpus, and the collected news was stored in text document files. In these files, the data appear with some HTML tag names. Our raw data look like: Figure 4.2.1: Experimental raw data. It has therefore become necessary to clean the data, that is, to pre-process the raw data to prepare it for the model.

4.3 Cleaning Raw Data
We use a Python script to help with our data pre-processing task. This script is responsible for:
i. Removing all HTML tag names.
ii. Removing unnecessary spaces from the text.
iii. Removing all newlines within each news item and arranging it on a single line.
iv. Assigning an integer number that predefines the category of each news item.
This script results in a Tab Separated Values (tsv) formatted file, which looks like: Figure 4.2.2: Tab separated Bangla text. By this process we get each category's news in an individual file, where the output data are pre-processed and categorized.

4.4 Creating Input File
After the data cleaning phase, we get nine categorical tsv files, as we are working with nine categories in this research: Politics, Crime, Sports, Entertainment, Business, Life Style, Accident, National and International. Hence, after successful pre-processing, we have these nine categorical news files in hand. Then, to perform Natural Language Processing on the Bangla news, we must join all these files into one. For this, we use another Python script named join.py. This script takes as input the name of the folder containing all the tsv files and produces a single file into which all the news items are merged.

4.5 Excluded Words Removal
We developed Python code to classify a news item into a category. After joining all news into one file, our system is ready for building a model; before that, a small cleaning step is performed.
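The cleaning script of Section 4.3 and the join.py merge of Section 4.4 can be sketched as follows. This is a hedged sketch: the sample markup, category codes and file names below are hypothetical stand-ins for the crawled Prothom Alo files.

```python
# Minimal sketch of the cleaning script (steps i-iv) and of join.py.
import glob
import os
import re
import tempfile

def clean_news(raw: str, category: int) -> str:
    text = re.sub(r"<[^>]+>", " ", raw)       # i. remove HTML tag names
    text = re.sub(r"\s+", " ", text).strip()  # ii./iii. drop extra spaces, newlines
    return f"{text}\t{category}"              # iv. tab-separated text + category id

# Hypothetical per-category files written into one folder.
folder = tempfile.mkdtemp()
samples = {"sports.tsv": clean_news("<p>goal\nmatch</p>", 2),
           "politics.tsv": clean_news("<b>vote</b>  election", 0)}
for name, row in samples.items():
    with open(os.path.join(folder, name), "w", encoding="utf-8") as f:
        f.write(row + "\n")

# join.py: given the folder name, merge every .tsv file into one file.
merged_path = os.path.join(folder, "all_news.tsv")
with open(merged_path, "w", encoding="utf-8") as out:
    for path in sorted(glob.glob(os.path.join(folder, "*.tsv"))):
        if path != merged_path:               # skip the output file itself
            with open(path, encoding="utf-8") as f:
                out.write(f.read())

print(open(merged_path, encoding="utf-8").read())
```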
We create a list containing some Bangla words that are actually not related to the category of a news item. We call these Excluded words and name the list the Excluded words list. We simply check whether any excluded words are present in our input file; if any exist, they are removed. Figure 4.5.1: Bangla removed excluded text.

4.6 Feature Selection and Extraction
This phase, feature selection and extraction, is the main part of the classification approach. It decides from which perspective the classification will be done. We use word count as our feature and create it accordingly.

4.7 Building Model and Fit Dataset for Classifier
To build a model, we separate our dataset into two parts:
 Training Dataset
 Testing Dataset
We use a 3:1 ratio to prepare our model: three portions of the dataset are treated as the training dataset and the remaining portion is considered the testing dataset. Figure 4.7.1: Dataset chart ratio (75% training data, 25% testing data). In terms of percentage, 75% of the data is used for training and 25% for testing, and this makes our expected model. As we are dealing with several classifiers, we use them by importing the sklearn package. Each classifier produces an integer that represents the category of the expected news item.

4.8 Experimental Result
After completing the classification of Bangla news, the user interface is shown in Figure 4.8.1 and Figure 4.8.2. It provides an experimental input field where a user can enter any kind of Bangla news text. Figure 4.8.1: Graphical user interface. Figure 4.8.2: Experimental output of Bangla news class "Sports". Figure 4.8.3: Shows the experimental output of "Entertainment News".

4.9 Accuracy of Model
Below are the confusion matrices of our model. A confusion matrix is a technique for summarizing the performance of a classification algorithm; classification accuracy alone can be misleading if there is an unequal number of observations in each class or if there are more than two classes in the dataset.

For Naive Bayes Classifier
Table 4.9.1: Confusion Matrix for Naïve Bayes (rows: input class; columns: output class, in the order Politics, Crime, Sports, Entertainment, Business, Life Style, Accident, National, International).
Politics 141 28 4 0 5 0 0 12 12
Crime 10 206 0 0 2 0 0 17 3
Sports 13 1 383 10 1 0 0 12 4
Entertainment 3 1 10 77 0 0 0 12 3
Business 5 2 0 1 95 0 0 13 0
Life Style 2 0 0 0 2 5 0 4 0
Accident 0 28 0 0 0 0 1 1 0
National 28 43 3 12 21 1 0 298 8
International 5 9 6 2 6 0 0 12 50

Successfully Classified:
Table 4.9.2: Naive Bayes classified news type.
No. News Type Successfully Classified
1 Political News 141
2 Crime News 206
3 Sports News 383
4 Entertainment News 77
5 Business News 95
6 Life Style News 5
7 Accidental News 1
8 National News 298
9 International News 50
Total 1256
Total News = 6530; Testing News (25%) = 1632.5
Accuracy of this model = (1256 / 1632.5) * 100 = 76.94%

Table 4.9.3: Precision, Recall, F1-Score for Naive Bayes.
Class Name Precision Recall F1-Score
Politics 0.68 0.70 0.69
Crime 0.65 0.87 0.74
Sports 0.94 0.90 0.92
Entertainment 0.75 0.73 0.74
Business 0.72 0.83 0.77
Life Style 0.83 0.38 0.53
Accident 1.00 0.03 0.06
National 0.78 0.72 0.75
International 0.62 0.56 0.59
Average / Total 0.78 0.77 0.76

For K-Nearest Neighbors
Table 4.9.4: Confusion Matrix for K-Nearest Neighbors (rows: input class; columns: output class, same order as Table 4.9.1).
Politics 97 26 41 2 0 0 0 22 10
Crime 8 126 85 0 0 0 1 17 1
Sports 1 4 404 8 0 0 0 6 1
Entertainment 0 2 31 53 0 0 0 20 0
Business 4 4 21 2 5 0 0 26 6
Life Style 1 0 1 0 1 5 0 5 0
Accident 1 11 13 0 0 0 5 0 0
National 16 30 123 12 9 1 5 211 7
International 1 8 40 3 2 0 0 12 24

Successfully Classified:
Table 4.9.5: K-Nearest Neighbors classified news type.
No. News Type Successfully Classified
1 Political News 97
2 Crime News 126
3 Sports News 404
4 Entertainment News 53
5 Business News 5
6 Life Style News 5
7 Accidental News 5
8 National News 211
9 International News 24
Total 930
Total News = 6530; Testing News (25%) = 1632.5
Accuracy of this model = (930 / 1632.5) * 100 = 56.97%

Table 4.9.6: Precision, Recall, F1-Score for K-Nearest Neighbors.
Class Name Precision Recall F1-Score
Politics 0.75 0.48 0.59
Crime 0.60 0.53 0.56
Sports 0.53 0.95 0.68
Entertainment 0.66 0.50 0.57
Business 0.77 0.46 0.57
Life Style 0.83 0.38 0.53
Accident 0.45 0.17 0.24
National 0.66 0.51 0.58
International 0.49 0.27 0.35
Average / Total 0.63 0.60 0.58

Figure 4.9.1: Error Rate of K Value.

For Decision Tree
Table 4.9.7: Confusion Matrix for Decision Tree (rows: input class; columns: output class, same order as Table 4.9.1).
Politics 77 38 22 0 5 0 1 56 3
Crime 18 146 16 0 5 0 4 49 0
Sports 5 4 379 3 4 0 0 26 3
Entertainment 2 3 57 25 0 0 0 19 0
Business 3 8 14 0 39 0 0 49 3
Life Style 1 0 2 0 2 5 0 3 0
Accident 0 16 1 0 0 0 1 10 2
National 10 59 61 4 5 0 3 267 4
International 16 11 27 1 3 1 0 22 20

Successfully Classified:
Table 4.9.8: Decision Tree classified news type.
No. News Type Successfully Classified
1 Political News 77
2 Crime News 146
3 Sports News 379
4 Entertainment News 25
5 Business News 39
6 Life Style News 5
7 Accidental News 1
8 National News 267
9 International News 20
Total 959
Total News = 6530; Testing News (25%) = 1632.5
Accuracy of this model = (959 / 1632.5) * 100 = 58.74%

Table 4.9.9: Precision, Recall, F1-Score for Decision Tree.
Class Name Precision Recall F1-Score
Politics 0.63 0.38 0.48
Crime 0.51 0.61 0.56
Sports 0.65 0.89 0.76
Entertainment 0.76 0.24 0.36
Business 0.62 0.34 0.44
Life Style 0.83 0.38 0.53
Accident 0.11 0.03 0.05
National 0.53 0.64 0.58
International 0.57 0.22 0.32
Average / Total 0.59 0.59 0.56

For Random Forest
Table 4.9.10: Confusion Matrix for Random Forest (rows: input class; columns: output class, same order as Table 4.9.1).
Politics 52 19 26 0 1 0 1 104 0
Crime 8 113 37 0 2 0 4 78 0
Sports 2 1 409 0 0 0 0 12 3
Entertainment 0 2 73 7 0 0 0 24 0
Business 9 3 12 0 20 0 0 72 0
Life Style 0 0 2 0 0 5 0 6 0
Accident 0 14 2 0 0 0 1 14 0
National 7 20 70 0 3 1 3 312 1
International 0 12 31 0 0 0 0 46 1

Successfully Classified:
Table 4.9.11: Random Forest classified news type.
No. News Type Successfully Classified
1 Political News 52
2 Crime News 113
3 Sports News 409
4 Entertainment News 7
5 Business News 20
6 Life Style News 5
7 Accidental News 1
8 National News 312
9 International News 1
Total 920
Total News = 6530; Testing News (25%) = 1632.5
Accuracy of this model = (920 / 1632.5) * 100 = 56.35%

Table 4.9.12: Precision, Recall, F1-Score for Random Forest.
Class Name Precision Recall F1-Score
Politics 0.67 0.26 0.37
Crime 0.61 0.47 0.54
Sports 0.62 0.96 0.75
Entertainment 1.00 0.07 0.12
Business 0.77 0.17 0.28
Life Style 0.83 0.38 0.53
Accident 0.00 0.00 0.00
National 0.47 0.75 0.58
International 0.50 0.01 0.02
Average / Total 0.50 0.56 0.50

For Support Vector Machine
Table 4.9.13: Confusion Matrix for Support Vector Machine (rows: input class; columns: output class, same order as Table 4.9.1).
Politics 127 19 6 0 6 0 1 32 11
Crime 18 172 2 0 2 0 7 31 6
Sports 6 1 395 0 0 1 0 10 2
Entertainment 1 0 15 76 1 1 0 9 3
Business 3 2 1 1 92 0 0 15 2
Life Style 2 1 0 0 1 5 0 4 0
Accident 0 13 0 0 0 0 13 3 1
National 24 26 6 5 29 2 3 305 14
International 3 11 8 1 3 0 0 18 46

Successfully Classified:
Table 4.9.14: Support Vector Machine classified news type.
No. News Type Successfully Classified
1 Political News 127
2 Crime News 172
3 Sports News 395
4 Entertainment News 76
5 Business News 92
6 Life Style News 5
7 Accidental News 13
8 National News 305
9 International News 46
Total 1231
Total News = 6530; Testing News (25%) = 1632.5
Accuracy of this model = (1231 / 1632.5) * 100 = 75.41%

Table 4.9.15: Precision, Recall, F1-Score for Support Vector Machine.
Class Name Precision Recall F1-Score
Politics 0.69 0.63 0.66
Crime 0.70 0.72 0.71
Sports 0.91 0.93 0.92
Entertainment 0.83 0.72 0.77
Business 0.69 0.79 0.74
Life Style 0.56 0.38 0.45
Accident 0.54 0.43 0.48
National 0.71 0.74 0.73
International 0.54 0.51 0.53
Average / Total 0.75 0.75 0.75

Compare Algorithms
Table 4.9.16: Compare precision of all classifiers.
Algorithm Precision
Naive Bayes 0.78
K-Nearest Neighbors 0.63
Decision Tree 0.59
Random Forest 0.60
Support Vector Machine 0.75

Table 4.9.17: Compare recall of all classifiers.
Algorithm Recall
Naive Bayes 0.77
K-Nearest Neighbors 0.60
Decision Tree 0.59
Random Forest 0.56
Support Vector Machine 0.75

Table 4.9.18: Compare F1-score of all classifiers.
Algorithm F1-Score
Naive Bayes 0.76
K-Nearest Neighbors 0.58
Decision Tree 0.56
Random Forest 0.50
Support Vector Machine 0.75

Table 4.9.19: Compare algorithm accuracy.
Algorithm Accuracy
Naive Bayes 76.94%
K-Nearest Neighbors 56.97%
Decision Tree 58.74%
Random Forest 56.35%
Support Vector Machine 75.41%

From the comparison tables above, we see that the Naïve Bayes classifier is the best in terms of precision, recall, F1-score and accuracy. Its average precision, recall and F1-score are 0.78, 0.77 and 0.76 respectively, and its accuracy of 76.94% is the highest among all the classifiers.

4.10 Summary
The highest results come from Naïve Bayes and Support Vector Machine, which is satisfying. To increase the accuracy level further, the dataset must be prepared properly: the news categories should contain roughly equal numbers of items. Moreover, there is no alternative to data cleaning for increasing accuracy; the more the data are pre-processed, the more accurate the predictions of these classifiers will be.

CHAPTER 5
SUMMARY, CONCLUSION, RECOMMENDATION AND IMPLICATION FOR FUTURE RESEARCH

5.1 Summary of the Study
There is no doubt that there are many research works on Natural Language Processing, especially on the English language. As the outcomes of such works bring revolutionary changes to our computing lives, this kind of research has recently been increasing, and we now enjoy some outstanding real-life applications thanks to it. It is a matter of great regret, however, that there has been little such research on the Bangla language, although it is encouraging that researchers from various countries have started working in this field. In our research work, we apply several approaches to classify Bangla news into categories.
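As a sanity check on the figures reported in Section 4.9, the accuracy of each classifier can be recomputed from its confusion matrix: the correctly classified news are the diagonal entries. A short plain-Python sketch for the Naive Bayes matrix, with the values copied from Table 4.9.1:

```python
# Accuracy = (sum of confusion-matrix diagonal) / (number of test news) * 100.
nb_matrix = [
    [141,  28,   4,  0,  5, 0, 0,  12, 12],  # Politics
    [ 10, 206,   0,  0,  2, 0, 0,  17,  3],  # Crime
    [ 13,   1, 383, 10,  1, 0, 0,  12,  4],  # Sports
    [  3,   1,  10, 77,  0, 0, 0,  12,  3],  # Entertainment
    [  5,   2,   0,  1, 95, 0, 0,  13,  0],  # Business
    [  2,   0,   0,  0,  2, 5, 0,   4,  0],  # Life Style
    [  0,  28,   0,  0,  0, 0, 1,   1,  0],  # Accident
    [ 28,  43,   3, 12, 21, 1, 0, 298,  8],  # National
    [  5,   9,   6,  2,  6, 0, 0,  12, 50],  # International
]
correct = sum(nb_matrix[i][i] for i in range(9))  # successfully classified news
testing_news = 6530 * 0.25                        # 25% of 6530 news items
accuracy = correct / testing_news * 100
print(correct, round(accuracy, 2))
```

The same computation over the other four matrices reproduces the remaining accuracies of Table 4.9.19.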
5.2 Conclusion
Although the accuracy of the classification algorithms used in our project is not yet very good, we have learnt a great deal from this research. We can now deal with Bangla text, pre-process raw data, and apply classifiers to our training dataset. We hope this will be very beneficial to future researchers doing this kind of research on Bangla text or Bangla news.

5.3 Recommendations
A few notable recommendations are as follows:
 Creating the dataset more carefully can produce a better outcome for this research work.

5.4 Implications for Further Study
 Adding more categories to this project can make it more efficient.
 Using more classifiers on this dataset can give a better understanding of which classifier is best for this work.

References
[1] Mansur, Mineral, "Analysis of n-gram based text categorization for Bangla in a newspaper corpus." Diss. BRAC University, 2006.
[2] Phani, Shanta, Shibamouli Lahiri, and Arindam Biswas, "A machine learning approach for authorship attribution for Bengali blogs." Asian Language Processing (IALP), 2016 International Conference on. IEEE, 2016.
[3] El-Barbary, O. G., "Arabic
news classification using field association words." SCIENCEDOMAIN Int 6.1 (1-9), 2016.
[4] Dutta, K., Kaushik, S., and Prakash, N., "Machine learning approach for the classification of demonstrative pronouns for Indirect Anaphora in Hindi news items." The Prague Bulletin of Mathematical Linguistics, 95, pp. 33-50, Apr 2011.
[5] Carreira, Ricardo, et al., "Evaluating adaptive user profiles for news classification." Proceedings of the 9th International Conference on Intelligent User Interfaces. ACM, 2004.
[6] Haque, Rejwanul, et al., "English-Hindi transliteration using context-informed PB-SMT: the DCU system for NEWS 2009." Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration. Association for Computational Linguistics, 2009.
[7] Asy'arie, Arni Darliani, and Adi Wahyu Pribadi, "Automatic news articles classification in Indonesian language by using naive bayes classifier method." Proceedings of the 11th International Conference on Information Integration and Web-based Applications & Services. ACM, 2009.
[8] Buana, Putu Wira, and I. Ketut Gede Darma, "Combination of k-nearest neighbor and k-means based on term re-weighting for classify Indonesian news." International Journal of Computer Applications 50.11, 2012.
[9] Kanan, Tarek, and Edward A. Fox, "Automated Arabic text classification with P-Stemmer, machine learning, and a tailored news article taxonomy." Journal of the Association for Information Science and Technology 67.11: 2667-2683, 2016.
[10] Kanan, Tarek, and Edward A. Fox, "Automated Arabic text classification with P-Stemmer, machine learning, and a tailored news article taxonomy." Journal of the Association for Information Science and Technology 67.11: 2667-2683, 2009.
[11] Ee, Chee-Hong Chan Aixin Sun, and Peng Lim, "Automated online news classification with personalization." 4th International Conference on Asian Digital Libraries, 2001.
[12] Dilrukshi, Inoshika, Kasun De Zoysa, and Amitha Caldera, "Twitter news classification using SVM." Computer Science & Education (ICCSE), 2013 8th International Conference on. IEEE, 2013.
[13] Selamat, Ali, Hidekazu Yanagimoto, and Sigeru Omatu, "Web news classification using neural networks based on PCA." SICE 2002, Proceedings of the 41st SICE Annual Conference, Vol. 4. IEEE, 2002.
[14] Kroha, Petr, and Ricardo Baeza-Yates, "A case study: News classification based on term frequency." Database and Expert Systems Applications, 2005, Proceedings, Sixteenth International Workshop on. IEEE, 2005.
[15] Kotsiantis, Sotiris B., I. Zaharakis, and P. Pintelas, "Supervised machine learning: A review of classification techniques." Emerging Artificial Intelligence Applications in Computer Engineering 160: 3-24, 2007.
[16] Billsus, Daniel, and Michael J. Pazzani, "A hybrid user model for news story classification." UM99 User Modeling. Springer, Vienna, 99-108, 1999.
[17] Xu, Jun, Yu-Xin Ding, and Xiao-Long Wang, "Sentiment classification for Chinese news using machine learning methods." Journal of Chinese Information Processing 21.6: 95-100, 2007.
[18] Kotsiantis, Sotiris B., I. Zaharakis, and P. Pintelas, "Supervised machine learning: A review of classification techniques." Emerging Artificial Intelligence Applications in Computer Engineering 160: 3-24, 2007.
[19] Masand, Brij, Gordon Linoff, and David Waltz, "Classifying news stories using memory based reasoning." Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 1992.

Appendix
Project Reflection
We faced many problems in completing this project. The first was determining the methodological approach: this was not traditional work but a research-based project, and not much work had been done in this area before, so we could not get much help from anywhere. Another problem was data collection, which was a big challenge for us. There was no available source from which we could get Bangla news text data, so we developed a corpus for data collection and also started collecting data manually. After a long time of hard work, we managed to do it.

Plagiarism Report Screenshot:
BANGLA LANGUAGE MODE (SADHU/CHOLITO) CLASSIFICATION
Abdul Bari Parves, ID: 151-15-4879, and Emranul Haque Rakib, ID: 151-15-5049
This report is presented in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering.
Supervised by: Md. Riazur Rahman, Senior Lecturer, Department of CSE, Daffodil International University
Co-supervised by: Zerin Nasrin Tumpa, Lecturer, Department of CSE, Daffodil International University
DAFFODIL INTERNATIONAL UNIVERSITY, DHAKA, BANGLADESH
MAY 2019

ACKNOWLEDGEMENT
First, we express our heartiest thanks and gratefulness to almighty Allah, whose divine blessing has made it possible for us to complete the final year project successfully. We are really grateful to, and wish to express our profound indebtedness to, Md. Riazur Rahman, Senior Lecturer, Department of CSE, Daffodil International University, Dhaka. The deep knowledge and keen interest of our supervisor in the field of natural language processing enabled us to carry out this project. His endless patience, scholarly guidance, continual encouragement, constant and energetic supervision, constructive criticism, valuable advice, and reading and correcting many inferior drafts at every stage have made it possible to complete this project. We would like to express our heartiest gratitude to Prof. Dr. Syed Akhter Hossain, Head, Department of CSE, for his kind help in finishing our project, and also to the other faculty members and staff of the CSE department of Daffodil International University. We would like to thank all of our course mates at Daffodil International University who took part in discussions while completing the course work. Finally, we must acknowledge with due respect the constant support and patience of our parents.
ABSTRACT
This project addresses the problem of distinguishing between two forms of the Bangla language, namely Sadhubhasha and Cholitobhasha. The classifier would be beneficial for finding the right word choice for Bangla literature. The main vision of this project is to differentiate the early modern form of Bangla, Sadhubhasha, from the current form, Cholitobhasha. As far as we know, no work has been done addressing this particular issue, and from another perspective, only a few works have been done on the Bangla language at all. It has therefore been difficult to conduct advanced linguistic work on Bangla, such as information extraction or summarization. We had to face difficulties in collecting Bangla data due to its limited availability, but we finally collected a dataset of around 100,000 words in total for this project, of which 80% is used for training and the remaining 20% as test data. The machine learning algorithms Random Forest, Naïve Bayes, Support Vector Machine, K-nearest neighbor and Decision Tree are applied to classify the language, and Term Frequency-Inverse Document Frequency and Bag of Words are used for the numerical representation. With these classifiers, 91% to 99.5% accuracy is observed. The promising outcome of this project is that the Sadhu and Cholito language classifier can be used as the first step on the ladder from which others will be influenced to do further research on the Bangla language.

TABLE OF CONTENTS
Board of Examiners
Declaration
Acknowledgment
Abstract
CHAPTER 1: INTRODUCTION
1.1 Introduction
1.2 Motivation
1.3 Research Questions
1.4 Expected Outcome
1.5 Layout of the Report
CHAPTER 2: BACKGROUND STUDY
2.1 Introduction
2.2 Related Works
2.3 Research Summary
2.4 Challenges
CHAPTER 3: RESEARCH METHODOLOGY
3.1 Introduction
3.2 Data Collection Procedure
3.3 Data Processing
3.4 Proposed Methodology
3.5 Statistical Analysis
3.6 Implementation Requirements
CHAPTER 4: EXPERIMENTAL RESULTS AND DISCUSSION
4.1 Introduction
4.2 Experimental Result
4.3 Descriptive Analysis
4.4 Summary
CHAPTER 5: SUMMARY AND CONCLUSION
5.1 Summary of the Study
5.2 Conclusion
5.3 Recommendations
5.4 Implications for Further Study
REFERENCES

LIST OF FIGURES
Fig 3.1: Stop words for Bangla
Fig 3.2: Stop words for Bangla
Fig 3.3: Random Forest algorithm
Fig 3.4: Support vector machine creating hyperplane to classify
Fig 3.5: Details of confusion matrix
Fig 3.6: Flow chart of proposed model
Fig 4.1: List of most used words in Sadhubhasha
Fig 4.2: List of most used words in Cholitobhasha
Fig 4.3: Image plot of most used words in Sadhubhasha
Fig 4.4: Image plot of most used words in Cholitobhasha

LIST OF TABLES
Table 2.1: Summary of the related works
Table 3.1: Format of dataset
Table 3.2: Data statistics
Table 3.3: Vocabulary scoring
Table 3.5: Sentence binary vector representation
Table 4.1: Result achieved from Random Forest algorithm
Table 4.2: Result achieved from Multinomial Naive Bayes algorithm
Table 4.3: Result achieved from Gaussian Naive Bayes algorithm
Table 4.4: Result achieved from Support Vector Machine algorithm
Table 4.5: Result achieved from K-nearest neighbor
algorithm
Table 4.6: Result achieved from Decision Tree algorithm

CHAPTER 1
INTRODUCTION

1.1 Introduction
Bengali is the sixth most commonly spoken language in the world; it originated and evolved from Sanskrit in 1000-1200 CE. The modern literary form of Bangla was developed during the 1800s and early 1900s based on the dialect spoken in the Nadia region, a west-central Bengali dialect. In the modern era, the Bengali language has two forms: Sadhubhasha and Cholitobhasha. Sadhubhasha was considered the proper form of the Bangla language, later taken as the form for writing novels, while Cholitobhasha is used for normal conversation. Over the years, rapid changes in tradition and culture may have given us many benefits, but they also come with some problems, such as language malformation. Nowadays, an erroneous desire to become modernized is affecting our Bangla language deleteriously through misuse. As a result, people are using neither Sadhubhasha nor Cholitobhasha; instead, we are becoming habituated to practicing a malformed Bangla language. Text classification is known as an important method for handling and processing the continuously increasing number of documents in digital form. Text classification is mainly used for information extraction, text retrieval, and summarization. This project will demonstrate the text classification process through machine learning techniques. In our project, to classify Sadhubhasha and Cholitobhasha, we first use the Term Frequency-Inverse Document Frequency and Bag of Words models to convert text documents into corresponding numerical features. In addition, Random Forest, Naive Bayes, Support Vector Machine, K-nearest neighbor, and Decision Tree classifiers are used to classify Sadhubhasha and Cholitobhasha. Our proposed method is expected to perform better than other methods used to classify Bangla text.

1.2 Motivation
Everything evolves with time, from lifestyle to culture and even language. In modern days, everything is stored in digital form and its popularity is increasing day by day. As Bangla is our first language, most of the data of our country is in Bangla. It is inevitable that data is the most powerful source of information, but to avail of it we need to extract the data. However, most tools and methods are created for English and a few other popular languages. Because of this deficiency, most of the data in Bangla cannot be extracted, and therefore this vast amount of data gets wasted. This current scenario has motivated us to work with our mother tongue, Bangla. We have decided to start our work with the two earliest forms of the Bangla language, Sadhubhasha and Cholitobhasha.

1.3 Research Questions
1. Is it possible to accurately classify the forms of the Bangla language, namely Sadhubhasha and Cholitobhasha?

1.4 Expected Outcome
• To classify Sadhubhasha and Cholitobhasha.
• To find out the most frequently used words in both forms of the Bengali language.
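Section 1.1 names two numerical representations: Bag of Words (raw term counts) and TF-IDF (counts reweighted by how rare a term is across documents). A minimal sklearn sketch of both; the two transliterated sentences are invented stand-ins for Sadhubhasha and Cholitobhasha lines from the real dataset.

```python
# Bag of Words vs TF-IDF on two toy sentences (hypothetical stand-ins).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

sentences = ["koriyachilam boliyachilam ami",   # Sadhubhasha-style stand-in
             "korechi bolechi ami"]             # Cholitobhasha-style stand-in

bow = CountVectorizer()                  # Bag of Words: raw term counts
X_bow = bow.fit_transform(sentences)

tfidf = TfidfVectorizer()                # TF-IDF: counts reweighted by rarity
X_tfidf = tfidf.fit_transform(sentences)

print(sorted(bow.vocabulary_))           # shared vocabulary over both sentences
print(X_bow.shape, X_tfidf.shape)        # one row per sentence, one column per word
```

The verb forms unique to each register get distinct columns, which is exactly what lets a downstream classifier separate the two language modes.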
1.5 Layout of the Report
This report is organized as follows:
• Chapter One includes the introduction to the project, motivation, research questions, and expected outcome.
• Chapter Two includes the background study, related works, research summary, and challenges.
• Chapter Three includes the research methodology.
• Chapter Four includes the experimental results and discussion.
• Chapter Five includes the summary and conclusion.

CHAPTER 2
BACKGROUND STUDY

2.1 Introduction
In this section, we review some related works, give a research summary, and discuss the challenges of our research. In the related works section, we explain other research papers and their work, methods, and accuracy where they relate to our own. In the research summary section, we summarize the related works. In the challenges section, we discuss how we increased our accuracy.

2.2 Related Works
Abu Nowshed Chy, Md. Hanif Seddiqui and Sowmitra Das proposed a method to classify Bangla news. They used the Naive Bayes classifier to classify news from news articles. They also used an RSS crawler for data collection, then built a Bangla lexicon and a Bengali stemmer, and finally ran the Naive Bayes classifier [1]. In another project, Andrew McCallum and Kamal Nigam discussed and compared different models of the Naive Bayes classifier: the multi-variate Bernoulli model and the multinomial model. Each model performs differently with the variation and size of the data: on a few datasets the Bernoulli model showed good performance, especially on small datasets, while the multinomial model performed well on large-scale datasets [2]. To classify text, Andronicus A. Akinyelu and Aderemi O. Adewumi proposed a new method in 2014. As people get hacked every day through phishing emails, they used the machine learning algorithm Random Forest in their study; the result was impressive, with an accuracy rate of 99.7% [3]. Baoxun Xu, Xiufeng Guo, Yunming Ye and Jiefeng Cheng had earlier proposed an improved Random Forest algorithm for text categorization, based on a feature weighting method and a tree selection method. With the new feature weighting method for subspace sampling and the tree selection method, they effectively reduce the subspace size and improve classification performance without increasing the error bound. They experimented on six datasets, and around 70-90% accuracy was achieved by their improved Random Forest algorithm [4]. Recently, in 2018, Suresh Merugu, M. Chandra Shekhar Reddy, Ekansh Goyal and Lakshay Piplani proposed a supervised machine learning approach for classifying text messages. They used many supervised algorithms such as SVM, Random Forest, K-Nearest Neighbor and BernoulliNB. K-Nearest Neighbor performed worst among all of them, while Random Forest and BernoulliNB had the best accuracy, almost 98% [5]. In another previous study, M. Ikonomakis, S. Kotsiantis and V. Tampakas discussed several machine learning techniques for text classification. They described in detail how an algorithm works, how a dataset should be prepared, and how it should be pre-processed; they also outlined the result evaluation [6]. Timothy P. Jurka, Loren Collingwood, Amber E.
Boydstun, Emiliano Grossman, and Wouter van Atteveldt discussed about a new tool called RTextTools which are used in text classification for beginners. By using RTextTools one could classify any text only through 10 easy steps. From training to result evaluation by RTextTools are discussed in this paper [7]. 2.3 Research Summary Table 2.1: Summary of the related works SL Author Methodology Description Outcome 1. Abu Nowshed Chy, Md. Hanif Seddiqui, Sowmitra Das naive Bayes classifier Classifying Bangla news 78% 2. Andrew McCallum and Kamal Nigam Multi-variate Bernoulli Model and Multinomial Model Comparison between Multi-variate Bernoulli Model and Multinomial Model. Multinomial Model performed 4.8% better Multi-variate Bernoulli Model 3. Andronicus A. Akinyelu and Aderemi O. Adewumi Random Forest Classifying phishing email from emails. 99.7% 4. Baoxun Xu, Xiufeng Guo, Yunming Ye and Jiefeng Cheng Weighting method and tree selection method an improvement New feature weighting method for subspace sampling and tree selection method, they 70-90% for six different datasets. ©Daffodil International University for random forest algorithm. effectively reduce subspace size and improve classification performance without increasing error bound 5. Suresh Merugu, M. Chandra Shekhar Reddy, Ekansh Goyal and Lakshay Piplani SVM, Random Forest, K Nearest Neighbor and BernoulliNB Took a dataset of 5000 messages used 90% of them a training and rest for testing. Used</s>
different supervised machine learning algorithms for classification. Outcome: SVM and Random Forest achieved 98% accuracy and BernoulliNB 97.6%.
6. M. Ikonomakis, S. Kotsiantis and V. Tampakas. Methodology: different machine learning algorithms. Description: discussed different machine learning algorithms and techniques. Outcome: a discussion of the algorithms.
7. Timothy P. Jurka, Loren Collingwood, Amber E. Boydstun, Emiliano Grossman, and Wouter van Atteveldt. Methodology: RTextTools. Description: discussed RTextTools, with which one can easily classify text in ten steps. Outcome: a discussion of RTextTools.

2.4 Challenges

The main challenge of our project was not only collecting a huge amount of data but also making sure that the data was in its purest form, because we are working on two different forms of one language. The data we collected therefore had to be kept separate as Sadhu and Cholito.

CHAPTER 3
RESEARCH METHODOLOGY

3.1 Introduction

In this chapter we discuss our data collection procedure, data processing, proposed methodology, statistical analysis and implementation requirements. First, in the data collection procedure we describe how we collected our data. Next, in the data processing part, we explain how we pre-processed it for our model. Then, in the proposed methodology, we briefly describe the algorithms and methods used for this classification. In the statistical analysis we highlight a few statistical methods and flow charts of the project. Finally, the chapter closes with a clear account of what we used for the project.

3.2 Data Collection Procedure

The constant evolution of culture has a great impact on the Bangla language. Over the years we have adopted many words from other languages such as English, Hindi, Urdu, Persian, Dutch and Portuguese. At present we are not just adopting new words but have also deformed our language considerably. Therefore, finding or creating a pure dataset that contains only Cholitobhasha or only Sadhubhasha is challenging.
We were aware that the proposed study needed natural raw data in the two Bangla forms. So we decided to collect data from Bangla novels written in Sadhubhasha and in Cholitobhasha. Famous Bangla literature by Sarat Chandra Chattopadhyay, Bankim Chandra Chattopadhyay, Syed Mujtaba Ali and Humayun Ahmed was used to form our dataset. We collected these data from books and from different websites dedicated to Bangla literature, where most Bangla writers' books can be found.

3.3 Data Processing

First, we collected the data in .docx format. Then we processed a raw dataset of more than 100,000 words. These raw data consisted of Sadhubhasha and Cholitobhasha kept separately. For training, we transferred the data to an .xlsx file where we categorized the data into the two classes Sadhubhasha and Cholitobhasha. Every line of the dataset consisted of two columns, text and class.

Data Format and Statistics

Data Format:

Table 3.1: Format of the data set (columns: Text, Class)

শক্তিশশল বুশে পক্তিবার সময় লক্ষ্মশের মশুের ভাব ক্তিশ্চয় েুব োরাপ হইয়া ক্তিয়াক্তিল ক্তেন্তু গুরুচরশের চচহারাটা চবাধ েক্তর তার চচশয়ও মন্দ চেোইল যেি প্রতয ূশেই অন্তঃপুর হইশত সংবাে চপাোঁক্তিল িৃক্তহেী এইমাত্র ক্তিক্তবিশে পঞ্চম েিূার জন্মোি
েক্তরয়াশিি চিাটশবলায় আক্তম এেবার চমহমািী উৎসশব ক্তিশয়ক্তিলাম আজোলোর পাঠে পাঠিোরা চমহমািী শব্দটার সশে পক্তরক্তচত ক্তে িা জাক্তি িা োশজই এেটু বূােূা েশর চিই আশিোর আমশল ক্তবত্তবাি চলােশের এেটা প্রবেতা ক্তিল তাশের ক্তবশত্তর ক্তবেয় অিূশের জািাশিা রাজা বলল ভাল েবর চমম সাব আক্তম লটারী ক্তজশতক্তি চমম সাব আমার টিক্তেশট ফার্স্ি প্রাইজ উশঠশি হূাোঁ চমম সাব ক্তবশ লাে টাো রােী ক্তবস্মশয় চচাে বি বি েশর বলল সক্ততূ রাজা বলল সক্ততূ রােী বলল েুব ভাল েথা পাবিতী োশি আক্তসয়া বক্তসল আোঁচশল যাহা বাোঁধা ক্তিল তৎক্ষোৎ চেবোশসর চশক্ষ পক্তিল চোি েথা ক্তজজ্ঞাসা িা েক্তরয়া চস তাহা েুক্তলয়া োইশত আরম্ভ েক্তরয়া েক্তহল পারু পক্তিতমশাই ক্তে বলশল চর?জূাঠামশাশয়র োশি বশল ক্তেশয়শচ

Where, 0 = Sadhubhasha and 1 = Cholitobhasha.

Data Statistics:

Table 3.2: Data statistics
Number of instances / Class
500 / Sadhubhasha
501 / Cholitobhasha

We used a total of 1001 paragraphs, or instances, for our dataset, of which 500 were Sadhubhasha and the rest Cholitobhasha.

Data Pre-Processing

After data collection we needed to preprocess the data. We removed the punctuation, brackets and stopwords so that we could reach maximum accuracy while training the model. Preprocessing was done in two parts: denoising and normalization.

Denoising

Denoising is a process that removes any HTML tags and brackets that may have been gathered with the dataset; this generally happens when data is scraped from different websites. The pseudo code for denoising is:

1. Import the regular expression and string libraries
2. Import the Beautiful Soup library
3. Define a function using Beautiful Soup: soup = BeautifulSoup(text.strip(), "lxml")
4. Define a function for brackets: return re.sub(r'\[[^]]*\]', '', text)

Normalization

Data normalization is a process by which data attributes are organized in a data model or dataset. Normalization increases data consistency, reduces or eliminates data redundancy, and also helps with object-to-data mapping.
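The denoising steps above can be sketched as a runnable Python snippet (a minimal sketch: BeautifulSoup with the "lxml" parser is replaced here by a plain regular expression so the snippet needs no third-party packages, and the sample string is illustrative):

```python
import re

def strip_html(text: str) -> str:
    # Stand-in for BeautifulSoup(text.strip(), "lxml"):
    # drop anything that looks like an HTML tag.
    return re.sub(r'<[^>]+>', '', text.strip())

def remove_brackets(text: str) -> str:
    # Remove square-bracketed fragments, e.g. citation markers.
    return re.sub(r'\[[^]]*\]', '', text)

def denoise(text: str) -> str:
    return remove_brackets(strip_html(text))

print(denoise('<p>sample [1] text</p>'))
```

In the actual project the HTML stripping would be done with BeautifulSoup as in the pseudo code; the regular expression above only approximates that behaviour.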
For our dataset we used two functions, one for removing punctuation and the other for stop words. Stop words are words that are filtered out before or after processing natural language data; they are normally the most common words of a language. For our Bangla text classification we created a list of the stop words to eliminate; the list is shown below.

Fig 3.1: Stop words for Bangla
Fig 3.2: Stop words for Bangla

We kept these words in a text file which we read as input when normalizing the data. The pseudo code for normalization is:

1. Import the string and regular expression libraries
2. Define a function for punctuation
3. sentence = re.sub(r'’|‘|।', '', sentence)
4. Run an if statement and return sentence.translate(str.maketrans('', '', string.punctuation))
5. Define a function for stop words
6. with open('stopwords-bn.txt', 'r', encoding='utf8', errors='ignore') as f: bn_stopwords = f.read().split()
7. Run a for loop and join the words without the stop words

3.4 Proposed Methodology

Methodology

Text classification can be done in a few different ways. For automatic text classification three approaches are widely recognized: rule based, machine learning based and hybrid systems.
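The normalization pseudo code above can be fleshed out as follows (a minimal sketch: the stop-word set is passed in directly instead of being read from stopwords-bn.txt, and the English sample sentence and stop word are placeholders for the Bangla ones in Fig 3.1-3.2):

```python
import re
import string

def remove_punctuation(sentence: str) -> str:
    # Drop the Bangla danda and curly quotes, then ASCII punctuation.
    sentence = re.sub(r'’|‘|।', '', sentence)
    return sentence.translate(str.maketrans('', '', string.punctuation))

def remove_stopwords(sentence: str, stopwords: set) -> str:
    # Join the remaining words, skipping every stop word.
    return ' '.join(w for w in sentence.split() if w not in stopwords)

def normalize(sentence: str, stopwords: set) -> str:
    return remove_stopwords(remove_punctuation(sentence), stopwords)

print(normalize('the king, said: good news!', {'the'}))
```

In the project itself the stop-word set would be loaded once from the text file, exactly as step 6 of the pseudo code describes.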
For our project we chose the machine learning approach. We first converted our data to vectors using Bag of Words. As Bag of Words has a few drawbacks, we then used TF-IDF for the numeric representation of the Bangla dataset. After that, we split our dataset into two parts, training and testing, and tested different classifier algorithms for training and classification.

Bag of Words

BoW, or Bag of Words, is a way of extracting features from text for machine learning algorithms. A Bag of Words is essentially a representation of text that describes the occurrence of words within a document. It involves two things:

1. A vocabulary of known words.
2. A measure of the presence of those known words.

It is called a "bag" of words because any information about the structure or order of words in the document is discarded: the model is only concerned with whether the known words occur in the document, not where. "A very common feature extraction procedure for sentences and documents is the bag-of-words approach (BOW). In this approach, we look at the histogram of the words within the text, i.e. considering each word count as a feature." The bag-of-words model can be as simple or as complex as the dataset and researcher require; the complexity comes both in deciding how to design the vocabulary of known words or tokens and in how to score the presence of those words or tokens. As an example, take the line

রাজা বলল ভাল েবর চমমসাব আক্তম লটারী ক্তজশতক্তি চমমসাব

BoW first collects the vocabulary. For this line the vocabulary is

রাজা, বলল, ভাল, েবর, চমমসাব, আক্তম, লটারী, ক্তজশতক্তি

After collecting all vocabularies BoW will score all the words.
With binary scoring, every vocabulary word that appears in the line receives a score of 1:

Table 3.3: Vocabulary scoring
রাজা: 1, বলল: 1, ভাল: 1, েবর: 1, চমমসাব: 1, আক্তম: 1, লটারী: 1, ক্তজশতক্তি: 1

So the binary vector representation of the line would be:

Table 3.4: Sentence binary vector representation
[1 1 1 1 1 1 1 1]

TF-IDF

TF-IDF stands for term frequency-inverse document frequency. The TF-IDF weight is a statistical measure normally used to evaluate how important a word is to a document in a dataset. The importance increases proportionally to the number of times a word appears in the document, but it is offset by the frequency of the word across the dataset. The bag of words approach works well for converting text into numerical form, but it has a drawback: it does not take into account that a word may also have a high frequency of occurrence in other documents. TF-IDF handles this issue by multiplying the term frequency of a word by its inverse document frequency. The term frequency is calculated as:

Term frequency = (number of occurrences of a word) / (total words in the document) (1)
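The term frequency above and the inverse document frequency defined next can be combined into a small TF-IDF computation (a minimal sketch over a toy English corpus; note the IDF here follows the plain ratio of equation (2), without the logarithm that many libraries apply):

```python
def term_frequency(word, document):
    # Equation (1): occurrences of the word / total words in the document.
    words = document.split()
    return words.count(word) / len(words)

def inverse_document_frequency(word, corpus):
    # Equation (2): total documents / documents containing the word.
    containing = sum(1 for doc in corpus if word in doc.split())
    return len(corpus) / containing

def tf_idf(word, document, corpus):
    return term_frequency(word, document) * inverse_document_frequency(word, corpus)

corpus = ['the king won the lottery',
          'the queen said very well',
          'good news said the king']
print(tf_idf('king', corpus[0], corpus))
```

Here 'king' occurs once in the five-word first document (TF = 0.2) and in two of the three documents (IDF = 1.5), giving a TF-IDF weight of 0.3.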
And the inverse document frequency is calculated as:

IDF(word) = (total number of documents) / (number of documents containing the word) (2)

Classification Algorithms

Random Forest

Random forest is a supervised machine learning algorithm mostly used for classification and regression. Random forest is an ensemble learning algorithm, meaning it uses multiple learners to obtain better predictive performance. Random forest works in a very intuitive way: as the name suggests, it creates a forest, generating many decision trees and then merging them to obtain high accuracy.

Fig 3.3: Random Forest algorithm

Random forest applies the general technique of bootstrap aggregating, or bagging, to train the trees on the dataset. After training, the prediction for an unseen sample x' is obtained by averaging the predictions of all the individual regression trees on x':

f̂ = (1/B) Σ_{b=1}^{B} f_b(x') (3)

or by taking the majority vote in the case of classification trees. The standard deviation of the predictions of all the individual regression trees on x' is:

σ = sqrt( Σ_{b=1}^{B} (f_b(x') − f̂)² / (B − 1) ) (4)

where B is the number of trees and f_b is the b-th regression tree.

Naive Bayes classifier

Naive Bayes classifiers are a group of probabilistic supervised machine learning algorithms based on applying Bayes' theorem with the naive assumption of conditional independence between every pair of features. Bayes' theorem:

P(y | x_1, ..., x_n) = P(y) P(x_1, ..., x_n | y) / P(x_1, ..., x_n) (5)

where y is the given class variable and x_1, ..., x_n is the dependent feature vector. For our dataset we used two Naive Bayes classifiers, Gaussian Naive Bayes and Multinomial Naive Bayes.

Gaussian Naive Bayes

When working with continuous data, the assumption is made that the continuous values associated with each class follow a Gaussian distribution.
The equation for it is:

P(x_i | y) = (1 / sqrt(2π σ_y²)) exp( −(x_i − μ_y)² / (2σ_y²) ) (6)

Multinomial Naive Bayes

The multinomial naive Bayes algorithm assumes the data follow a multinomial distribution. This algorithm is widely used for text classification. The equation is:

p(x | C_k) = ((Σ_i x_i)! / Π_i x_i!) Π_i p_{ki}^{x_i} (7)

Decision Tree

A decision tree classifier is a non-parametric supervised machine learning algorithm which uses a decision tree to predict. A decision tree predicts the value of a variable by learning decision rules extracted from the data features. Decision trees are mainly used for classification and regression; for our work we only use the classification part. The mathematical formulation for classification is:

p_mk = (1 / N_m) Σ_{x_i ∈ R_m} I(y_i = k) (8)

where m is the node, R_m the region and N_m the number of observations. The measures of impurity are

Gini: H(X_m) = Σ_k p_mk (1 − p_mk) (9)

Entropy: H(X_m) = −Σ_k p_mk log(p_mk) (10)

and Misclassification: H(X_m) = 1 − max(p_mk) (11)

where X_m represents the training data in node m.

Support Vector Machines (SVMs)

Support vector machines are a group of supervised machine learning algorithms widely used for classification and regression. Support vector machine algorithms are very efficient in high dimensional spaces and are also very memory efficient. For our text classification
we used the SVC method. To classify, a support vector machine creates a hyperplane in a high dimensional space, which allows us to achieve a more efficient and accurate classification result.

Fig 3.4: Support vector machine creating a hyperplane to classify

k-nearest neighbors

K-nearest neighbors is a non-parametric supervised algorithm used for classification and regression. K-nearest neighbors works on similarity and is instance based: it does not build a model but stores the instances of the training data. A query point is assigned to the class that has the most representatives among the nearest neighbors of that point, as determined by a distance function. If K = 1, the case is simply assigned to the class of its nearest neighbor. The distance functions are:

Euclidean = sqrt( Σ_{i=1}^{k} (x_i − y_i)² ) (12)

Manhattan = Σ_{i=1}^{k} |x_i − y_i| (13)

Minkowski = ( Σ_{i=1}^{k} |x_i − y_i|^q )^{1/q} (14)

All these functions work well with continuous variables. For categorical variables we need to use the Hamming distance:

D_H = Σ_{i=1}^{k} |x_i − y_i| (15)

x = y ⇒ D = 0
x ≠ y ⇒ D = 1

The best value of K is the largest value that reduces the overall noise.

Evaluation Metrics

Confusion matrix

A confusion matrix is a table that describes the performance of a classification model on a dataset. There are four elements in a confusion matrix: TP or True Positive, TN or True Negative, FP or False Positive and FN or False Negative.

Fig 3.5: Details of the confusion matrix

True Positives (TP) - correctly predicted positive values: the actual class is yes and the predicted class is also yes. True Negatives (TN) - correctly predicted negative values: the actual class is no and the predicted class is also no.
False positives and false negatives appear when the actual class contradicts the predicted class. False Positives (FP) - the actual class is no but the predicted class is yes. False Negatives (FN) - the actual class is yes but the predicted class is no. Beyond these, there are a few more terms we need to understand:

Accuracy

Accuracy is the most direct performance measure: the ratio of correctly predicted instances to the total number of instances. Accuracy is a good measure only for symmetric datasets, where the numbers of false positives and false negatives are almost the same. That is why we also have to look at other parameters to evaluate the performance of our model.

Accuracy = (TP + TN) / (TP + FP + FN + TN) (16)

Precision

Precision is the ratio of correctly predicted positive instances to all predicted positive instances. The question this metric answers is: of all instances that were labeled
as Sadhubhasha or Cholitobhasha, how many are actually Sadhubhasha or Cholitobhasha instances? High precision means a low false positive rate.

Precision = TP / (TP + FP) (17)

Recall

Recall is also known as sensitivity. Recall is the ratio of correctly predicted positive instances to all instances in the actual class. The question recall answers is: of all the true instances of Sadhubhasha and Cholitobhasha, how many did we label correctly?

Recall = TP / (TP + FN) (18)

F1 score

The F1 score is the harmonic mean of precision and recall, so it takes both false positives and false negatives into account. It is not as easy to interpret as accuracy, but F1 is usually more useful than accuracy.

F1 score = 2 × (Recall × Precision) / (Recall + Precision) (19)

Now that we understand the terms, let us look at the results of the models we used for classifying Sadhubhasha and Cholitobhasha.

3.5 Statistical analysis

In our dataset we took 1001 instances: 500 of them are Sadhubhasha and 501 are Cholitobhasha. Together these instances contain over 100,000 Bangla words. We used 80%, almost 80,000 words, for training, and the remaining 20,000 words were kept for testing.

Fig 3.6: Flow Chart of Proposed Model

3.6 Implementation Requirements

After reviewing all the necessary statistical and theoretical concepts and methods, we created a list of the hardware, software and development tools needed for classifying Sadhubhasha and Cholitobhasha:

Hardware/Software Requirements
• Operating System (Windows 7 or above)
• RAM (more than 4 GB)
• Web Browser (preferably Chrome)

Developing Tools
• Python 3.7
• Anaconda
• Jupyter Notebook
• NLTK
• Pandas
• NumPy

CHAPTER 4
EXPERIMENTAL RESULTS AND DISCUSSION

4.1 Introduction

In chapter four we discuss the descriptive analysis of our project.
We state our experimental results and close the chapter with a summary of the results.

4.2 Experimental Results

To measure the effectiveness and accuracy of the algorithms we chose the metrics precision, recall, F1 score and support. These metrics help us understand which algorithms classify Bangla text most accurately.

Random forest classifier: Random forest is a supervised machine learning algorithm that applies bootstrap aggregating, or bagging, to train on the data. The accuracy we achieved with random forest is 98.5%. The detailed result:

Confusion matrix = [[97 3] [0 101]]

Here, 0 = Sadhubhasha and 1 = Cholitobhasha.

Table 4.1: Result achieved with the random forest algorithm
Metric: class 0, class 1
Precision: 1.00, 0.97
Recall: 0.97, 1.00
F1 score: 0.98, 0.99
Support: 100, 101

Naive Bayes

We implemented two naive Bayes classifiers: Gaussian Naive Bayes and Multinomial Naive Bayes.

Multinomial Naive Bayes

With Multinomial Naive Bayes we achieved an accuracy of 99.5%.

Confusion matrix = [[99 1] [0 101]]

Table 4.2: Result achieved with the Multinomial Naive Bayes algorithm
Metric: class 0, class 1
Precision: 1.00, 0.99
Recall: 0.99,
1.00
F1 score: 0.99, 1.00
Support: 100, 101

Gaussian Naive Bayes

With Gaussian Naive Bayes we achieved an accuracy of 91.04%.

Confusion matrix = [[88 12] [6 95]]

Table 4.3: Result achieved with the Gaussian Naive Bayes algorithm
Metric: class 0, class 1
Precision: 0.94, 0.89
Recall: 0.88, 0.94
F1 score: 0.91, 0.91
Support: 100, 101

Support Vector Machine

We used the SVC (support vector classifier) method of the Support Vector Machine algorithm. It achieved 97.014% accuracy.

Confusion matrix = [[94 6] [0 101]]

Table 4.4: Result achieved with the Support Vector Machine algorithm
Metric: class 0, class 1
Precision: 1.00, 0.94
Recall: 0.94, 1.00
F1 score: 0.97, 0.97
Support: 100, 101

K-nearest neighbor

With the K-nearest neighbor algorithm we achieved an accuracy of 94.52%.

Confusion matrix = [[96 4] [7 94]]

Table 4.5: Result achieved with the K-nearest neighbor algorithm
Metric: class 0, class 1
Precision: 0.93, 0.96
Recall: 0.96, 0.93
F1 score: 0.95, 0.94
Support: 100, 101

Decision Tree

With the decision tree algorithm we achieved an accuracy of 91.54%.

Confusion matrix = [[91 9] [8 93]]

Table 4.6: Result achieved with the decision tree algorithm
Metric: class 0, class 1
Precision: 0.92, 0.91
Recall: 0.91, 0.92
F1 score: 0.91, 0.92
Support: 100, 101

From the result tables we can see that all algorithms achieved good accuracy. For our Bangla data, Multinomial Naive Bayes and Random Forest achieved the highest accuracies, 99.5% and 98.5% respectively. Support Vector Machine and K-nearest neighbor also performed well, with accuracies of 97% and 94% respectively. Decision tree performed well but had the lowest accuracy among all the algorithms.

4.3 Descriptive Analysis

In this section we discuss the results of our proposed method further. We also look at a word cloud analysis for the project and, finally, a graphical representation of the most used words in our dataset.
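The per-class numbers in the tables above follow directly from each confusion matrix. A minimal pure-Python sketch, checked here against the random forest matrix [[97 3] [0 101]] from Table 4.1:

```python
def metrics_from_confusion(cm):
    """cm[i][j] = number of instances of true class i predicted as class j."""
    n = len(cm)
    results = {}
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n) if i != k)  # predicted k, true other
        fn = sum(cm[k][j] for j in range(n) if j != k)  # true k, predicted other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        results[k] = {'precision': precision, 'recall': recall,
                      'f1': f1, 'support': sum(cm[k])}
    return results

# Random forest confusion matrix (0 = Sadhubhasha, 1 = Cholitobhasha)
rf = [[97, 3], [0, 101]]
m = metrics_from_confusion(rf)
print({k: {name: round(v, 2) for name, v in row.items()} for k, row in m.items()})
```

Rounded to two decimals this reproduces Table 4.1: precision 1.00/0.97, recall 0.97/1.00, F1 0.98/0.99, support 100/101, and the overall accuracy (97 + 101) / 201 is the reported 98.5%.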
Word cloud Analysis

This is a process for finding the most used words in Sadhubhasha and Cholitobhasha separately. It is a simple way to identify the words that helped us classify Sadhubhasha and Cholitobhasha. The 20 most used words in Sadhubhasha and in Cholitobhasha from our dataset are listed below.

Fig 4.1: List of most used words in Sadhubhasha
Fig 4.2: List of most used words in Cholitobhasha

If we plot these words in an image, scaled by frequency, we can easily see which words are repeated or used most in Sadhubhasha and Cholitobhasha in our dataset.

Fig 4.3: Image plot of most used words in Sadhubhasha
Fig 4.4: Image plot of most used words in Cholitobhasha

4.4 Summary

After
representing the text with Bag of Words and Term Frequency-Inverse Document Frequency vectors, we trained different classifier algorithms on our research dataset, drawn from different Bangla novels, and classified it into Sadhubhasha and Cholitobhasha. We achieved between 91% and 99.5% accuracy with the different algorithms. To increase the accuracy further, the dataset would need to be prepared more carefully, and it would also need to grow for statistical significance. More thoroughly preprocessed data would likewise help achieve higher accuracy in classifying Sadhubhasha and Cholitobhasha.

CHAPTER 5
SUMMARY, CONCLUSION, RECOMMENDATION AND IMPLICATION FOR FUTURE RESEARCH

5.1 Summary of the Study

Our main goal was to build a model to classify the two forms of the Bangla language, Sadhubhasha and Cholitobhasha. We took a machine learning approach: we vectorized the Bangla data and then split it into training and test sets. Following that, we implemented machine learning algorithms such as Random Forest, Naive Bayes, Support Vector Machine, Decision Tree and K-nearest neighbor. Each of these algorithms performed well: Random Forest achieved 98.5% accuracy, Multinomial Naive Bayes 99.5%, Support Vector Machine 97%, K-nearest neighbor 94.5% and Decision Tree 91%.

5.2 Conclusions

The accuracy achieved with these classifier algorithms is significant, and indicates that our proposed method performed better than other methods used for Bangla text classification. In the course of the research we learned a great deal about natural language processing and machine learning: we can now preprocess data and train a model for classifying text documents.
We hope this will also help further research on the Bangla language and in the text classification area.

5.3 Recommendations

A few recommendations for Bangla text classification:
1. Create a large dataset for high accuracy.
2. Remove words of other languages written in Bangla script for better accuracy.
3. Find and list all the stop words; this will also help to increase the accuracy.

5.4 Implication for Further Study

A few possibilities for further studies:
1. Adding more categories, such as combined Sadhubhasha and Cholitobhasha data, could make the project more efficient.
2. Using more classifier algorithms on this dataset would give a better understanding of which classifier performs best and gives the highest accuracy.

REFERENCES

[1] Abu Nowshed Chy, Md. Hanif Seddiqui, Sowmitra Das, "Bangla News Classification using Naive Bayes Classifier," 16th Int'l Conf. Computer and Information Technology, 8-10 March 2014, Khulna, Bangladesh.
[2] Andrew McCallum and Kamal Nigam, "A Comparison of Event Models for Naive Bayes Text Classification," published 1998.
[3] Andronicus A. Akinyelu and Aderemi O. Adewumi, "Classification of Phishing Email Using Random Forest Machine Learning Technique," Journal of Applied Mathematics, Volume 2014, Article ID 425731, 6 pages.
[4] Baoxun Xu, Xiufeng Guo, Yunming Ye and Jiefeng
<s>Cheng " An Improved Random Forest Classifier for Text Categorization," journal of computers, vol. 7, no. 12, December 2012. [5].Suresh Merugu, M. Chandra Shekhar Reddy, Ekansh Goyal and Lakshay Piplani, "Text Message Classification Using Supervised Machine Learning Algorithms," International Conference on Communications and Cyber Physical Engineering 2018, ICCCE 2018: ICCCE 2018 pp 141-150. [6].M. Ikonomakis, S. Kotsiantis and V. Tampaka ," Text Classification Using Machine Learning Techniques," WSEAS TRANSACTIONS on COMPUTERS, Issue 8, Volume 4, August 2005, pp. 966-974. [7].Timothy P. Jurka, Loren Collingwood, Amber E. Boydstun, Emiliano Grossman, and Wouter van Atteveldt," RTextTools: A Supervised Learning Package for Text Classificatio" The R Journal Vol. 5/1, June ISSN 2073-4859. ©Daffodil International University Plagiarism Report</s>
2017 20th International Conference of Computer and Information Technology (ICCIT), 22-24 December, 2017

Question Classification Using Support Vector Machine with Hybrid Feature Extraction Method

Syed Mehedi Hasan Nirob, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh, smh.nirob@gmail.com
Md. Kazi Nayeem, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh, masum.nayeem@gmail.com
Md. Saiful Islam, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh, saiful-cse@sust.edu

Abstract—This paper presents an approach to categorizing Bangla language questions into some predefined coarse-grained categories that represent the expected answer type of a particular question. A support vector machine was used with different kernel functions to increase the accuracy of the existing Bangla question classification system. Both a predefined feature set and the stream of unigrams based on frequency in the data set were considered to build the feature matrix. For five-fold cross validation, an average 89.14% accuracy was achieved using the 380 most frequent words as features, which outperformed the existing single-model-based Bangla question classification system. For the same cross validation, 88.62% accuracy was achieved with a combination of wh-word, wh-word position and question length as the feature set.

Index Terms—Question Classification, SVM, Question Taxonomy, Feature Extraction, Kernel Function, Wh-word.

I. INTRODUCTION

Question classification is the task of classifying questions into some predefined classes which reflect the expected answer type of those questions. These semantic answer categories can also suggest different question processing strategies. For example, the question "Who wrote the national anthem of Bangladesh?" asks for a person name, and the task of a classification system is to tag this question as person.
If we find a sentence that has the answer to this question, then named entity recognition on that sentence can reveal the exact answer. That is why question classification is important. Question classification is an influential part of a question answering system [1]. A question answering system finds the most relevant answer to a question asked by a user from a large set of documents. This task is challenging because the questions are asked in natural language and in many cases don't follow grammar rules [2]. And with a large amount of data, the search space for question answering is also huge. But knowing the expected answer type can help us reduce the search space by a considerable amount [3]. The Text Retrieval Conference question answering track has introduced different QA models with varying performance. These models use different QA frameworks with some form of question classification module.

II. RELATED WORKS ON QUESTION CLASSIFICATION

There are a lot of existing and ongoing research works on question classification in different languages. Research on topics related to question classification, like question classifiers, question taxonomies and question features, has been published continuously. The question feature extraction procedure and the classifier used to classify questions make the difference among those approaches. Rule based techniques to classify questions can be less complex if we can represent the question in a different way, such as a semantic parse tree. Hermjakob et al., 2001 wrote 276 hand written rules to classify questions into 122 categories [4]. But statistical question classification methods require little or no hand tuning in many instances [5]. An experiment showed that with only surface text features like bag-of-words and bag-of-n-grams, the support vector machine outperforms other machine learning methods [6].
Chen et al., 2006 showed that the syntactic structure of a sentence can provide more convenient information than a bag of n-grams [7]. Selecting an optimal set of features has
always been a challenging task for researchers [8]. Some preferred a rich feature space for their question classifier. A small-scale feature set can also be impactful if it is chosen wisely, and head words are one example [9] [10]. The authors achieved 89.2% and 89.0% accuracy using linear SVM and Maximum Entropy models with a traditional standard feature set such as unigrams. On the other hand, many researchers used only n-grams as features with a suitable rule based question classifier [11]; they achieved 88.8% accuracy for coarse-grained categories and 80.6% accuracy for fine-grained categories.

978-1-5386-1150-0/17/$31.00 ©2017 IEEE

Despite the challenges of processing Bangla questions, there is some research work on Bangla question classification. In the early stage of question classification for the Bangla language, only a single-layer taxonomy was proposed [12]. The authors used different lexical, syntactic and semantic features and various machine learning approaches to categorize nine coarse-grained classes. Those classifiers are Naive Bayes, Kernel Naive Bayes, Rule Induction and Decision Tree; the decision tree classifier provided the highest accuracy among them, 87.63%. Later, sixty-nine fine-grained question classes for the previous nine coarse-grained classes were suggested [13]. Machine learning ensemble techniques like bagging and boosting were applied to the training data to improve accuracy for the increased number of classes [14]. Research work in question classification also varies with the language. We worked on Bangla question classification, and the result or performance would not be the same even if we used a system designed for another language with a similar feature set and algorithm.

III. DATASET PREPARATION AND ANALYSIS

It was mentioned earlier that there is no accessible question classification dataset for the Bengali language right now. So we collected some sample questions from a website [15]. There is an existing research work that uses this dataset, but their dataset is not open.
This website has factoid questions in the Bangla language in different categories such as Bangladesh, international, literature, etc. We collected 1375 questions from the Bangladesh subject and 1118 questions from the international category. We then manually prepared 120 Bengali questions related to computer science, using some selected Wikipedia articles for this purpose. Both questions and answers were prepared for our future analysis. From this question set we selected 1160 questions for our classification task. The problem with the other 215 questions is that they do not represent any of the 9 categories that we defined. These 1160 questions were manually classified into nine main categories. We considered only coarse-grained classes for classification. Table I shows the question category details.

Table I: Bangla Question Categories

Class | Description
PER   | Person name
LOC   | Location or place related question
TIME  | Time related question
GRO   | Question about a group or organization
REA   | Reason for something
NUM   | The answer to the question will be a number
DEF   | Asks for the definition of something
METH  | Procedure related question
MISC  | Miscellaneous questions (biggest, smallest, etc.)

In a question dataset, not every word is useful, and some data segments can make the data model unstable; for example, there are some English words and special characters within this dataset. We need to exclude those to improve performance. Now, we have a question set Q with n questions, where n = 1160 for our dataset:

Q = {Q1, Q2, Q3, ..., Qn−1, Qn}

and a set of classes or categories C with m classes, where m = 9 for our dataset:

C = {C1, C2, C3, ..., Cm−1, Cm}

Figure 1: Number and percentage of questions in different categories

Figure 1 shows the number and percentage of manually classified questions
in each question category in our dataset. The number of questions asking for a person's name is relatively higher (21.7%) than other question types, followed by location-type questions at 18.4% of the dataset. The percentage of questions asking for the method of something is the lowest, only 1.38%. This kind of imbalance has a huge impact on any classification system.

IV. FEATURE EXTRACTION

Selecting an optimal feature set is the most influential part of any machine learning based classification model. There are several research works on feature extraction from text for categorization purposes [16]. Let us define a question Qk with p words:

Qk = W1 W2 W3 ... Wp−1 Wp

where Wi is any word with 1 ≤ i ≤ p, and we selected features based on those words. For sentence or document level classification, three types of features need to be considered: lexical features, syntactical features and semantic features [17].

A. Lexical Features

We selected lexical features for our classifier based on the words of the question dataset.

Table II: WH-words in the Bangla question dataset

কে, কোথায়, কিভাবে, কি, কবে, কয়টি, কিরূপ, কতটি, কারা, কত, কোন, কেন, কাকে, কোনটি, কখন, কার, কাদের

1) Wh-word: A wh-word is a function word used to commence a wh-question, and a very important feature in a question classification system. Sometimes the wh-word alone can distinguish one question category from another. If we find the wh-word "where" (কোথায়) in a question, then it can safely be said that this question asks for a place, so it belongs to the location category. Table II shows the wh-words that we extracted from our Bangla question dataset.

Although there are three types of interrogatives in the Bangla language, only simple (unit) interrogatives were utilized; the other two types are actually fusions of unit interrogatives. Because the presence of dual and compound interrogatives is quite infrequent in the question dataset, compound interrogatives are irrelevant to our classification system.
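As a rough illustration (not the authors' code), the wh-word feature and the wh-word-position feature introduced in this section could be extracted along the following lines. The word list is a small hypothetical subset of Table II, and the position labels anticipate the four position cases listed under the wh-word-position feature.

```python
# Hypothetical sketch of wh-word and wh-word-position feature extraction.
# WH_WORDS is a small illustrative subset of Table II, not the full list.
WH_WORDS = {"কে", "কোথায়", "কখন", "কেন", "কত", "কবে"}

def wh_features(question):
    """Return (wh_word, position) for a whitespace-tokenized Bangla question."""
    tokens = question.split()
    for i, tok in enumerate(tokens):
        word = tok.rstrip("?।")  # strip the question mark / dari
        if word in WH_WORDS:
            if i == 0:
                position = "first"
            elif i == 1:
                position = "second"
            elif i == len(tokens) - 1:
                position = "last"
            elif i == len(tokens) - 2:
                position = "penultimate"
            else:
                position = "other"
            return word, position
    return None, None
```

For example, `wh_features("কে এই কাজটি করেছিল?")` would return `("কে", "first")`.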
Suppose,ক কেব এই কাজিট কেরিছল?With interrogative ক কেব this question ask for both personand time. But according to our system one question belongs toonly one class. We will choose any one of them in our system,not both of them. So, considering only unit interrogatives willwork in this case.Table III: Feature words related to question categoriesCategory Words related to categoryLOC ান, ােনর, দশ, জায়গা, অবি ত, থানা, জলা, দশিটেক, দশিটTIME সময়, বছর, মাস, িদন, কাল, িখ াে , সােল, হেয়িছলORG ম নালয়, কা ানী, সং া,কিমশনPER নাম, কেরন, িছেলনNUM সংখ া, পিরমান, অংশ, শতকরা, ভাগ, দরূ , কততম, গড়, অব ান, উ তাREA কারন, উে শMISC পািখ, াণী, বৃহ ম, সেবা , দীঘতম, জাতীয়, হয়, থম, কের2) Wh-word position: Wh-word position is an effectivefeature with wh-word. We considered four cases regarding wh-word position in question sentence.• First position• Second position• Penultimate (Second to the last position)• Last positionWe noticed that in most cases position of a particular wh-worddoesn’t change.3) question length: For some particular question classlength can be a critical feature. By length we mean how manyword this question contains. For example, usually length ofdefinition type question is two and number of three lengthlocation type question frequent in dataset.B. Syntactical Features1) Main words: In a particular question dataset, everyquestion word is not equally important. Some</s>
words have a high impact on the classification system. That is why we manually picked some words closely associated with question categories. These words occur frequently in the dataset, and the system provides higher accuracy when they are used as features. Table III shows the main feature words related to the question classes defined earlier.

Another syntactical feature is Part-of-Speech (POS) tags, but we did not use it because the accuracy of available Bangla POS taggers is not good enough. We did not use any semantic feature such as named entities (NE) for the same reason. Most importantly, the accuracy of our system does not depend on any other system.

C. Other Features

Besides training the system with well-defined lexical, syntactical and semantic features, the first thing we tried as a feature is the n-gram, which is traditional and straightforward. Individual words of a question can be a very important feature space for any question classifier [18]. But going beyond unigrams makes the classifier's performance decrease rapidly; bigrams and trigrams are not very useful for distinguishing one question from another. There is another problem with the unigram feature: if the number of features is much greater than the number of samples, the SVM method is likely to give poor performance.

However, unigrams with higher frequency in our dataset did the trick. Higher frequency means higher impact on the dataset, and in a question dataset we do not have to worry about stop words. The accuracy of our system supports this observation.

Table IV: Top 10 feature words based on frequency

Feature word | Frequency
কে           | 187
কোথায়        | 170
কোন          | 168
হয়           | 168
বাংলাদেশের   | 167
করেন         | 147
কত           | 124
কবে          | 119
কি           | 105
প্রথম         | 98

Table IV shows the 10 feature words with the highest frequency, most of them wh-words. Examining this list, we find that it contains 6 wh-words from Table II. This is predictable, because in a question dataset wh-words are more frequent than other words and are also vital feature candidates in a question classification system.

V.
METHODOLOGY

We designed our question classification system in four main steps:

• Question dataset collection and processing
• Extracting the feature set and building the feature matrix
• Designing a machine learning based classifier
• Performance measurement

The task of question classification can be performed in two different ways: with hand-crafted rules, or with machine learning techniques. We used machine learning in our research. We have a set of questions Q and a set of classes C, and our classification task is to tag each question from Q with exactly one class label from C.

After preparing the question dataset, we defined an optimal feature set. Then we constructed a feature matrix for each feature set. In the feature matrix, each row represents a question (observation) and each column represents a feature. Most features, including n-grams, are boolean in our system. Let MAT be a feature matrix; if the j-th feature is present in the i-th question, then MAT[i][j] = 1, otherwise MAT[i][j] = 0. That is the main idea behind our system's feature matrix.

To build a machine learning based classifier we need a feature matrix and a suitable algorithm. There is no single best algorithm in machine learning; the performance of an algorithm depends on the specific problem, the data size, and the feature set. But for text classification problems, the performance of SVM has historically been very decent
[19], [20]. Moreover, besides linear classification, SVMs can efficiently map the input into high-dimensional feature spaces, which is called the kernel trick. That is why we applied the SVM algorithm in our classifier with kernel tricks [21]. Given a training set of N data points {y_k, x_k}, k = 1, ..., N, where x_k is the k-th input pattern and y_k is the k-th output pattern, the classifier following the support vector method approach takes the form of Eq. 1:

y(x) = sign( Σ_{k=1}^{N} α_k y_k ψ(x, x_k) + b )    (1)

In this equation, the α_k are positive real constants, b is a real constant, and ψ(·,·) is the kernel function. For a classification system based on the linear kernel, ψ(x, x_k) is given by Eq. 2:

ψ(x, x_k) = x_k^T x    (2)

We also tried nonlinear classification using the RBF, polynomial and sigmoid kernel functions. The RBF (radial basis function) kernel transforms a single vector to a vector of higher dimensionality using Eq. 3:

ψ(x, x_k) = exp(−γ ||x − x_k||²)    (3)

Here x represents a training question data vector, x_k is the input to be classified, and γ controls the width of the kernel. The RBF kernel is more popular in SVM classification than the polynomial kernel, but the polynomial kernel is quite popular in natural language processing (NLP). The polynomial kernel also treats features differently: to determine similarity, it looks not only at the given features of the input samples but also at combinations of those features, which can improve classification performance considerably.

To evaluate a classification system we need a performance measurement technique. We measured the accuracy of our system for each parameter set, a widely used metric for assessing a classifier's class discrimination ability:

accuracy = (TP + TN) / (P + N)

where TP (true positives) is the number of positive samples labeled as such, TN (true negatives) is the number of negative samples labeled as such, and P + N is the total number of positive and negative samples.

We trained and evaluated our system for every combination of our feature set and kernel function.
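As an illustrative sketch only (using scikit-learn rather than the authors' implementation), the boolean feature matrix MAT and the four kernels could be wired up as follows. The tiny matrix and labels are made-up stand-ins for the real 1160-question dataset.

```python
# Illustrative only: a toy boolean feature matrix and labels standing in
# for the real dataset; scikit-learn's SVC supplies the linear, RBF,
# polynomial and sigmoid kernels discussed in the text.
from sklearn.svm import SVC

MAT = [
    [1, 0, 1],  # MAT[i][j] = 1 if feature j is present in question i
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
]
labels = ["PER", "PER", "LOC", "LOC"]

# One classifier per kernel, as in the experiments described above
classifiers = {
    kernel: SVC(kernel=kernel, gamma="scale").fit(MAT, labels)
    for kernel in ("linear", "rbf", "poly", "sigmoid")
}
prediction = classifiers["linear"].predict([[1, 1, 0]])[0]
```

On this linearly separable toy data the linear kernel recovers the training label, mirroring the paper's observation that linear kernels suit boolean text features.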
Accuracy was then measured for each parameter setting to find the best feature-kernel combination for our classification system.

VI. RESULT AND PERFORMANCE ANALYSIS

Questions were manually classified into the predefined categories for training purposes. 70% of the questions in our dataset were used to train the system and 30% were used to measure the accuracy of the model. But a single round of validation is not enough to estimate a final predictive model, so we performed multiple rounds of cross-validation using different partitions and then combined (averaged) the validation results.

We used a support vector machine with different kernel tricks to prepare the system. As features, wh-words are without doubt the most important, but the wh-word alone cannot guarantee competent accuracy in most cases, so we proposed supplementary features in the previous section. Also, there is no guarantee that one kernel will work better than another, so we need to check every option and choose the best one. A linear kernel is often recommended for text classification, because most text classification problems are linearly separable. A linear kernel is also a good choice when there are many features, because mapping the data to a higher dimensional space does not really improve performance in that case.

Figure 2: Performance (accuracy) for the linear kernel function

First, we experimented with SVM using the linear kernel and the most frequent unigrams in decreasing order of frequency. The graph in
figure 2 shows the performance of SVM using the linear kernel for different numbers of features. At first, accuracy is very low for a small number of features. Accuracy then increases with the number of features and at one point flattens out. With the 1156 most frequent unigram features we get the best average accuracy of 89.14%. For one particular cross-validation run, 91.38% is the best accuracy, achieved with 380 unigram features.

Figure 3: Performance (accuracy) for the RBF kernel function

If it is not possible to separate the data linearly, we can use a nonlinear kernel such as the RBF, polynomial or sigmoid function. RBF places normal curves around the data points and sums them so that a decision boundary can be defined for a particular class.

Figure 4: Performance (accuracy) for the sigmoid kernel function

Figure 3 shows the accuracy of our system with the RBF kernel for 1 to 1000 features. Accuracy is much lower than with the linear function and decreases sharply as the number of features increases. The best average accuracy of 76.55% was achieved with the 32 most frequent words. Figure 4 shows the accuracy of our system with the sigmoid kernel function for 1 to 1000 features. Accuracy drops even more quickly than with the RBF kernel as the number of features increases. The best average accuracy of 72.24% was achieved with the 31 most frequent words. The performance of the polynomial kernel is worse than all the other kernel functions; in the best case, 41.4% accuracy can be achieved with it.

After using frequent words as features, we trained our system with the pre-defined feature set, this time considering only the linear kernel function. In the first run, only wh-words were used as the feature set. Over five cross-validation runs, the average accuracy is 88.62% and the best accuracy is 91.37%. Although accuracy drops to 86.2% in the worst cross-validation run, the average accuracy is the main concern.
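The repeated random 70/30 partitioning used for these averages can be sketched generically as follows (this is not the authors' code; `evaluate` is a placeholder callable that would train and score a model on the given index split):

```python
import random

def repeated_holdout(n_samples, evaluate, rounds=5, train_frac=0.7, seed=0):
    """Average, worst and best accuracy over several random train/test splits."""
    rng = random.Random(seed)
    scores = []
    n_train = int(n_samples * train_frac)
    for _ in range(rounds):
        indices = list(range(n_samples))
        rng.shuffle(indices)
        train_idx, test_idx = indices[:n_train], indices[n_train:]
        scores.append(evaluate(train_idx, test_idx))  # placeholder: fit + score
    return sum(scores) / len(scores), min(scores), max(scores)
```

The three returned values correspond to the average, worst-case and best-case accuracies reported in this section.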
So, for five different test-train dataset partitions, the accuracy of this system lies between 86.2% and 91.37%. But we can improve this performance by adding more features to the feature set.

Table V: SVM linear kernel performance over 5 cross-validation runs for specific feature sets

Feature Set                                  | Accuracy (5 runs)                      | Average
Wh-word                                      | 86.20%, 87.94%, 87.94%, 89.65%, 91.37% | 88.62%
Wh-word + Wh-word position + Question length | 87.07%, 87.94%, 89.66%, 89.66%, 92.25% | 89.31%

Later, two more features, wh-word position and question length, were combined with the wh-word feature and a new feature matrix was built for the classifier. This improved the performance of our classifier by a significant amount. We achieved 89.31% average accuracy over five different cross-validation (train-test) partitions. In the best dataset partition our classification system achieves 92.25% accuracy, and in the worst case this accuracy decreases to 87.07%, which is quite good compared to the accuracy of existing Bangla question classifiers. Table V shows the performance of the system for this feature set.

A comparison of the worst, average and best performance for the different kernels is shown in Figure 5. This graph is based on the frequent-word feature set. From it, we can see that the linear kernel performs better than nonlinear kernels such as RBF, sigmoid or polynomial. The small dataset and large feature set are the main reasons for the nonlinear kernels' poor performance.

Our system's average-case accuracy outperformed that of question classification systems relying on a single classifier; the previous best accuracy was 87.63% using a decision tree classifier. And the best-case cross
validation accuracy outperformed that of the ensemble approach to question classification, in which four main classifiers (Naive Bayes, Kernel Naive Bayes, Rule Induction and Decision Tree) were used. For the first time, we applied the SVM (support vector machine) algorithm to classify Bangla questions. SVM always tries to separate the different question categories with the most optimal hyperplane, which makes it well suited to text categorization, and the performance of our system supports this assumption. Our classification covers coarse-grained (single-layered) classes but can be extended to fine-grained classes with an ensemble approach.

Figure 5: Worst, average and best performance comparison for different kernels

The performance of a question classification system depends largely on the dataset and the algorithm. Our small labeled dataset can be extended with the help of semi-supervised learning [22]. Also, the more question classes we have, the greater the chance of misclassifying a question. Our next target is to build a corpus with more questions available; that will surely improve the current performance a lot.

VII. CONCLUSION

Our research work is a combination of support vector machine kernel functions and a word-based feature set. The question dataset situation for the Bangla language is not yet rich enough for large-scale machine learning classification, yet we achieved the highest accuracy so far with an effective feature set and an efficient algorithm. It should be possible to increase this accuracy with a larger dataset and neural network algorithms. Features such as part of speech and named entities could also be used for this classification task, but the poor performance of Bangla language processing tools is the main obstacle here.

Question classification is a subproblem of many other problems, such as question answering, since it represents the expected answer type. Question classification can open the door to much other research in the Bangla language and natural language processing fields.
This classification system proposes a dynamic feature selection method that can adapt to any dataset and will show better performance for a better dataset with some optimization techniques.

References

[1] S. Xu, G. Cheng, and F. Kong, "Research on question classification for automatic question answering," in Asian Language Processing (IALP), 2016 International Conference on. IEEE, 2016, pp. 218–221.
[2] K. Yu, Q. Liu, Y. Zheng, T. Zhao, and D. Zheng, "History question classification and representation for chinese gaokao," in Asian Language Processing (IALP), 2016 International Conference on. IEEE, 2016, pp. 129–132.
[3] E. Haihong, Y. Hu, M. Song, Z. Ou, and X. Wang, "Research and implementation of question classification model in q&a system," in International Conference on Algorithms and Architectures for Parallel Processing. Springer, 2017, pp. 372–384.
[4] U. Hermjakob, "Parsing and question classification for question answering," in Proceedings of the workshop on Open-domain question answering - Volume 12. Association for Computational Linguistics, 2001, pp. 1–6.
[5] D. Metzler and W. B. Croft, "Analysis of statistical question classification for fact-based questions," Information Retrieval, vol. 8, no. 3, pp. 481–504, 2005.
[6] D. Zhang and W. S. Lee, "Question classification using support vector machines," in Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2003, pp. 26–32.
[7] Y. Chen, M. Zhou, and S. Wang, "Reranking answers for definitional qa using language modeling," in Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006, pp. 1081–1088.
[8] A. Sangodiah, R. Ahmad, and W.
F. W. Ahmad, "A review in feature extraction approach in question classification using support vector machine," in Control System, Computing and Engineering (ICCSCE), 2014 IEEE International Conference on. IEEE, 2014, pp. 536–541.
[9] Z. Huang, M. Thint, and Z. Qin, "Question classification using head words and their hypernyms," in Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2008, pp. 927–936.
[10] M. Pota, M. Esposito, and G. De Pietro, "A forward-selection algorithm for svm-based question classification in cognitive systems," in Intelligent Interactive Multimedia Systems and Services 2016. Springer, 2016, pp. 587–598.
[11] J. Silva, L. Coheur, A. C. Mendes, and A. Wichert, "From symbolic to sub-symbolic information in question classification," Artificial Intelligence Review, vol. 35, no. 2, pp. 137–154, 2011.
[12] S. Banerjee and S. Bandyopadhyay, "Bengali question classification: Towards developing qa system," in Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing (SANLP), COLING, India, 2012, pp. 25–40.
[13] S. Banerjee and S. Bandyopadhyay, "Ensemble approach for fine-grained question classification in bengali," in Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC), Taiwan, 2013, pp. 75–84.
[14] S. Banerjee and S. Bandyopadhyay, "An empirical study of combining multiple models in bengali question classification," in Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP), Japan, 2013, pp. 892–896.
[15] "BCS/other exam preparation," http://www.bcstest.com/ [Accessed: 11-March-2017].
[16] A. Moh'd A Mesleh, "Chi square feature extraction based svms arabic language text categorization system," Journal of Computer Science, vol. 3, no. 6, pp. 430–435, 2007.
[17] B. Loni, "A survey of state-of-the-art methods on question classification," 2011.
[18] M. A. Islam, M. F. Kabir, K. Abdullah-Al-Mamun, and M. N.
Huda, "Word/phrase based answer type classification for bengali question answering system," in Informatics, Electronics and Vision (ICIEV), 2016 5th International Conference on. IEEE, 2016, pp. 445–448.
[19] T. Joachims, "Text categorization with support vector machines: Learning with many relevant features," Machine Learning: ECML-98, pp. 137–142, 1998.
[20] S. Zadrożny, J. Kacprzyk, and M. Gajewski, "A new approach to the multiaspect text categorization by using the support vector machines," in Challenging problems and solutions in intelligent systems. Springer, 2016, pp. 261–277.
[21] B. Scholkopf and A. J. Smola, Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, 2001.
[22] Y. Li, L. Su, J. Chen, and L. Yuan, "Semi-supervised learning for question classification in cqa," Natural Computing, pp. 1–11, 2016.
International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September, 2019

Automatic Detection of Satire in Bangla Documents: A CNN Approach Based on Hybrid Feature Extraction Model

Arnab Sen Sharma, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet-3114, Bangladesh. Email: arnab.api@gmail.com
Maruf Ahmed Mridul, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet-3114, Bangladesh. Email: mridul-cse@sust.edu
Md Saiful Islam, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet-3114, Bangladesh. Email: saiful-cse@sust.edu

Abstract—The wide spread of satirical news in online communities is an ongoing trend. Satire is so inherently ambiguous that sometimes it is hard even for humans to tell whether a piece is actually satire or not, so research interest has grown in this field. The purpose of this research is to detect Bangla satirical news spread in online news portals as well as social media. In this paper we propose a hybrid technique for extracting features from text documents that combines Word2Vec and TF-IDF. Using our proposed feature extraction technique with a standard CNN architecture, we could detect whether a Bangla text document is satire or not with an accuracy of more than 96%.

Index Terms—satire detection, natural language processing, TF-IDF, fact-checking, CNN, Word2Vec.

I. INTRODUCTION

Satire can be considered a literary form that involves a delicate balance between criticism and humor. Through satire or sarcasm, messages are conveyed in an artistic form that sometimes creates a deviated, implicit meaning. The goal of satire is not always to tell the truth. Humans are not always effective at distinguishing between satire and actual news, because satire is often so ambiguous that it is easy to be deceived.

The spread of satirical news is not a new phenomenon, but in recent years it has become a real threat that cannot be ignored anymore.
Easy access to the Internet and the hyperactivity of users on various social media platforms have given rise to the extensive spread of satirical news. The Internet has largely replaced traditional news media. Many people, especially a huge portion of the youth, depend on the Internet and social media as their primary source of news because of the easy access, low cost and 24/7 availability. They simply believe what they read on the Internet and spread the news they assume to be true. So, most of the time, satire is not spread with an intention to deceive; but sometimes people promote the spread of satire as actual news for their personal benefit.

As a matter of fact, there are some web-based applications, such as Snopes.com, FactCheck.org and PolitiFact, which act as fact-checkers. But these services use human staff to check facts manually. Though they provide accurate information most of the time, they are not efficient enough, since they are not automated. We propose an automated system based on Convolutional Neural Networks and Natural Language Processing to address the problem.

There are some related existing works, which the Literature Review section will discuss. We will also define some terms and techniques that we used in this work.

A. Literature Review

De Sarkar et al. proposed a hierarchical deep neural network approach to detect satirical fake news which is capable of capturing satire both at the sentence level and at the document level [1]. Burfoot et al. used SVM and bag-of-words to detect satire [2]. They used binary feature weights and bi-normal separation feature scaling for feature weighting. They got a best overall
F-score of 79.8% [2]. Rubin et al. classify news into satires, fabrications and hoaxes as forms of fake news [3]. Reyes et al. used figurative language processing for humour and irony detection [4]. Ahmad and Tanvir used tokenized, stopword-free and stemmed data to classify satire and irony using SVM and got an accuracy of 83.41% [5]. del Pilar Salas-Zárate et al. used psycholinguistic approaches for satire detection on Twitter and got an F-score of 85.5% for Mexican data and 84.0% for Spanish data [6].

Tacchini et al. check facts using information about the users who liked a news item [7]. Applying logistic regression to the likers' information, around 99% accuracy is achieved on their dataset. Some approaches simply use a naive Bayes classifier: after a little preprocessing, 1-grams from the news context are fed to the classifier. Granik et al. proposed stemming to increase accuracy [8]; the accuracy of this approach without preprocessing is nearly 70% [8], and Pratiwi et al. got an accuracy of 78.6% with preprocessing for the Indonesian language [9].

Ruchansky et al. proposed a Capture, Score and Integrate model [10]. Sense-making words from the body text of news from Twitter [11] and Weibo [12] are fed to an RNN, and the reviews of the news are taken as features. The accuracy of this model is 89.2% for Twitter data and 95.3% for Weibo data [10].

Conroy et al. proposed two different approaches for detecting fake news [13]. One is a linguistic approach, which includes deep syntax analysis and semantic analysis. Deep syntax analysis is implemented based on Probabilistic Context-Free Grammars (PCFG) and can predict falsehood with approximately 91% accuracy. The other is a network approach based on fact-checking using knowledge networks formed by interconnecting linked data. This approach gives an accuracy in the range of 61% to 95% for different subject areas.

B.
Definitions

1) Word Embedding: Word embedding simply refers to the vector representation of words. Machine learning models are normally not capable of processing strings or raw text as input; they expect vectors of values. So, the transformation of a word into a vector is a crucial step. There are several techniques for converting words to vectors, which can be categorized into two types: 1. frequency based (TF-IDF, CountVectorizer, HashingVectorizer) and 2. prediction based (Word2Vec).

2) TF-IDF: In this work we focused on the TF-IDF vectorizer among the frequency based word embedders. TF stands for Term Frequency and IDF stands for Inverse Document Frequency. It is used in text mining as a weighting factor for features. The equation for the TF-IDF weight of a term t in a particular document d (given the whole dataset D and the number of documents in the dataset N) is:

tf-idf(t, d) = tf(t, d) × idf(t, D)    (1)

where tf(t, d) is the frequency of t in d and

idf(t, D) = log( N / |{d ∈ D : t ∈ d}| )  [14]

TF is upweighted by the number of times a term occurs in an article, and IDF is downweighted by the number of documents in the whole dataset/corpus in which the term occurs. So TF-IDF assigns less significant values to words that occur in most documents, such as is, are, be, to, on, etc.

3) Word2Vec: Though heavily used in the field of NLP, frequency based word embedders fail to
capture the semantic value of a word or document. Word2Vec is the process of transforming words to vectors while preserving some of their syntactic and semantic correlations. Word2Vec tries to determine the meaning of a word and understand its correlation with other words by looking at its context. For example, take the two sentences "Range Rover is a great car" and "Range Rover is a wonderful vehicle"; a well-trained Word2Vec model should be able to map similarities between the words great and wonderful and between the words car and vehicle. Word2Vec uses cosine similarity rather than Euclidean distance to measure the similarity between two words. Take a pair of singular/plural words like cat and cats. The relation between the word vectors Vcat and Vcats is measured by their cosine similarity (Eq. 2):

cosine(Vcat, Vcats) = (Vcat · Vcats) / (||Vcat|| × ||Vcats||)    (2)

Fig. 1: Word2Vec (a)    Fig. 2: Word2Vec (b)

Now the word pair dog and dogs has the same singular/plural relationship between them as cat and cats. So, according to Word2Vec:

Vdog − Vdogs = Vcat − Vcats
⇒ Vdogs = Vcats − Vcat + Vdog

So, if we know a particular relationship between two words and know one of the words, Word2Vec can predict the other word.

4) CNN in NLP: A Convolutional Neural Network (CNN) is a deep learning algorithm that takes a multidimensional vector or an image as input. CNNs capture the significant aspects of the input through the application of appropriate filters and perform classification tasks.
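Returning to the Word2Vec discussion above, the cosine measure of Eq. 2 and the cat/cats, dog/dogs vector arithmetic can be sketched with toy vectors (the numbers are made up for illustration, not taken from a trained model):

```python
import numpy as np

def cosine(u, v):
    # Eq. 2: dot product divided by the product of the vector norms
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 2-D stand-ins for real Word2Vec vectors
v_cat = np.array([1.0, 0.0])
v_cats = np.array([1.0, 1.0])
v_dog = np.array([0.0, 1.0])

# Predict "dogs" from the singular/plural offset: dogs ≈ cats - cat + dog
v_dogs = v_cats - v_cat + v_dog

similarity = cosine(v_cat, v_cats)
```

With these toy vectors, `similarity` is 1/√2 and `v_dogs` comes out as (0, 2), illustrating how a known word-pair offset lets Word2Vec predict the missing word of an analogy.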
With enough training, CNNs are able to learn which filters are appropriate in different contexts. Different filters/kernels are slid over the input in the convolutional layers; they extract different features and feed the values forward to the next layer of the architecture. Besides learning high-level features, these filters also reduce the size of the convolved feature space and thus reduce the computational power required to process the data. The convolved features extracted by the convolutional layers are then fed to a regular neural network, possibly with a number of hidden layers, which learns the convolved feature vector and performs the actual classification.

Recently, CNNs have been used heavily in NLP. A CNN expects its input to be a multidimensional image, but word embedding gives us a one-dimensional vector per word. So, instead of single words, we feed whole sentences or documents into the CNN. From a 20-word document or sentence where each word is embedded into a 200-dimensional vector, we get a 20×200 matrix. This two-dimensional matrix can act as an image and be fed as input to a CNN. Various experiments have found that CNNs perform quite well at generalizing the relationships between the words in a document, thus capturing its semantic meaning. A CNN performs better than a simple bag of words and is prone to fewer inaccurate assumptions.

II. PROPOSED METHOD

First of all, we built our own Word2Vec model by adapting the traditional Word2Vec model. Then an image is created from a preprocessed document by combining the Word2Vec and TF-IDF vectorizers. Finally, that image is used as input to a CNN architecture. The detailed procedure is discussed below.

A. Building our own Word2Vec model

There are some great Word2Vec models for the English language, but as
<s>per our knowledge, there is no well-performing Word2Vec model for Bangla, so we had to create our own Word2Vec model. To do so, we relied on the gensim library of Python. We needed a large amount of textual data to train the model, so we used the scrapy library of Python to build crawlers and crawled Bangla textual data from Wikipedia and online news portals. We collected 380,832 articles in total and used these to train our model. Our Word2Vec model converts Bangla words to vectors of size 10. To check the performance of our model, we inspected the 5 most similar words for a given Bangla word; the results are shown in Fig. 3.

Fig. 3: Testing the Word2Vec model with a word

B. Dataset

We used the scrapy library of Python to build crawlers to crawl Bangla textual data from different websites. For authentic news data we crawled news articles from two Bangla news portals, Prothom Alo [15] and Ittefaq [16]. For satire data we crawled articles from a renowned satire news portal, Motikontho [17]. We crawled a total of 1480 articles from Motikontho. To balance our dataset, we randomly selected 1480 articles from the real news articles we collected from Prothom Alo and Ittefaq. The structure of the dataset is very simple: a single data point is just a document and a label (satire or not).

C. Data Preprocessing

The collected dataset might be mixed with some noisy and unnecessary data, so we had to get rid of it through a bit of preprocessing. Our preprocessing consisted of the following steps.

1) Ignoring stopwords and punctuation: We ignored the stopwords in every news document, because these words appear in almost every article and do not provide any significant information. Some examples of stopwords -

We collected the list of Bangla stopwords from a GitHub repository by genediazjr [18].

2) Stemming: The purpose of stemming is to find the root word. We used a stemmer developed by Rafi Kamal [19], which we found in his GitHub repository. The stemmer performed better than the other available Bangla stemmers we could find.

Fig. 4: Stemming

D.
Document to Image

For converting a document to a vector, we selected a TF-IDF vectorizer of size 1000, meaning that 1000 words/terms were selected based on our corpus. We ignored terms that occur in more than 70% of the documents in our corpus, because common words do not add any significant value to a specific document. We also ignored rare terms (terms that occur in less than 10% of the documents), because such terms might overfit any model. Any numeric terms were also ignored. The term frequencies of the remaining words were calculated and the 1000 most significant terms were selected. The respective TF-IDF values of these terms for a document are used to create a vector of size 1000 that represents the document. But these TF-IDF values do not hold any semantic meaning, so each of the words was also converted to a vector of size 10 using our Word2Vec model. These vectors of size 10 were multiplied by their respective TF-IDF values for a document. So, for each document we had a 2D vector of size 1000 × 10. Also, convolutional layers expect the pixel values of an image, and pixel values are never negative, so a CNN cannot take a vector that contains negative values. But Word2Vec-embedded vectors can contain</s>
<s>negative values. CNN input layers actually take a 3-dimensional vector: the first two dimensions represent the 2D image and the third dimension is the number of channels. For coloured images the number of channels is 3 (Red, Green, Blue), and each pixel value of the image is formed from 3 values in the RGB system. Our strategy for handling negative numbers was to separate the positive and negative values into two channels, as in the picture below.

Fig. 5: Transformation of Feature Vectors

So, for each document we had a 3D vector of size 1000 × 10 × 2.

E. Structure of the model

• Input layer: a Convolutional2D layer with 256 filters that takes vectors of shape 1000 × 10 × 2 as input.
• Another Convolutional2D layer with 128 filters and ReLU activation.
• A pooling layer of size 2 × 2.
• A dropout layer with rate 0.25 to avoid overfitting.
• A dense layer with 512 neurons and ReLU activation.
• A dropout layer with rate 0.5.
• Output layer: one neuron with sigmoid activation.

III. RESULTS AND ANALYSIS

The dataset was randomly split into two parts: 70% of the data (2018 documents) was used as the training set, and the remaining 30% (942 documents) was used to test the performance of our model. The model gave us an accuracy of 96.4% on the test dataset. Since the dataset was balanced, the F1 score was the same as the accuracy. The confusion matrices are given below.

Fig. 8: Confusion matrices showing the results. (a) Total count representation. (b) Percentile representation.

There is some scope to improve our Word2Vec model, and a perfect stemmer for the Bangla language could boost the overall performance of our proposed model. The model was trained on an NVIDIA GeForce 940M graphics card with 2 GB of memory, so we could not use a larger TF-IDF vector or Word2Vec vector. With access to more resources we could have used bigger feature vectors, and the accuracy might have improved somewhat. Actually, in terms of accuracy, humans are much more effective than a machine at this task.
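The separation of positive and negative embedding values into two non-negative channels can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: toy sizes of 5 terms and embedding dimension 3 stand in for the paper's 1000 and 10, and the random TF-IDF weights and word vectors are placeholders for real vectorizer output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, emb_dim = 5, 3  # stand-ins for the paper's 1000 terms and size-10 vectors

tfidf = rng.random(n_terms)                          # TF-IDF weight per selected term
word_vecs = rng.standard_normal((n_terms, emb_dim))  # Word2Vec vector per term (may be negative)

# Scale each word vector by its TF-IDF value -> 2D "document" matrix
weighted = word_vecs * tfidf[:, None]

# Split signs into two non-negative channels, like pixel values:
# channel 0 holds the positive parts, channel 1 the magnitudes of the negative parts
image = np.stack([np.clip(weighted, 0.0, None),
                  np.clip(-weighted, 0.0, None)], axis=-1)

print(image.shape)  # (5, 3, 2) -- would be (1000, 10, 2) with the paper's sizes
```

Note that the split is lossless: the original matrix is recoverable as `image[..., 0] - image[..., 1]`, so no information is discarded while keeping all inputs non-negative for the CNN.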
Maybe for our dataset, humans would be able to detect satire with 100% accuracy. But even though the accuracy falls a bit, we think it is much better to use an automated approach, which saves a lot of time.

IV. CONCLUSION

Satire detection for Bangla news is completely new; to the best of our knowledge, no such work has been done in this sector for the Bangla language. We found that our hybrid feature extraction technique combined with a CNN model performs very well at pattern finding in language processing. Since satire is a type of fakeness, satire detection can be an important prerequisite for fake news detection. So, this work might help in making better decisions on fake news detection and similar problems. Our hybrid feature extraction technique can also be used in other work of a similar nature.

REFERENCES
[1] De Sarkar, Sohan, Fan Yang, and Arjun Mukherjee. "Attending sentences to detect satirical fake news." In Proceedings of the 27th International Conference on Computational Linguistics (pp. 3371-3380).
[2] Burfoot, Clint, and Timothy Baldwin. "Automatic satire detection: Are you having a laugh?" Proceedings of the ACL-IJCNLP 2009 conference short papers. 2009.
[3] Rubin, Victoria L., Yimin Chen, and Niall J. Conroy. "Deception detection for news: three types of fakes." Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community. American Society for Information</s>
<s>Science, 2015.
[4] Reyes, Antonio, Paolo Rosso, and Davide Buscaldi. "From humor recognition to irony detection: The figurative language of social media." Data & Knowledge Engineering 74 (2012): 1-12.
[5] Ahmad, Tanvir, et al. "Satire detection from web documents using machine learning methods." 2014 International Conference on Soft Computing and Machine Intelligence. IEEE, 2014.
[6] del Pilar Salas-Zárate, María, et al. "Automatic detection of satire in Twitter: A psycholinguistic-based approach." Knowledge-Based Systems 128 (2017): 20-33.
[7] Tacchini, Eugenio, et al. "Some like it hoax: Automated fake news detection in social networks." arXiv preprint arXiv:1704.07506 (2017).
[8] Granik, Mykhailo, and Volodymyr Mesyura. "Fake news detection using naive Bayes classifier." 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON). IEEE, 2017.
[9] Pratiwi, Inggrid Yanuar Risca, Rosa Andrie Asmara, and Faisal Rahutomo. "Study of hoax news detection using naïve Bayes classifier in Indonesian language." 2017 11th International Conference on Information & Communication Technology and System (ICTS). IEEE, 2017.
[10] Ruchansky, Natali, Sungyong Seo, and Yan Liu. "CSI: A hybrid deep model for fake news detection." Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. ACM, 2017.
[11] https://twitter.com/
[12] https://www.weibo.com
[13] Conroy, Niall J., Victoria L. Rubin, and Yimin Chen.
"Automatic deception detection: Methods for finding fake news." Proceedings of the Association for Information Science and Technology 52.1 (2015): 1-4.
[14] https://en.wikipedia.org/wiki/Tf-idf
[15] https://www.prothomalo.com/
[16] https://www.ittefaq.com.bd/
[17] https://motikontho.wordpress.com/
[18] https://github.com/stopwords-iso/stopwords-bn/blob/master/stopwords-bn.txt
[19] https://github.com/rafi-kamal/bangla-stemmer</s>
<s>BanglaMusicStylo: A Stylometric Dataset of Bangla Music Lyrics. Conference paper, September 2018. DOI: 10.1109/ICBSLP.2018.8554661. Authors: Rafayet Hossain (Frankfurt University of Applied Sciences) and Ahmed Al Marouf (Daffodil International University).
BanglaMusicStylo: A Stylometric Dataset of Bangla Music Lyrics

Rafayet Hossain, Department of Computer Science and Engineering, Human Computer Interaction Research Lab (HCI RL), Daffodil International University (DIU). Email: rafayet3994@diu.edu.bd
Ahmed Al Marouf, Department of Computer Science and Engineering, Systems and Software Lab (SSL), Islamic University of Technology (IUT). Email: samcit41@iut-dhaka.edu

Abstract—With the rapid growth of the Bangla music industry, a huge volume of Bangla songs is produced every day. An immense number of producers, lyricists, singers and artists are involved in the production of songs from different genres. Among the many genres of Bangla music, classical, folk, baul, modern music, Rabindra Sangeet, Nazrul Geeti, film music, rock music and fusion music have gained the highest popularity. Lyricists try to express their feelings and views on any situation or subject through their writing. Therefore, each lyricist has their own dictionary of thoughts to put into music lyrics. In this paper, we present "BanglaMusicStylo", the very first stylometric dataset of Bangla music lyrics. We have collected 2824 Bangla song lyrics of 211 lyricists in digital form. All the lyrics are stored in text format for further use. This dataset could be used for stylometric analyses such as authorship attribution, linguistic forensics, gender identification from textual data, Bangla music genre classification, vandalism detection, emotion classification etc. Having identified the significant research opportunities in this area, we have formalized this dataset for use in stylometric analysis.

Keywords—Bangla Music Lyrics; Stylometric Analysis; Authorship Attribution; Bangla Stylometric Dataset.

I.
INTRODUCTION

With the rapid growth of the Bangla music industry, an enormous amount of lyrics has been written by lyricists to produce music. The availability of music production tools and music sharing social networks such as YouTube, Vimeo, Amazon Prime Video etc. is the main reason for the great number of Bangla music productions. Many people from different stages of society are involved in this industry: lyricists, producers, singers and artists are all part of this entertainment business. Apart from rhythm, tune, fusion, singer or genre, the lyrics are the most vital element of a song, as they have a direct impact on the listeners' choice and mood. Bangla music could be divided into many genres because of its highly diversified music categories. Genres could be categorized into three parts: religious music (Hamd, Naat, Ghazal, Qawwali etc.), ethnic music (Baul, Bhatiali, Bhawaiya, Jari Gan, Sari Gan etc.) and traditional music (Rabindra Sangeet, Nazrul Geeti, Lalon, Hason Raja etc.). As people from different sectors and different occupations are involved in Bangla music, this kind of diversity</s>
<s>has emerged. For instance, 'Bhatiali' is a genre of ethnic or folk music commonly sung by the boatmen of Bangladesh; the word 'Bhatiali' comes from 'Bhata', which means downstream. The choice of music to listen to depends on the lyrics of the songs. On the other hand, lyrics are a form of text, which can easily be used on digital platforms for analysis. Text analytic tools could prove very effective for analyzing lyrics to find the stylometric features of songs for further applications. Stylometry is the study of language style, especially in written forms of language such as stories, poems and music lyrics. In the literature, researchers have investigated stylometric features or characteristics for authorship attribution of Twitter data [1], analysis of scientific articles [2], finding the gender and age of bloggers [3], linguistic forensics [4] etc. It is evident that there could be many possible methodologies to attribute the authorship of documents, emails or stories [5-8]. Therefore, a firm dataset is required to train a system properly for such applications. In Bangla computing, or Bangla language processing, the key challenge is the lack of ground-truth datasets to train and test proposed methods. In this paper, to the best of our knowledge, we present the very first Bangla stylometric dataset, containing 2824 Bangla lyrics of 211 lyricists. In this paper, we use the terms author and lyricist interchangeably. The rest of the paper is divided into six sections. The literature review is presented in section II, and the attributes of the dataset are described in section III. The data collection procedure, a statistical analysis of the dataset and applications of the dataset are presented in sections IV, V and VI, respectively. Finally, concluding statements are in section VII.

II.
LITERATURE REVIEW

This section reviews the related work performed by researchers in the Bangla computing area and associated areas. This research area has attracted researchers from Bangla-speaking regions, especially Bangladesh and West Bengal, India. The Million Song Dataset (MSD) [17] is an English song dataset containing one million songs; it is a cluster of complementary datasets covering cover songs, lyrics, song-level tags, user data, genre labels etc. musiXmatch1 is the official lyrics dataset of the MSD, containing lyrics for 237,662 tracks. Similar datasets exist for English songs, but none are available for Bangla songs. Song lyrics could be used for many research topics such as emotion classification [11-13, 16], mood classification [14], semantic analysis [15] etc.

1 https://labrosa.ee.columbia.edu/millionsong/musixmatch

International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018. 978-1-5386-8207-4/18/$31.00 ©2018 IEEE

A. Jamdar et al. [13] proposed a method based on lyrical and audio features to detect the emotion of a song, applying ANEW [19] and WordNet [18] knowledge to associate the linguistic features extracted from lyrics; a KNN algorithm with weighted and stepwise threshold reduction is applied for the classification task. R. Malheiro et al. [16] created a song dataset containing 180 song lyrics</s>
<s>according to Russell's emotion model. They extracted features complemented by stylistic, structural and semantic features to identify the arousal and valence categories of each song. In [16], regression analysis and different criteria-wise classifications are also applied. X. Hu et al. [14] proposed a text mining method to classify music mood, using a proposed ground-truth database of 21,000 English songs, of which only 8784 have lyrics; the method applies WordNet-Affect [20], a linguistic resource, to filter the affective meanings of the tags. It uses Bag-of-Words (BoW), Part-of-Speech (POS) and function words as features and SVM as the classifier. B. Logan et al. [15] performed a semantic analysis of song lyrics on the publicly available uspop2002 dataset [21], applying Probabilistic Latent Semantic Analysis (PLSA); that paper also covers artist similarity, acoustic similarity and topic modeling. Natural language processing (NLP) could also be applied to lyrics [22] collected from Lyrics.com2 and Lyrics4u.com3; Mahedero et al. [22] proposed language identification, structure extraction and thematic categorization methods. In this paper, we propose the very first Bangla stylometric dataset, 'BanglaMusicStylo', which could be used for many further applications including authorship attribution, linguistic forensics etc. This paper could be considered the start of a journey towards exploring the research possibilities of Bangla stylometric analysis.

III. DATASET ATTRIBUTES

The BanglaMusicStylo dataset stores the lyrics of 2824 Bangla songs. In this collection, we have tried to cover the most popular genres of Bangla music. Some attributes of the dataset are as follows.
- Different authors' song lyrics are kept in separate folders.
- Different songs of the same author are kept in the same folder.
- Different song lyrics are stored in the 'Siyam Rupali' Bangla font in Microsoft .docx file format.

Using simple file reader methods in Java or any object-oriented programming language, it is possible to read the individual lines of the files for further text processing.

Fig. 1. Author folders having songs inside.

Fig. 1 illustrates some of the author folders and a snippet of the dataset. Fig. 2 shows the song files, named in the "songID_songTitle.docx" format. Each file contains the lyrics of a song; the songs shown are written by the national poet of Bangladesh, Kazi Nazrul Islam.

Fig. 2. List of songs of Kazi Nazrul Islam.

Each file contains lyrics in text format, written in the Siyam Rupali Bangla font. The lyrics also contain the notations needed to help singers sing the song properly, such as the number of repetitions and the specification of song sections (like অন্তরা). In this dataset, we have collected songs from different genres of Bangladeshi music. We tried to collect song lyrics from each category of religious, ethnic and traditional music. Fig. 3 shows an example of song lyrics by Rabindranath Tagore, the writer of the national anthem of Bangladesh.

2 https://www.lyrics.com/
3 http://lyrics4u.com/

Fig. 3. Snippet of a song authored by Rabindranath Tagore.

Most of the songs of the proposed dataset</s>
<s>could be classified into the genres listed in Table I.

TABLE I. BANGLADESHI MUSIC GENRES
- Classical: Classical music is based on modes called rags. Bangla classical music is adopted from various versions of Hindustani classical forms.
- Folk: This genre is distinguished by simple musical instruments and words. It has evolved from traditional cultures.
- Baul: The most commonly known category of Bangladeshi folk songs. It uses simple words to express songs with deeper meanings. These songs are performed with very few musical instruments, such as the 'aktara' (one-string instrument) or 'dotara' (two-string instrument), supporting the main vocal.
- Adhunik or Modern Music: Contemporary songs are generalized as adhunik or modern music. Nowadays, new singers mainly focus on producing songs of this genre.
- Rabindra Sangeet: Also known as "Tagore songs", written and composed by Rabindranath Tagore, the Nobel Prize-winning Bengali writer who wrote the national anthems of Bangladesh as well as India. People have liked the overall tone, rhythm and lyrics of these songs for many decades.
- Nazrul Geeti: Songs written and composed by Kazi Nazrul Islam, a famous Bengali poet and the national poet of Bangladesh. He is especially known for his revolutionary poems, which have been converted into songs.
- Film music: The film industries of Bangladesh supported music by according reverence to classical music while utilizing western orchestration to support melodies. Film music is placed in films to turn a monotonous story line into an interesting one. This genre includes songs with romantic, sad, happy and angry emotions.
- Rock music: Bangladeshi rock music was introduced in 1972 by the singer, songwriter and composer Nasir Ahmed Apu. Influenced by western music, young Bangladeshi artists got involved in this trend and produced some of the most popular songs.
- Fusion music: Traditional music with western instruments, to revitalize and re-popularize Bengali music. This genre has recently become popular with young listeners.

IV. DATA COLLECTION PROCEDURE

The BanglaMusicStylo dataset contains 2824 Bangla song lyrics. To collect these lyrics, we used meta-searching techniques: keyword-based searching on the World Wide Web (WWW). We picked numerous keywords to search for lyrics on websites, blogs, and audio or video sharing sites. The keywords were chosen from different categories such as lyricists' names, song titles, song genres and emotion words, but our main focus was to collect as many song lyrics as possible for each lyricist. In this dataset we have collected the songs of 211 lyricists; for 38 of them, more than 10 songs each were collected. The keywords used as search criteria are shown in Table II. We used both Bangla and English keywords to collect the data.

TABLE II. KEYWORDS USED FOR META SEARCHING
- Name of the Lyricist: Rabindranath Tagore, Kazi Nazrul Islam, Gazi Mazharul Anowar, Lalon, Gauri Prasanna Majumder, Hason Raja etc.
- Title of the songs
- Genre class: Bangla Rock song, Bangla Band song, Rabindra Sangeet, Nazrul Geeti, Bangla Folk song list, Hason Raja songs etc.
- Emotional</s>
<s>words: Bangla sad songs, Bangla celebration songs, Bangla happy songs etc.

Apart from the keyword-based searches, we also collected lyrics from album covers and the comment sections of YouTube. Many listeners post the song lyrics in the comments under YouTube videos if they like the song; we gathered some lyrics, mostly modern music, from there as well.

V. STATISTICAL ANALYSIS OF THE DATASET

In this section, we illustrate the statistical perspective of BanglaMusicStylo. Table III shows the properties of the dataset and Table IV gives some insight into the number of songs per author. The average number of songs per author and the more than one thousand words per author should be sufficient to train and test a machine learning system applying text mining algorithms.

TABLE III. PROPERTIES OF THE DATASET
- Total Number of Songs: 2824
- Total Number of Words: 224,342
- Avg. Songs per Author: 13.38388626
- Avg. Words per Author: 1063.232227

In Table IV, a snippet of the author-wise numbers of songs and words is listed. The highest number of songs (856) and the second highest (620) were collected for Rabindranath Tagore and Kazi Nazrul Islam, the two most influential lyricists in Bangla music. The dataset contains 20-plus songs for 15 lyricists and 10-plus songs for more than a hundred lyricists. More than a thousand words of lyrics were collected for 100 lyricists.

TABLE IV. SNIPPET OF THE AUTHOR-WISE DATA STATISTICS
- Rabindranath Tagore: 856 songs, 52784 words
- Kazi Nazrul Islam: 620 songs, 43246 words
- Gazi Mazharul Anwar: 82 songs, 7282 words
- Pulak Bandyopadhyay: 66 songs, 5955 words
- Gauri Prasanna Majumder: 62 songs, 5080 words
- Lalon Shah: 38 songs, 3108 words
- Latiful Islam Shibli: 30 songs, 3052 words
- Shibdas Bandyopadhyay: 24 songs, 1773 words
- Kabir Bakul: 22 songs, 2483 words
- Mohammad Rafiquzzaman: 22 songs, 1559 words

VI. APPLICATIONS OF THE DATASET

In this section, we describe applications of the proposed dataset. The dataset could be applied to the following challenging tasks, but is not limited to them.

A.
Authorship Attribution

Authorship attribution is the most common approach to identifying the author of a written document or song [1]. It has been applied in many contexts, such as documents, stories, music lyrics and even social media text such as Twitter data [1]. The proposed dataset could be used for identifying the authors of Bangla music lyrics.

B. Linguistic Forensics

Linguistic forensics is a long-established method for recognizing the gender, age or other characteristics of an author [4]. It involves finding sufficient features in the text with text mining tools and classifying those features into the gender or age of the author. With this dataset we have derived the gender of the lyricists; therefore, the dataset could be used for this purpose.

C. Bangla Music Genre Classification

Ashfaqur et al. [9] proposed a Bangla music genre classification method based on a learning and prediction approach for classifying four genres of songs, namely Rabindra Sangeet, folk songs, modern songs and pop music. Our dataset could be used for a similar purpose, as we have collected song lyrics of the nine different genres mentioned in Table I.

D.</s>
<s>Vandalism Detection

Vandalism detection is considered a one-class classification problem applied to textual data [10]. Character-level, word-level and sentence-level features could be used as content features for the classification task. Our proposed dataset could be used for vandalism detection, as it contains the lyrics as text.

E. Emotion-based Classification

Emotion-based text classification [11] is the challenging task of classifying text content based on an understanding of the emotion cues within it. Finding emotion in US song lyrics is presented in [12], based on linguistic markers of psychological traits and emotions over time. Similar emotion-based classification could be adopted for Bangla song lyrics using our proposed dataset.

VII. CONCLUSION

This paper proposed a stylometric dataset of Bangla song lyrics for the analysis of stylometric features. To the best of our knowledge, this is the very first Bangla song lyrics dataset, and it could be used in many kinds of stylometric analysis. As Bangla computing extends its branches in many dimensions, stylometric analysis of Bangla lyricists is a considerable task to be performed. This paper investigates the possibilities for future research in Bangla stylometry.

REFERENCES
[1] M. Bhargava, P. Mehndiratta, K. Asawa, "Stylometric analysis for authorship attribution on Twitter", In Big Data Analytics, vol. 8302 of Lecture Notes in Computer Science, pp. 37-47, Springer International Publishing, 2013.
[2] S. Bergsma, M. Post, D. Yarowsky, "Stylometric analysis of scientific articles", North American Chapter of the Association for Computational Linguistics (NAACL), 2012.
[3] S. Goswami, S. Sarkar, M. Rustagi, "Stylometric analysis of bloggers' age and gender", In Proceedings of the 3rd International AAAI Conference on Weblogs and Social Media (ICWSM), San Jose, USA, May 17-20, 2009.
[4] M. T.
Turell, "The use of textual, grammatical and sociolinguistic evidence in forensic text comparison", International Journal of Speech, Language and the Law, vol. 17, issue 2, 2010.
[5] J. Diederich, J. Kindermann, E. Leopold, G. Paass, "Authorship attribution with Support Vector Machines", Applied Intelligence, vol. 19, pp. 109-123, 2000.
[6] H. Craig, "Authorial attribution and computational stylistics: If you can tell authors apart, have you learned anything about them?", Literary and Linguistic Computing, vol. 14, issue 1, pp. 103-113, 1999.
[7] A. Gray, P. Sallis, S. MacDonell, "Software forensics: Extending authorship analysis techniques to computer programs", 3rd biannual conference of the International Association of Forensic Linguists (IAFL), 1997.
[8] D. Lowe, R. Matthews, "Shakespeare vs. Fletcher: A stylometric analysis by radial basis functions", Computers and the Humanities, vol. 29, pp. 449-461, 1995.
[9] A. Rahman, "Bangla Music Genre Classification", Journal of Multidisciplinary Computational Intelligence Techniques: Application in Business, Engineering and Medicine, pp. 15, 2012.
[10] S. Heindorf, M. Potthast, B. Stein, G. Engels, "Vandalism Detection in Wikidata", Conference on Information and Knowledge Management (CIKM), Indianapolis, USA, October 24-28, 2016.
[11] T. Danisman, A. Alpkocak, "Feeler: Emotion classification of text using vector space model", In AISB 2008 Convention, Communication, Interaction and Social Intelligence, vol. 2, pp. 53-59, Aberdeen, Scotland, 2008.
[12] C. N. DeWall, R. S. Pond, W. K. Campbell, J. M. Twenge, "Tuning in</s>
<s>to psychological change: Linguistic markers of psychological traits and emotions over time in popular U.S. song lyrics.”, Psychology of Aesthetics, Creativity, and the Arts, vol. 5, pp. 200-207, 2011. [13] A. Jamdar, J. Abraham, K. Khanna, R. Dubey, "Emotion Analysis of Songs based on lyrical and audio features", International Journal of Artificial Intelligence & Applications (IJAIA), vol. 6, issue. 3, May 2015. [14] X. Hu, J. S. Downie, A. F. Ehmann, "Lyric text mining in music mood classification", 10th International Society for Music Retrival Conference (ISMIR), 2009. [15] B. Logan, A. Kositsky, P. Moreno, "Semantic Analysis of Song Lyrics", IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 27-30 June, 2004. [16] R. Malheiro, R. panda, P. Gomes, R. P. Paiva, "Emotionally-relevant features for classification and regression of music lyrics", IEEE Transactions of Journal Affective Computing, 2016. [17] T. B. Mahieux, D. P. W. Ellis, B. Whitman, P. Lamere, "The million song dataset", International Society for Music Information Retrieval, 2011. [18] G. A. Miller, “Wordnet: A lexical database for English.”, Community of ACM, vol. 38, issue. 11, 1995. [19] M. M. Bradley, P. J. Lang, “Affective norms for English words (ANEW): Instruction manual and affective ratings”, (Tech. Rep. No. C-1). Gainesville, FL: University of Florida, The Center for Research in Psychophysiology, 1999. [20] C. Strapparava, A. Valitutti, “WordNet-Affect: an Affective Extension of WordNet,” Proceedings of the International Conference on Language Resources and Evaluation (LREC), pp. 1083-1086, 2004. [21] A. Berenzweig, B. Logan, D.P.W. Ellis, B. Whitman, “A large-scale evaluation of acoustic and subjective music similarity measures.”, In Proceedings International Conference on Music Information Retrieval (ISMIR), 2003. [22] J. P. G. 
Mahedero, "Natural language processing of lyrics", in Proceedings of the 13th Annual ACM International Conference on Multimedia, New York, USA, 2005.
Article

Bangla DeConverter for Extraction of Bangla Text from Universal Networking Language

Md. Nawab Yousuf Ali 1, Md. Lizur Rahman 1,* and Golam Sorwar 2
1 Department of Computer Science and Engineering, East West University, Dhaka 1212, Bangladesh; nawab@ewubd.edu
2 School of Business and Tourism, Southern Cross University, Lismore, QLD 4225, Australia; Golam.Sorwar@scu.edu.au
* Correspondence: lizur.sky@gmail.com

Received: 25 August 2019; Accepted: 12 October 2019; Published: 21 October 2019

Abstract: The people of Bangladesh and of two states in India (Tripura and West Bengal), about 230 million people in total, use Bangla as their first language. However, very few resources and tools are available for this language. This paper presents a Bangla DeConverter to extract Bangla texts from Universal Networking Language (UNL). It explains and illustrates the different phases of the proposed Bangla DeConverter. The syntactic linearization, the implementation of the results of the proposed Bangla DeConverter, and the extraction of a Bangla sentence from UNL expressions are presented in this paper. The Bangla DeConverter has been tested on UNL expressions of 300 Bangla sentences using a Russian and English Language Server. The proposed system generates 90% syntactically and semantically correct Bangla sentences, with a UNL Bilingual Evaluation Understudy (BLEU) score of 0.76.

Keywords: Bangla DeConverter; UNL; UNL expression; generation rules; syntactic linearization

1. Introduction

The Universal Networking Language (UNL) [1] is a digital language in the form of a network of semantic words that acts as an intermediate representation to express and interchange all types of knowledge and information. EnConverter (EnCo) and DeConverter (DeCo) are two vital components of UNL. EnCo changes a native language text into UNL expressions, and DeCo transforms them into a target language.
Therefore, a UNL system bridges the gap between languages around the world. This paper develops a Bangla DeConverter for producing Bangla texts from UNL. Syntactic linearization, the process of ascertaining a proper order of lexicons/words in generated texts, plays a significant role in the quality of the generated output.

Unlike English, Bangla is a free word order language known for its affluent semantic and morphological features, similar to Hindi and Punjabi. English is a fixed word order language that follows the subject, verb, and object (SVO) pattern. While the Bangla language is patterned with a subject, object, and verb (SOV) structure, it can also be arranged with VSO and OSV structures.

For conversion of a source language to UNL and from UNL to a target language, EnCo and DeCo tools need to be developed. These tools execute their tasks based on a word dictionary and a set of analysis and generation rules for a given language [1]. UNL provides knowledge and information based on the structure of universal words (UWs), attributes of UNL, and relations of UNL. The role of each word is represented by the concepts of UWs and UNL relations. UNL attributes represent the subjective meaning of a sentence [1]. For example, consider the following UNL expressions, shown in (1), for the sentence 'The color of the screen has changed from red to green' and the corresponding UNL graph shown in Figure 1.

Information 2019, 10, 324; doi:10.3390/info10100324
Here, obj indicates the object relation, src the source (initial state) relation, and gol the goal (final state) relation, respectively.

{unl}
obj(change(icl>occur,src>thing,obj>thing,gol>thing).@entry.@present.@complete,colour(icl>kind>thing,equ>color).@def)
obj(colour(icl>kind>thing,equ>color).@def,screen(icl>surface>thing).@def)
src(change(icl>occur,src>thing,obj>thing,gol>thing).@entry.@present.@complete,red(icl>adj,equ>crimson))
gol(change(icl>occur,src>thing,obj>thing,gol>thing).@entry.@present.@complete,green(icl>adj))
{/unl}
(1)
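Mechanically, each relation line in an expression like (1) can be split into its relation name and its two universal-word arguments. The short Python sketch below is our illustration (the function name and tuple representation are not part of the UNL specification); it finds the top-level comma by tracking parenthesis depth:

```python
def parse_unl_relation(line):
    """Split one UNL relation line, e.g. 'obj(head_uw,dependent_uw)',
    into (relation, head_uw, dependent_uw) by tracking parenthesis depth."""
    rel, rest = line.split("(", 1)
    assert rest.endswith(")")
    body = rest[:-1]
    depth = 0
    for i, ch in enumerate(body):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            # first comma at depth 0 separates the two universal words
            return rel.strip(), body[:i].strip(), body[i + 1:].strip()
    raise ValueError("no top-level comma in: " + line)
```

Applied to the agt line of a later example, `parse_unl_relation("agt(sing(icl>do).@entry.@past,girl(icl>child>person,ant>boy))")` yields the triple `("agt", "sing(icl>do).@entry.@past", "girl(icl>child>person,ant>boy)")`.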
Figure 1. Universal Networking Language (UNL) expression and UNL graph.

In the UNL system [1], the natural language communication process is administered by two tools: EnConverter (EnCo) [2,3] and DeConverter (DeCo) [3,4]. EnCo interprets a natural language text into UNL expressions, and DeCo translates UNL expressions into well-formed native language text. Each tool is connected with a word dictionary of the natural language and a set of language-specific analysis and generation rules. Our paper focuses on extracting Bangla texts from UNL. Thus, we attempted to develop a set of generation rules for attaining our goals.

The paper is organized as follows. Section 2 presents a literature review of related areas. The architecture of the Bangla DeConverter is discussed in Section 3. Different phases of the Bangla DeConverter, along with syntactic linearization issues, are detailed in Sections 4 and 5. Syntactic linearization of simple and compound sentences is demonstrated in Section 6. In Section 7, we illustrate
experimental results and discussions by extracting a sentence from a UNL expression using some generation rules. Finally, Section 8 includes a summary of the paper with some concluding remarks.

2. Related Works

The structure of a UNL–Russian DeCo has been presented by [5]. A DeCo has been developed by [6] for extracting Brazilian Portuguese text from UNL expressions. A DeCo for Marathi and Hindi has been proposed by [7]. A DeCo for converting UNL to Punjabi has been presented by [8]. A system 'ARIANE-G5' has been proposed by [9]. A DeCo has been proposed by [10] for converting UNL to Chinese; the authors addressed the drawbacks of the DeCo developed by the UNL center. An Arabic DeCo has been introduced by [11], involving the mapping of morphological analysis, lexical generation, word order, and the relations of the words for semantic meanings. In [12], the authors have designed the architecture of a Nepali DeCo, highlighting two major modules, morphological generation and syntactic linearization. A DeCo for Hindi text, 'HinD', has been presented by [13]. The authors indicated the complex rule format for writing analysis and generation rules, the non-availability of source code, and the slow speed of the DeCo tools provided by the UNDL Foundation. Their system includes word selection, morphological analysis of the lexicons, word insertion, and syntactic linearization. So far, no attempt has been made at designing an architecture for a Bangla DeCo. These concerns motivated us to develop a Bangla DeConverter.

3. Architecture of Bangla DeConverter

Figure 2 shows the architecture of the Bangla DeCo. The structure of a Bangla sentence is similar to that of Hindi and Punjabi sentences [8]. The architecture is based on language-dependent and language-independent components used during the text extraction process. A parser is a tool that parses the UNL expressions and, based on the output, creates a node.
During the selection of the language unit, the word unit starts from the words expressing the UNL input, drawing on the universal word (UW) unit of Bangla keywords and their properties. After that, morphological analysis is performed in the morphology phase based on the Bangla language. In this phase, the Bangla root words are changed by adding inflexions/morphemes to obtain the full meaning of the words. In the case maker insertion phase, a case maker, e.g., ইেতিছ, ইব, ই, is inserted into the morphed word. These case makers are integrated into the extracted sentence. Finally, to determine the lexicon order of the extracted Bangla sentence, syntactic linearization is used to match the output with a native Bangla sentence [13,14].

Figure 2. Architecture of Bangla DeConverter for Universal Networking Language.

The execution procedure of the Bangla DeCo is depicted with a given Bangla sentence as follows.

Bangla sentence: বািলকা ট মে গান গেয়িছল।
Transliterated sentence: Balikati monche gan geyechhilo.
Equivalent English sentence: The girl sang a song on a stage.

The UNL expression for the above sentence is presented in (2).

{unl}
agt(sing(icl>do).@entry.@past,girl(icl>child>person,ant>boy))
obj(sing(icl>do).@entry.@past,song(icl>musical_composition))
plc(sing(icl>do).@entry.@past,stage(icl>place))
{/unl}
(2)

The Bangla DeConverter converts the above UNL expression (2) into Bangla text. This expression is the input to the Bangla DeCo. The parser verifies the input expression for faults and creates the UNL graph, or node-net, as illustrated in Figure 3.

Figure 3. UNL graph generated by the parser for the input UNL expression.

The morpheme selection phase selects the node list for the UWs provided in the input UNL expression, with the corresponding Bangla entries. The settled node list is set out in (3).

Node1: Bangla word: 'গাওয়া', pronounced as gawa
UW: sing(icl>do,agt>living_thing,obj>song.@entry.@past)
Node2: Bangla word: 'বািলকা', pronounced as balika
UW: girl(icl>child>person,ant>boy)
Node3: Bangla word: 'গান', pronounced as gan
UW: song(icl>musical_composition)
Node4: Bangla word: 'ম ', pronounced as moncho
UW: stage(icl>place)
(3)

In
the morphology stage, morphological analysis is performed by applying morphological rules to amend the Bangla words in the nodes, based on the UNL attributes in the input UNL expressions and the lexicons retrieved from the word dictionary. The morphological analysis is performed on the nodes given in (3) using morphological rules. The processed nodes are provided in (4) after evaluation.

Node1: Bangla word: ' গেয়িছল', pronounced as geyechhilo
UW: sing(icl>do,agt>living_thing,obj>song.@entry.@past)
Node2: Bangla word: 'বািলকা', pronounced as balika
UW: girl(icl>child>person,ant>boy)
Node3: Bangla word: 'গান', pronounced as gan
UW: song(icl>musical_composition)
Node4: Bangla word: 'ম ', pronounced as moncho
UW: stage(icl>place)
(4)

Of the nodes given in (4), morphological analysis has been performed on the verbal noun 'গাওয়া'. Firstly, it has been changed from the main verb root 'গা' (ga) to the alternative verb root ' গ' (ge). Then the morphological rule is applied to integrate the verb root ' গ' (ge) and the verbal inflexion ' য়িছল' (echhilo) to form ' গেয়িছল' (geyechhilo). That is, in this phase, sing (gawa) is changed to its past form 'sang' by a morphology rule. Next, the case maker is inserted into the morphed lexicon in the case maker insertion phase. The nodes processed during the case maker insertion phase are shown in (5).

Node1: Bangla word: ' গেয়িছল', pronounced as geyechhilo
UW: sing(icl>do,agt>living_thing,obj>song.@entry.@past)
Node2: Bangla word: 'বািলকা ট', pronounced as balikati
UW: girl(icl>child>person,ant>boy)
Node3: Bangla word: 'গান', pronounced as gan
UW: song(icl>musical_composition)
Node4: Bangla word: 'মে ', pronounced
as monche
UW: stage(icl>place)
(5)

Here, case makers ' ট' (ti) and 'এ' (e) are combined with Node2 and Node4, respectively, based on the case maker insertion rule. The string of the processed nodes is shown in (6), and the Bangla text produced by this sequence is shown in (7).

Node2 Node4 Node3 Node1 (6)

বািলকা ট মে গান গেয়িছল। (7)
Balikati monche gan geyechhilo.

From the extracted Bangla sentence, it is apparent that the proposed system can accurately translate a UNL expression into a Bangla sentence.

4. Phases of Bangla DeConverter

A Bangla sentence is produced by the Bangla DeConverter from UNL expressions through the following phases.

4.1. Parser Phase

The parser phase is the first phase of the Bangla DeCo. It parses the input UNL expression and reports any errors in the expression. If the input expression is free from error, it then builds a semantic network, called a node-net structure, for the expression. The node-net is also known as the UNL graph, consisting of nodes and edges. The nodes of the UNL graph represent a concept in the form of UNL universal words (UWs). The edges in the node-net represent the UNL binary relations between two nodes. The edges in the UNL graph (Figure 3) are directed from the parent node to the child node. The system also allows access from children to their parents for backtracking purposes.

4.2. Morpheme Selection Phase

Morpheme selection is the process of choosing Bangla lexicons for the universal words (UWs) in the UNL expressions given as input. During morpheme selection, UWs are searched in the word dictionary along with the constraints specified in the expression. A Bangla word lexicon containing around 10,000 entries has been developed.
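As a rough illustration of this lookup, the sketch below maps bare UWs to Bangla headwords after stripping the '.@' attribute suffixes. The four entries are a tiny illustrative sample mirroring node list (3), romanized for readability; they are our assumption, not the actual 10,000-entry lexicon:

```python
# Illustrative mini-dictionary: UW -> romanized Bangla headword.
DICTIONARY = {
    "sing(icl>do)": "gawa",
    "girl(icl>child>person,ant>boy)": "balika",
    "song(icl>musical_composition)": "gan",
    "stage(icl>place)": "moncho",
}

def select_morpheme(uw):
    """Strip UNL attributes (the '.@...' suffixes) and look up the bare UW."""
    bare = uw.split(".@")[0]
    return DICTIONARY.get(bare)

print(select_morpheme("sing(icl>do).@entry.@past"))  # -> gawa
```

Constraints such as `agt>living_thing` would, in the real system, further filter candidate entries; the sketch omits that step.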
The dictionary consists of the Bangla root words as the headword (HW) of the UW and a set of grammatical, morphological, syntactic, and semantic attributes as its entries. The format of some generation rules used for the insertion of selection words/morphemes from the Bangla word dictionary is given below.

Format 1: Subjective pronoun insertion rules

where HPR indicates a human pronoun, SUB is a topic, agt is an agent relationship, VR is a verb root, VEN is a vowel-ended root, ^AL is an alternative root, P is an individual, and p is a temporary attribute for an individual to avoid recursive activities. Example of a rule:

:"HPR,P2,SUB,^@respect,@contempt,^RES,NGL::agt:"{RT,VEN,^AL,#AGT,^p2:p2::}P9;

Format 2: Noun insertion rule before verb root

:"N,[^]@pl,^SUB:SUB:agt:"{RT,VEN,#AGT,^p3,[^]sg|pl:p3,sg|pl::}P7;

Example:
# :"N,^@pl,^SUB:SUB:agt:"{VR,VEN,#AGT,^p3,^sg:p3,sg::}P7;

4.3. Morphological Analysis Phase

Morphology is the field of linguistics that studies the structure of words. It focuses on the patterns of word formation within and across languages and attempts to formulate rules that model the knowledge of the speakers of those languages.

A morpheme is defined as the minimal meaningful unit of a language. For example, in a word like 'independently', the morphemes are in-, depend, -ent, and -ly. In this case, the word 'depend' is the root and the other morphemes are derivational affixes. Consider the following Bangla sentence.

Bangla sentence: আিম ট খাইেতিছ
Pronounced as: Ami ruti khaitechhi
English meaning: I am eating bread.

The verb of the sentence is 'খাইেতিছ' (eating). The verb is the combination of two units: the root 'খা' and the morpheme 'ইেতিছ'. The construction of the verb 'খাইেতিছ' is depicted below.

Root
+ Verbal Inflexion = Verb
খা + ইেতিছ = খাইেতিছ

The root 'খা' (eat) and the verbal inflexion (morpheme) 'ইেতিছ' are headwords of dictionary entries as follows.

[খা] {} "eat(icl>consume>do)" (|R, VR, VEN, VEG1)
[ইেতিছ] {} " " (VI, VEN, P1, PRS)

where R stands for root, VR for verb root, VEN for vowel-ended, VEG for vowel group 1, VI for verbal inflexion (morpheme), P1 for the first person, and PRS for the present tense.

The following morphological rule is used to combine a verb root and a verbal inflexion to form the verb khaitechhi:

+ {R, VR, VEN:::} {VI:+V,-VI::}

The rule states that the verb root 'খা' (kha) is inserted into the left analysis window (LAW) and the verbal inflexion 'ইেতিছ' (itechhi) is inserted into the right analysis window (RAW) of the DeConverter to form the verb 'খাইেতিছ' (khaitechhi). Therefore, the application of morphological rules changes the headword's root and verbal inflexion into a verb.

These rules are developed according to the analysis of Bangla morphology. Three categories of morphology are defined for the conversion of UNL expressions into equivalent Bangla sentences [15]: attribute level resolution morphology, relation level resolution morphology, and word level morphology.

The first level of morphology forms Bangla based on the UNL attributes appended to a UW and its specifications recovered from dictionary entries. The root word fetched from the word dictionary is changed based on person, number, gender, tense, aspect, and modality (PNGTAM), and on vowel- and consonant-ended roots.

The relation level resolution morphology handles the postpositions in Bangla, or prepositions in English, as the prepositions in the latter resemble the postpositions in Bangla. These postpositions connect to nouns, pronouns, verbs, and other parts of a sentence. For example,

verb + suffix = verbal noun
চল + অ = চল

In this case, most of the UNL relations set up postpositions or case makers or function words between the parent and child nodes during the text extraction process.
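The root-plus-inflexion combination above can be sketched as a lookup-and-concatenate step. The mini-lexicon and function below are our illustration: the attribute names follow the dictionary entries shown above, but the rule engine itself is heavily simplified.

```python
# Illustrative entries mirroring the dictionary format above:
# the root is vowel-ended (VEN); the inflexion is a verbal inflexion (VI).
LEXICON = {
    "খা": {"cat": "R", "attrs": {"VR", "VEN", "VEG1"}, "uw": "eat(icl>consume>do)"},
    "ইেতিছ": {"cat": "VI", "attrs": {"VEN", "P1", "PRS"}},
}

def combine(root, inflexion):
    """Apply the rule '+ {R,VR,VEN:::}{VI:+V,-VI::}': a vowel-ended verb root
    in the left window plus a verbal inflexion in the right window yields a verb."""
    r, vi = LEXICON[root], LEXICON[inflexion]
    if r["cat"] == "R" and "VEN" in r["attrs"] and vi["cat"] == "VI":
        return root + inflexion  # খা + ইেতিছ concatenate into the inflected verb
    raise ValueError("rule does not fire")

print(combine("খা", "ইেতিছ"))  # -> খাইেতিছ
```

The real DeCo additionally handles alternative roots (e.g., গা becoming গ before য়িছল), which this sketch does not model.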
The generation of target language texts/words depends on the UNL relations and the conditions entailed by the specifications of the child and parent nodes of a UNL relation. With UNL relation and attribute level morphology, the Bangla DeConverter generates a sentence close to its original form.

4.4. Case Maker Insertion Phase

The case maker insertion phase inserts case makers, such as conjunctions and postpositions in Bangla, e.g., 'হইেত' (from), 'র' (of), ' পিরেয়' (over), into the words produced at the morphology phase. The insertion of case makers in the rendered output depends on the features of the child and parent nodes in a relation [13]. For each of the 46 UNL relations [1], various types of case makers are used based on the grammatical structures of the target language [16]. The rule prepared for the insertion of case makers includes nine columns. The explanation of each column is as follows.

1. The first column (UNL relation name): This column stores the name of the UNL relation for which the rule is being made, for example, agt (agent relation), obj (object relation), etc.
2. The second column (the case maker preceding the parent node): This column stores the case maker that can be inputted before the parent node of a given UNL relation in the produced output.
3. The third
column (the case maker following the parent node): This column stores the case maker that can be inputted after the parent node of a given UNL relation in the produced output.
4. The fourth column (the case maker preceding the child node): This column stores the case maker that can be inputted before the child node of a given UNL relation in the produced output.
5. The fifth column (the case maker following the child node): This column stores the case maker that can be inputted after the child node of a given UNL relation in the produced output.
6. The sixth column (positive conditions for the parent node): This column stores attributes that must be declared on the parent node for firing the rule.
7. The seventh column (negative conditions for the parent node): This column stores attributes that must not be declared on the parent node for firing the rule.
8. The eighth column (positive conditions for the child node): This column stores attributes that must be declared on the child node for firing the rule.
9. The ninth column (negative conditions for the child node): This column stores attributes that must not be asserted on the child node for firing the rule.

Formats of some generation rules used for insertion of the case makers from the Bangla word dictionary are given below.

Format 1: Verbal inflexion insertion rules for the first person

:{RT,VEN,p(x),[^]@present,[^]@progress,[^]@complete,^vi:vi::}"[[VI]],VI,VEN,P(x),PRE|PAS|FUT,[^]PRG,[^]CMPL:::"P7;

Example:
:{RT,VEN,p1,@present,^@progress,^@complete,^vi:vi::}"[[VI]],VI,VEN,P1,PRE,^PRG,^CMPL:::"P9;

Format 2: Verbal inflexion insertion rules for the second person

:{RT,VEN,p(x),[^]@present,[^]@progress,[^]@complete,[^]res,[^]ngl,^vi:vi::}"[[VI]],VI,VEN,P(x),PRE|PAS|FUT,[^]PRG,[^]CMPL,[^]RES,[^]NGL:::"P7;

Example:
:{RT,VEN,2p,@present,^@progress,^@complete,^hon,^ngl,^kbiv:kbiv::}"[[VI]],VI,VEN,P2,PRE,^PRG,^CMPL,^RES,^NGL:::"P7;

Format 3: Verbal inflexion insertion rules for the third
person

:{RT,VEN,p(x),[^]@present,[^]@progress,[^]@complete,^res,|^ngl,^vi:vi::}"[[VI]],VI,VEN,P(x),PRE|PAS|FUT,[^]PRG,[^]CMPL,^RES,^NGL:::"P7;

Example:
:{RT,VEN,p3,@present,^@progress,^@complete,^resn,^vi:vi::}"[[VI]],VI,VEN,P3,PRE,^PRG,^CMPL,^RES:::"P7;

4.5. Syntactic Linearization Phase

This is the process of linearizing the words/morphemes of the sentence in the semantic hypergraph. As such, it determines the word order of the produced texts. Syntactic linearization handles the systematic organization of lexicons (words/morphemes) in the generated output to match the target language sentence. It allocates relative positions to different lexicons based on the bond they keep with the headwords of a sentence [15]. The structural difference between Bangla (subject–object–verb, SOV) and English (subject–verb–object, SVO) necessitates the syntactic linearization stage in the Bangla DeCo.

5. Issues in Syntactic Linearization

Two significant issues in syntactic linearization are the parent–child relation and the matrix-based priority of relations.

5.1. Parent–Child Relation

A binary relation between two words is represented as rel(uw1,uw2) in UNL, where uw1 acts as the parent and uw2 as the child. The system manages whether the parent should be stated before or after the child in the generated output [17]. In most of the UNL relations, child nodes appear before their parent nodes in the produced output. To demonstrate this concept, we take a sentence, say, 'He is writing a letter'. The UNL expression of the sentence is given below.

{unl}
agt(write(icl>do,equ>compose).@entry.@present.@progress,he(icl>person))
obj(write(icl>do,equ>compose).@entry.@present.@progress,letter(icl>text).@indef)
{/unl}
(8)

In this expression, there are two UNL relations: agt(write, he), or agt(uw1,uw2), and obj(write, letter), or obj(uw1,uw2).
In the first relation, write (uw1) is the verb and he (uw2) is the subject; in the second relation, uw1 is the same verb and letter (uw2) is the object, a noun. Since Bangla is an SOV-type language, both children of these UNL relations, i.e., 'he(icl>person)' and 'letter(icl>text)', will be placed to the left of the verb 'write'. Matrix-based priority
will decide which child is to be placed first in the produced output.

5.2. Matrix-Based Priority of Relations

When a parent node has two or more children in UNL relations, a matrix-based priority of relations is necessary to decide the relative positions of the children with respect to each other in the produced output [8]. The relative positions of children in a sentence are decided in the proposed Bangla DeConverter using an M × M matrix. The matrix has 46 columns and 46 rows, corresponding to the 46 published UNL relations [1]. The matrix M = [mij], where i = 1, 2, ..., 46 and j = 1, 2, ..., 46. The elements of the matrix are 'L', denoting toward the left; 'R', denoting toward the right; and '-', denoting no action.

If mij = 'L', it indicates that when two children share the same parent, the child of the ith relation is positioned to the left of the child of the jth relation. Again, if mij = 'R', then the child of the ith relation is positioned to the right of the child of the jth relation sharing the common parent. If mij = '-', no action is to be taken, as it is not possible for the children of the ith and jth relations to share a common parent [16]. The following UNL graph illustrates the relationships of two nodes with the same parent.

In Figure 4, the child nodes are 'N1' and 'N2' and the parent node is 'N3'. The UNL binary relations between the three nodes 'N1', 'N2', and 'N3' are Ri(N3,N1) and Rj(N3,N2).
If we consider a Bangla sentence, say, আমি ভাত খাই ('ami vat khai', meaning 'I eat rice'), then, according to the graph in Figure 4, the first word of the sentence, আমি, will be node N1, the second word, ভাত, will be node N2, and the third word, খাই, will be node N3. According to the structure of the Bangla language, if 'N1' is placed to the left of 'N2' in the produced sentence, denoted by '(N1 L N2)', then the priority matrix given in Figure 5 should be followed for its syntactic linearization.

Figure 4. UNL graph of two nodes with the common parent.

     Ri  Rj
Ri   -   L
Rj   R   -

Figure 5. Representation of the matrix for the (N1 L N2) structure.

The precedence of a child in a binary relation relies on the frequency of 'L' in its row. If a UNL binary relation has all 'L' in its row, its child has the highest priority and is placed at the extreme left of the output. Similarly, if a UNL binary relation sharing the same parent has all 'R' in its row, its child has the lowest priority and is positioned at the extreme right of all child nodes in the produced output [14,15].

6. Syntactic Linearization of Simple and Compound Sentences

The syntactic linearization of simple and compound sentences is described in this section.

6.1. Syntactic Linearization of a Simple Sentence

Consider a simple sentence, say, 'She has earned 100 dollars'. The UNL expression of this sentence is given in (9).

{unl}
agt(earn(icl>do).@entry.@present.@complete,she(icl>person))
qua(dollar(icl>monetary_unit).@pl,100)
obj(earn(icl>do).@entry.@present.@complete,dollar(icl>monetary_unit>thing).@pl)
{/unl}
(9)

The UNL graph of the UNL expression in (9) is depicted in Figure 6.

Figure 6. UNL graph for the UNL expression (9).

After the morphological and case-marker insertion phases, the node list of the UNL expression is presented in (10).

Node1: Bangla word: আয় করা, pronounced 'aye kora'; UW: earn(icl>do,agt>person,obj>thing,ins>uw)
Node2: Bangla word: সে (মহিলা), pronounced 'se (mohila)'; UW: she(icl>person)
Node3: Bangla word: ডলার, pronounced 'dollar'; UW: dollar(icl>monetary_unit>thing)
Node4: Bangla word: ১০০, pronounced '100'; UW: 100
(10)

In Figure 6, the parent node 'earn(icl>do).@entry.@present.@complete' is the entry node, as it is the main predicate and carries the UNL attribute @entry.
The labels on the two edges denote UNL relations. The relation 'agt' has a higher priority than the 'obj' relation, as shown in Figure 6. Hence, the node 'She' will be traversed first. The priority matrix of the UNL relations 'agt' and 'obj' is given in Figure 7.

      agt  obj
agt   -    L
obj   R    -

Figure 7. Precedence matrix of UNL relations for 'agt' and 'obj'.

The node 'She' does not have any child, and its parent node 'earn' has been traversed.
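The node-by-node walk that follows can be summarized as a post-order traversal: a node's children are visited in matrix-priority order, and the node itself is emitted only after all of its children, which is what yields SOV order. A minimal Python sketch, with the graph shape and all names assumed by us for this example:

```python
# Post-order linearization sketch for the simple-sentence example.
# Children are listed in matrix-priority order ('agt' before 'obj'; the 'qua'
# modifier under its noun), and a parent is emitted after its children.
GRAPH = {                       # parent -> [(relation, child), ...]
    "earn": [("agt", "she"), ("obj", "dollar")],
    "dollar": [("qua", "100")],
}

def linearize(node, out):
    for _, child in GRAPH.get(node, []):   # children assumed pre-sorted by the matrix
        linearize(child, out)
    out.append(node)                        # parent comes after its children
    return out

print(linearize("earn", []))  # ['she', '100', 'dollar', 'earn']
```

The resulting order, she → 100 → dollar → earn, is exactly the node sequence Node2 Node4 Node3 Node1 derived step by step below.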
This node 'She' will be processed, and its Bangla attributes will appear in the final string of the produced output; i.e., the produced output will be 'সে' (She). Now the parent node 'earn' of the child 'She' becomes the active node. The 'earn' node has one unprocessed child, i.e., the node 'dollar'. Thus, the node 'dollar' will be traversed next and marked as traversed. The node 'dollar' has one unprocessed child, i.e., '100', which will be traversed next and marked as traversed. The node '100' has no child, so it will be processed, and its Bangla word specification will be appended to the final string; the string so far will be সে ১০০.

Now the node 'earn' becomes the active node. It is the main predicate of the sentence, i.e., the entry node, and it has no unprocessed child. Hence, it will be processed, and its Bangla word appears in the final sentence; i.e., the final sentence will be 'সে ১০০ ডলার আয় করেছে।'. Since the main predicate has been processed, the produced output is available in the string, based on the syntactic linearization shown in the node sequence below.

Node sequence: Node2
Node4 Node3 Node1 (11)

Bangla sentence: সে ১০০ ডলার আয় করেছে (12)
Pronounced as: 'Se 100 dollar aye korechhe.'
Equivalent English sentence: 'She has earned 100 dollars.'

6.2. Syntactic Linearization of a Compound Sentence

A compound sentence is represented in a UNL expression as a compound concept. A compound concept, or compound universal word, is defined using a scope-node. A scope-node is a set of UNL binary relations combined jointly to denote a complex concept. This complex concept is named by an identifier, the UW-ID, which appears after the relation label. In a UNL expression, a compound UW is referred to by its UW-ID. The syntactic linearization of compound sentences is a bit different from that of simple sentences. Consider a sentence, say, 'I will go to university after 40 minutes'. The UNL expression of this sentence is given in (13).

I will go to university after 40 min
{/org}
{unl}
agt(go.@entry.@future,i(icl>person))
plt(go.@entry.@future,university(icl>body))
tim(go.@entry.@future,:01)
man:01(after.@entry,minutes)
qua:01(minute.@pl,40)
{/unl}
[/S]
(13)

The UNL graph of the UNL expression (13) is illustrated in Figure 8. In the UNL graph, 'go' is the main predicate, the entry node. It has three children: 'i(icl>person)', with the 'agt' relation between 'go' and 'i'; 'university(icl>body)', with the 'plt' relation between 'go' and 'university'; and the scope node ':01', with the 'tim' relation between 'go' and ':01'. The UNL graph in Figure 8 shows that the relation 'agt' has the highest precedence, followed by the relation 'tim' and then by the relation 'plt', as given in the precedence matrix in Figure 9.

Figure 8. UNL graph for the UNL expression of the sentence with a scope node.

      agt  plt  tim
agt   -    L    L
plt   -    -    R
tim   R    L    -

Figure 9. Precedence matrix for the UNL relations 'agt', 'plt', and 'tim'.

In the UNL expression given in (13), when we consider the scope node :01 as a single node, the syntactic linearization of the UNL graph over the UWs will be

I :01 university go (14)

This scope node is then replaced, in the generated output of syntactic linearization, by the linearization of the scope node's own UNL sub-graph.
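Scope handling can thus be viewed as a splice: the scope node is linearized on its own, and the result is substituted into the outer sequence at the position the scope held. A hedged Python sketch using the orders worked out for this example (the list representation and names are ours):

```python
# Sketch of scope-node substitution: the outer linearization keeps ':01' as a
# placeholder, which is replaced by the scope's own linearized word sequence.
outer = ["I", ":01", "university", "go"]     # outer linearization of (13)
scopes = {":01": ["40", "min", "after"]}     # linearization of the scope's sub-graph

flat = []
for w in outer:
    flat.extend(scopes.get(w, [w]))          # splice the scope, keep other words
print(" ".join(flat))  # I 40 min after university go
```

The spliced sequence is the final UW-level linearization of the whole compound sentence.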
So, the UNL graph of the scope node for UWs of syntactic linearization isafter 40 min (15)The output of the UNL expression (Figure 8) is produced by replacing the scope-node in (1)combining with UWs given in (2). So, the final syntactic linearization of UWs without restrictionsof the UNL graph illustrated in Figure 8 will be produced as follows.I 40 min after university go (16)A Bangla sentence produced by the Bangla DeConverter after morphology, case maker insertion,and syntactic linearization phases is given as follows.আিম ৪০ িমিনট পর ইউিনভািস ট যােবা (17)Pronounced as: Ami 40 min por bisshabiddaloy jabo.7. Experimental Results and DiscussionsThis section shows the experimental results by extracting a Bangla sentence using generationrules. It also presents the evaluation of the proposed system and discussions of results.7.1. Extraction of a Bangla SentenceThis section explains the steps of transformation and the experimental outcomes of extracting aBangla sentence from UNL phrases. The UNL expression of the sentence, I will go to office after eatingrice is shown in (18) using Russian English Language Server. In this expression, agt(agent), plt(placeto), tim (time), and obj (object) are the semantic relations. The relaters go(icl>move>do), i(icl>person),office(icl>body>thing), after(icl>how,</s>
tim<uw,obj>uw), eat(icl>consume>do), and rice(icl>grain>thing) are the UWs [1]. The attribute @entry is typically connected to the primary predicate, and @future denotes a future-tense sentence. We use our proposed DeConverter tool for this experiment. The UNL expression (18), a dictionary file (Table 1), and a set of generation rules (Table 2) are the inputs to the tool for extracting a sentence from the given UNL expression.

agt(go(icl>move>do,plt>place).@entry.@future,i(icl>person))
plt(go(icl>move>do,plt>place).@entry.@future,office(icl>body>thing))
tim(go(icl>move>do,plt>place).@entry.@future,after(icl>how,tim<uw,obj>uw))
obj:01(eat(icl>consume>do).@entry,rice(icl>grain>thing))
obj(after(icl>how,tim<uw,obj>uw),:01)
(18)

Table 1. Dictionary entries for the respective Bangla sentence.

[আমি] { } "i(icl>person)" (PRO,HPRO,SUB,P1,SG)
[ভাত] { } "rice(icl>grain>thing)" (N,OBJ,NCOM,CEN)
[খ] { } "eat(icl>consume>do,agt>living_thing,obj>concrete_thing)" (ROOT,VEN,AGT,OBJ,VEG1)
[েয়] { } "VI" (VI,VEN,P1,PRE)
[ইউনিভার্সিটি] { } "office(icl>body>thing)" (N,
[যা] { } "go(icl>move>do,plt>place,plf>place,agt>thing)" (VR,VEN,AGT,PLT,PLF,P1)
[েবা] { } "VI" (VI,VEN,P1,FUT)

Table 2.
Generation rules for extracting the Bangla sentence from the UNL expression.

Rule 1 (pronoun insertion): "HPRO,P1,SUB,^@pl,::agt:"{ROOT,VEN,^AL,#AGT,^p1:p1::}
Rule 2: R{:::}{SUB:::}
Rule 3: {SUB,^blk:blk::}"[],BLK:::"}
Rule 4: -R{:::}{SUB:::}
Rule 5: -R{SUB:::}{:::}
Rule 6: "N,^OBJ:OBJ::"{ROOT,VEN,#OBJ::}
Rule 7: R{PRO:::}{N:::}
Rule 8: R{N:::}{ROOT:::}
Rule 9: {ROOT,VEN,p1,@present,^@progress,^@complete,^vi:vi::}"[[VI]],VI,VEN,P1,PRE,^PRG,^CMPL:::"}
Rule 10: R{PRO:::}{N:::}
Rule 11: "N,^OBJ:OBJ::"{ROOT,VEN,#OBJ::}P7;
Rule 12: {SUB,^blk:blk::}"[],BLK:::"}
Rule 13: R{N:::}{:::}
Rule 14: {N:::}{ROOT,VEN,#OBJ::}
Rule 15: R{ROOT:::}{:::}
Rule 16: {ROOT,VEN,p1,@future,^@progress,^@complete,^vi:vi::}"[[VI]],VI,VEN,P1,FUT,^PRG,^CMPL:::"}
Rule 17: R{:::}{V:::}

Rule 1 describes that when the root "খ" is in the LCW (left condition window), the pronoun "আমি" is to be inserted in the RGW (right generation window). Rule 2 is used to move the DeConverter windows to the right. The blank-insertion rule (rule 3) is applied to insert a blank space between the pronoun and the root. After applying the right-shift rules (rules 4 and 5), rule 6 is used to insert the noun "ভাত" in the RGW. Rules 7 and 8 are then applied to move the windows two steps to the right. Now the root "খ" is in the LGW. The verbal-inflexion insertion rule (rule 9) is applied to insert "েয়" in the RGW. The root "খ" and the verbal inflexion "েয়" are then combined to make the verb "খেয়ে", which will be in the RGW. The noun-insertion rule (rule 11) is applied to insert the noun "ইউনিভার্সিটি", followed by the blank-insertion rule (rule 12) to insert a blank between the verb "খেয়ে" and the noun "ইউনিভার্সিটি", after applying the right-shift rule (rule 10). Again, a right shift (rule 13) is applied, followed by the root-insertion rule (rule 14). Finally, a right shift (rule 15) is applied, followed by a verbal-inflexion insertion rule (rule 16). The right shift (rule 17) completes the text-extraction process of the DeConverter.
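The rule walk above is essentially a sequence of insertions and root-inflexion merges applied to a growing word list. A very reduced Python sketch of that idea, using romanized words and our own list-based stand-in for the DeConverter's condition/generation windows (none of this is DeConverter rule syntax; the naive concatenation 'khaye' stands in for the inflected form 'kheye'):

```python
# Reduced sketch of insertion-style generation: each step either inserts a
# word at a position or merges an inflexion onto a root, mirroring the
# pronoun/noun insertion and verbal-inflexion rules described above.
sentence = ["kha"]                      # the root verb (eat) seeds the string

def apply(rule, words):
    kind, arg = rule
    if kind == "insert":                # pronoun/noun/root insertion rules
        word, pos = arg
        words.insert(pos, word)
    elif kind == "suffix":              # verbal-inflexion rules merge with a root
        root, suffix = arg
        i = words.index(root)
        words[i] = root + suffix
    return words

steps = [
    ("insert", ("ami", 0)),             # subject pronoun (cf. rule 1)
    ("insert", ("bhat", 1)),            # object noun (cf. rule 6)
    ("suffix", ("kha", "ye")),          # inflexion on 'eat' (cf. rule 9)
    ("insert", ("university", 3)),      # second noun (cf. rule 11)
    ("insert", ("ja", 4)),              # root of 'go' (cf. rule 14)
    ("suffix", ("ja", "bo")),           # future inflexion (cf. rule 16)
]
for step in steps:
    sentence = apply(step, sentence)
print(" ".join(sentence))  # ami bhat khaye university jabo
```

The sketch collapses the window-movement rules (2, 4, 5, 7, 8, 10, 13, 15, 17) into implicit position arguments; in the real DeConverter those shifts determine where the condition and generation windows sit before each insertion fires.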
Aftercompleting the extraction procedures, DeCo generates the following Bangla sentence,আিম ভাত খেয়ইউিনভািস ট যােবা|We have experimented for various types of Bangla simple and complex texts (sentences) withdiverse subjects, persons, and tenses. Our result shows that Bangla native language texts are extractedacceptably by the proposed Bangla DeCo.7.2. Results and DiscussionsOur projected system was tested by converting a set of 300 Bangla sentences into theircorresponding set of UNL expressions using a Russian–English Language server [18]. The serverincludes English sentences along with their corresponding UNL expressions. For comparison withthe output generated by our Bangla DeConverter, these English phrases were manually convertedinto equivalent Bangla phrases. The UNL expressions were used as input to the Bangla DeConverterfor producing corresponding Bangla sentences. The output of the Bangla DeConverter was comparedwith the corresponding manually translated Bangla sentences from English sentences. The projectedsystemhas been applied to subjective</s>