Here, an image fed into the input layer (Fig. 1) is passed through the hidden layers, where the features are extracted. Finally, the output layer predicts the class with the highest probability [20].

III. Dataset
We collected more than 4,300 images of Bengali people (including Indian Bengalis) of different ages from Google, while making sure the pictures were of good quality. We also ensured that the images were not ethnically mixed, since such noise can be disastrous for any CNN model. After collecting the images, the most tedious part was labeling them, i.e., putting images of the same type into the same folder so that we could apply a CNN. Irrelevant photos also turn up from time to time, and we removed all of these to make the rest of the work smoother.

IV. Preprocessing
We have the data, but we cannot feed these raw images directly into our CNN. First, we need to resize the images to a common size, and then we may want to convert them to grayscale. The string labels 'Bengali Male' and 'Bengali Female' are not directly usable either, so we convert them into one-hot arrays. Fig. 2 shows a random sample from the Bengali Female folder after conversion to grayscale.

Fig. 2: In Grayscale View (Clear Image)

To choose a scale, we had to select a size at which the same image (an example is given in Fig. 3) could still be recognized as a Bengali female (or male). We fixed the image shape to 99, meaning the images are 99x99 during training. At this size the image is not as clear as before, but a human can still easily recognize it.

Fig. 3: After Resizing (Hazy Image)

At the very beginning, we wanted to see how many dense layers, how many nodes, and how many convolutional layers would work well for our proposed model.
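These preprocessing steps can be sketched in a few lines. The helper names (one_hot, resize_nearest) are our own for illustration; the paper does not name its routines and likely used library utilities:

```python
import numpy as np

def one_hot(labels, classes=("Bengali Female", "Bengali Male")):
    """Convert string labels into one-hot arrays."""
    index = {name: i for i, name in enumerate(classes)}
    out = np.zeros((len(labels), len(classes)), dtype=np.float32)
    for row, label in enumerate(labels):
        out[row, index[label]] = 1.0
    return out

def resize_nearest(img, size=99):
    """Nearest-neighbour resize of a 2-D grayscale array to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]
```

Nearest-neighbour resizing is the simplest choice; an interpolating resize from an image library would give the slightly smoother 99x99 images described above.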
So, primarily we tried dense layers = { 0, 1, 2 }, layer sizes = { 32, 64, 128 } and convolutional layers = { 1, 2, 3 }; that is, every possible combination of these values was run to see how well this first model would work and how far we were from our goal. We used ReLU (Rectified Linear Unit) as the activation function in this primary model and sigmoid in the last layer, and we kept 20% of the data for validation. We used the Adam optimizer and binary cross-entropy as the loss function, and the two 2D max-pooling layers were 2x2 in size. After running for 20 epochs with a batch size of 25, we got around 66% accuracy. That is not bad at all, since we did not use any dropout layers or other advanced techniques. The model was run for all 27 (3x3x3 = 27) combinations, each for 20 epochs. As a result, it took quite a long time for
training, but in consequence we got a clear view of our model, and we understood that it was going to perform better with more nodes, since the 128x3x1 configuration gave that accuracy. The problem, however, was the loss: our validation loss kept increasing. We saved the training history and used TensorBoard to visualize the whole process.

8th International Conference on System Modeling & Advancement in Research Trends, 22nd–23rd November, 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India. Copyright © IEEE–2019. ISBN: 978-1-7281-3245-7

The epoch validation accuracy is given in Fig. 4 for all 27 combinations run so far.

Fig. 4: Epoch Validation Accuracy

In the second TensorBoard figure (Fig. 5), the epoch validation loss is shown. It is clear that the model is not working well, since this is a clear case of overfitting (the curves go upward). We will now try to reduce this overfitting gradually.

Fig. 5: Epoch Validation Loss

Nevertheless, we have established the base of our primary CNN model.

V. Transfer Learning
Getting good accuracy when training an image classifier is not a simple task, because it needs a lot of training data. A dataset collected by hand is often not enough for deep learning, since we may need millions of images to build a powerful network. Collecting that many is generally troublesome, and the solution is transfer learning: we can use a pre-trained model that was trained on a large dataset. The idea is that the trained model has already learned many features from a huge dataset, and these features can serve as a base for learning new classification problems. We applied models such as ResNet50 and MobileNet.

B. 
Resnet50
ResNet50 is a network trained on millions of images from the ImageNet database. It has around 50 layers and classifies images into more than 1,000 categories. To use the ResNet50 model, we loaded the ImageNet weights and excluded the top layer, meaning we do not keep the fully connected (FC) layers at the end of the model. For this, we resized our images to 200x200 when ResNet50 was the base model, and we used 3 channels rather than grayscale. After running for 20 epochs we achieved a fine accuracy of 85%, with a loss of only 0.33.

C. Mobilenet
MobileNet is an architecture well suited to many computer vision applications, especially when there is not enough computational power. The MobileNet architecture was proposed by Google and is now widely used in computer vision. To use MobileNet, our base model was MobileNet, again with ImageNet weights and without the top layer included, which means discarding the last 1,000
layers. Since we have seen that our model performs better with more nodes, this time we used a dense layer of 1,024 nodes with ReLU activation. We added dense layers so that the model can learn more complex functions: a second dense layer of the same size, a third dense layer of 512 nodes, and a final dense layer of 2 nodes with softmax activation. We used the ImageDataGenerator class and included all dependencies. We fixed the image size to 224x224, used 'RGB' (red, green, and blue) as the color mode, set the class mode to categorical, and shuffled the dataset. Apart from this, we used the Adam optimizer, categorical cross-entropy as the loss function, and accuracy as the evaluation metric. We got around 96% accuracy with this model, which was expected.

D. MobileNetV2
Since MobileNet gave us high accuracy, we considered it worthwhile to try MobileNetV2 (version 2 of MobileNet) to maximize accuracy and minimize loss. We resized the images to 160x160 and set the batch size to 32. For data augmentation we rescaled images by 1/255; the class mode was binary this time, and we again used 3 channels, with MobileNetV2 as the pre-trained base model. We again used ImageNet weights, added a learning rate of 0.0001, used accuracy as the metric, and binary cross-entropy as the loss. After running for 10 epochs we got a validation accuracy of 85% and a validation loss of 34%.

Bengali Ethnicity Recognition and Gender Classification Using CNN & Transfer Learning. Copyright © IEEE–2019. ISBN: 978-1-7281-3245-7
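The MobileNet-based classifier described in C above can be sketched with tf.keras as follows. This is a sketch, not the paper's code: the paper loads ImageNet weights, while weights=None is used here only so the model builds without downloading them, and the average-pooling choice on the base is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

# Base model without the top (1,000-class) layers; the paper uses
# weights="imagenet", replaced by None here to avoid the download.
base = MobileNet(weights=None, include_top=False,
                 input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # keep the pre-trained features frozen

model = models.Sequential([
    base,
    layers.Dense(1024, activation="relu"),
    layers.Dense(1024, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(2, activation="softmax"),  # Bengali female / Bengali male
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

In practice such a model is fed by ImageDataGenerator with flow_from_directory, as described in the text.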
We saved the history and plotted it as the training and validation accuracy figure and the training and validation loss figure (Fig. 6). From Fig. 6 it is evident that the loss decreases gradually, which was expected, since the model is overcoming the overfitting problem.

Fig. 6: MobileNetV2

From this stage we went further and tried fine-tuning. We set the fine-tune point to 100, which means fine-tuning applies from this layer onward. We therefore froze all the previous layers, setting their trainable attribute to false so that they are not updated during training. Then, using Keras, we set RMSprop (a gradient-based optimization technique, learning rate = 2e-5) as the optimizer, binary cross-entropy as the loss, and accuracy as the metric. After running for a further 10 epochs on top of the last weights, we got better accuracy. The training loss clearly keeps decreasing, which was our concern and target. So far we have applied three pre-trained transfer learning models; their summary is given in Table 1 (Models Summary).
Model         Epochs   Image Shape   Validation Accuracy
ResNet50      20       200x200       85%
MobileNet     15       224x224       96%
MobileNetV2   10       160x160       85%

E. Bottleneck Features in Keras and Tensorflow (for VGG16)
This process has been greatly simplified because pre-trained models such as VGG16, together with their pre-trained weights, are already available; Keras ships them in the Keras applications package. The core idea of transfer learning is to use the pre-trained model with its weights while removing the final FC layers from that model. The remaining part of the model is then used as a feature extractor for our own dataset, and these newly extracted features are often called bottleneck features. The Keras blog has a well-guided document on image classification using this method. We are now going to build a CNN model to classify the two categories of Bengali people, Bengali female and Bengali male. Our dataset is ready for this, and we shall use ImageDataGenerator along with the flow_from_directory() functionality of Keras. We have already created the folders Bengali female and Bengali male, with the respective photos inside. The procedure is:
• First, save the bottleneck features of the VGG16 model.
• Second, train our network using the pre-saved bottleneck features so that we can classify images (this network is also known as the top model).
• Finally, use both models (VGG16 and the top model) to make predictions.

1). Proposed CNN Model
Before implementing the bottleneck-features operation, we need the features themselves. So we are going to build our proposed CNN model first, and then use the features collected from it as bottleneck features. This time we shall also apply data augmentation. Let's go through the definition first.

F. 
Adding Data Augmentation
In brief, data augmentation is an efficient process that significantly increases the diversity of a dataset without collecting more data. Many techniques are available for this, such as cropping, flipping, padding, scaling, shearing, and zooming, and they are widely used with neural networks. We also used dropout in this CNN model, because dropout and data augmentation both reduce overfitting. We shaped the images to 150x150 and used 3 Conv2D (3x3) layers (32, 32, and 64 nodes respectively) in our sequential model. We also added 2 dense layers (of sizes 64 and 1), 3 max-pooling layers of size 2x2, 1 flatten layer, ReLU as the activation function, and a dropout rate of 0.5, with sigmoid as the final activation. Binary cross-entropy was the loss, RMSprop the optimizer, and accuracy the metric as usual. This time we added data augmentation with shear_range 0.2, zoom_range 0.2, and horizontal flip set to true. We got around 83% accuracy (epochs = 50) for this proposed model. From Fig. 7, the loss and accuracy look absolutely
good (no overfitting).

Fig. 7: Applying CNN

Now we implement the bottleneck-features technique, since we already have the result of our proposed CNN model and the saved bottleneck features, which we can use for further processing. The features were saved in a .npy file, so we can now load them for use. We resized the images to 224x224 with a batch size of 16, used a dropout of 0.5 with sigmoid as the activation of the last layer, 2 dense layers of 256 nodes, and 2 classes for our dataset. One flatten layer was added to our sequential model, RMSprop was the optimizer, and the loss was categorical cross-entropy with the accuracy metric. After running this model for 50 epochs we got around 85% accuracy. Fig. 8 shows the accuracy of our model, which looks good, but the loss (Fig. 9) appears to overfit after 20 epochs. Nevertheless, we have implemented our proposed bottleneck-features model for classifying images, and it gives a decent validation accuracy.

Fig. 8: Accuracy of Bottleneck Features
Fig. 9: Loss of Bottleneck Features

2). Keras Fine-tuning with VGG16
(i) Freeze all layers: After loading the VGG16 model with ImageNet weights, we froze all the trainable layers. Then we loaded this convolutional base into our sequential model.
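The bottleneck-features top model described above can be sketched as follows. This is a sketch under stated assumptions: in the real pipeline the features are loaded from the saved .npy file, whereas here a random array with VGG16's 7x7x512 bottleneck shape (for 224x224 inputs) stands in so the snippet is self-contained:

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for the bottleneck features loaded from the saved .npy file.
train_features = np.random.rand(16, 7, 7, 512).astype("float32")

model = models.Sequential([
    layers.Input(shape=train_features.shape[1:]),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="sigmoid"),  # paper: sigmoid on the last layer
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```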
Next, we added 1 flatten layer, 1 dense layer of 1,024 nodes with ReLU activation, then a dropout layer of rate 0.5 and a final dense layer of size 2. First we tried it without data augmentation, with categorical cross-entropy as the loss and accuracy as the metric; the training batch size was 100 and the validation batch size was 10. After running for 20 epochs we got an accuracy of 86%. Fig. 10 and Fig. 11 show the validation accuracy and loss, and from Fig. 11 it can be assumed that the model is not overfitting.

Fig. 10: Accuracy After Freezing All Layers
Fig. 11: Loss After Freezing All Layers

Now let us look at some prediction errors of our model. For each picture, the original label and the prediction are displayed, with the confidence at the top. We also report the total number of wrongly predicted images on the test dataset to give an overall picture; in the rest of this paper we will try to reduce this problem and the number of wrong predictions using various techniques.

Fig. 12: Sample of Wrong Prediction (Female)
Here (Fig. 12), the Bengali female was declared a Bengali male with a confidence of 63%. This could be an effect of her costume. The total number of errors = 115 images (out of 835).

(ii) Last 4 layers without Data Augmentation: Keeping the model otherwise in the state described in (i), we made only the last 4 layers trainable, since this might help reduce the large number of wrongly predicted images. We got 86% accuracy. Fig. 13 and Fig. 14 show the accuracy and loss respectively.

Fig. 13: Accuracy for Last 4 Trainable Layers (without data augmentation)
Fig. 14: Loss for Last 4 Trainable Layers (without data augmentation)

From Fig. 14, the model seems to be overfitting, since after 18 epochs the validation loss jumps suddenly.

(iii) Training last 7 layers without Data Augmentation: After making the last 7 layers trainable, still without data augmentation, the accuracy increased for the same model (Fig. 15) and the loss decreased.

Fig. 15: Accuracy for Last 7 Trainable Layers (without data augmentation)

A sample wrong prediction:

Fig. 16: Sample of Wrong Prediction (Male)

Here, the Bengali male was declared a Bengali female (confidence: 71%). This could be an effect of his long hair. Total wrong predictions (reduced): 100 images (out of 835).

(iv) Last 4 layers with Data Augmentation: Now let us observe how this model behaves with data augmentation. Applying data augmentation with the last 4 layers trainable, we got around 88% accuracy. Accuracy and loss are shown in Fig. 17 and Fig. 18 respectively.

Fig. 17: Accuracy for Last 4 Trainable Layers (with data augmentation)
Fig. 18: Loss for Last 4 Trainable Layers (with data augmentation)

The accuracy looks good (Fig. 17), but Fig. 18 makes it evident that the model overfits more strongly after 12 epochs. An example of such a wrong prediction is shown in Fig.
19: Sample of Wrong Prediction (Female)

Here, the Bengali female was declared a Bengali male with a confidence of 100%. This could be an effect of her different gestures. In this case, the total number of wrong predictions (reduced): 95 images (out of 835).

(v) Last 7 layers with Data Augmentation: Similarly, for the same model, after running for 40 epochs with the last 7 layers trainable and data augmentation, we got an accuracy of 88%-89%. From the validation accuracy figure (Fig. 20), it is clear that the proposed model now performs better.

Fig. 20: Accuracy for Last 7 Trainable Layers (with data augmentation)

The validation accuracy at this stage looks excellent compared with all four previous Keras fine-tuning configurations. To recapitulate the Keras fine-tuning with VGG16, a summary is given in Table 2 (Keras Fine-tuning Summary).
Layers                      Data Augmentation?   Accuracy
All layers (frozen)         No                   86%
Last 4 layers (trainable)   No                   86%
Last 7 layers (trainable)   No                   85%
Last 4 layers (trainable)   Yes                  88%
Last 7 layers (trainable)   Yes                  89%

However, we have successfully reduced the total number of wrongly predicted images to some extent.

VI. Conclusions
In conclusion, we can safely say that getting high accuracy with a CNN is arduous without transfer learning; transfer learning has given computer vision a new dimension. That is why in this paper we built a convolutional neural network based model followed by several transfer learning approaches. The CNN model gave us around 83% accuracy on new test images, and we got excellent accuracy with the different transfer learning models. In the future, we shall make improvements to our CNN and bottleneck-features architectures.

Acknowledgment
We would like to show our gratitude to all the instructors of the DIU NLP & Machine Learning Research Lab, Daffodil International University, for their proper guidance and the pearls of wisdom they shared.

References
[1] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998).
[2] Osadchy, M., Cun, Y., Miller, M.: Synergistic face detection and pose estimation with energy-based models. The Journal of Machine Learning Research 8, 1197–1215 (2007).
[3] Ji, S., Xu, W., Yang, M., Yu, K.: 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1), 221–231 (2013).
[4] Srinivas, N., Atwal, H., Rose, D.C., Mahalingam, G., Ricanek Jr., K., Bolme, D.S.: Age, gender, and fine-grained ethnicity prediction using convolutional neural networks for the East Asian face dataset. 2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition, pp. 953–960.
[5] X. Wang, R. Guo, and C. Kambhamettu. Deeply-learned feature for age estimation.
In 2015 IEEE Winter Conference on Applications of Computer Vision, pages 534–541, Jan 2015.
[6] X. Yang, B.-B. Gao, C. Xing, Z.-W. Huo, X.-S. Wei, Y. Zhou, J. Wu, and X. Geng. Deep label distribution learning for apparent age estimation. In The IEEE International Conference on Computer Vision (ICCV) Workshops, December 2015.
[7] D. Yi, Z. Lei, and S. Z. Li. Age estimation by multi-scale convolutional network. In Asian Conference on Computer Vision, pages 144–158. Springer, 2014.
[8] Y. Zhu, Y. Li, G. Mu, and G. Guo. A study on apparent age estimation. In The IEEE International Conference on Computer Vision (ICCV) Workshops, December 2015.
[9] G. Levi, T. Hassner, Age and gender classification using convolutional neural networks, in: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2015, pp. 34–42.
[10] B.A. Golomb, D.T. Lawrence, T.J. Sejnowski, Sexnet: a neural network identifies sex from human faces, in: Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems, 3, 1990, pp. 572–577.
[11] B. Moghaddam, M.-H. Yang, Learning gender with support faces, IEEE Trans. Pattern Anal. Mach. Intell. 24 (5) (2002) 707–711, doi: 10.1109/34.1000244.
[12] S. Baluja, H.A. Rowley, Boosting sex-identification performance, Int. J. Comput. Vis. 71 (1) (2006) 111–119, doi: 10.1007/s11263-006-8910-9.
[13] M. Toews, T. Arbel, Detection, localization and sex classification of faces from
arbitrary viewpoints and under occlusion, IEEE Trans. Pattern Anal. Mach. Intell. 31 (9) (2009) 1567–1581, doi: 10.1109/TPAMI.2008.233.
[14] M. Wang, Y. Iwai, and M. Yachida. Expression recognition from time-sequential facial images by use of expression change model. Proc. 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pages 324–329, 1998.
[15] J. F. Cohn, Z. Ambadar, and P. Ekman. Observer-based measurement of facial expression with the facial action coding system. The Handbook of Emotion Elicitation and Assessment, pages 203–221, 2007.
[16] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE 86 (11) (1998) 2278–2324.
[17] F. Jialue, X. Wei, W. Ying, G. Yihong, Human tracking using convolutional neural networks, IEEE Trans. Neural Netw. 21 (10) (2010) 1610–1623.
[18] Y. Cao, Y. Chen, D. Khosla, Spiking deep convolutional neural networks for energy-efficient object recognition, Int. J. Comput. Vis. 113 (1) (2015) 54–66.
[19] Choon-Boon Ng, Yong-Haur Tay and Bok-Min Goi, A convolutional neural network for pedestrian gender recognition, in: C. Guo, Z.-G. Hou, and Z. Zeng (Eds.): ISNN 2013, Part I, LNCS 7951, pp. 558–564, 2013.
[20] Dongmei Han, Qigang Liu, Weiguo Fan, A new image classification method using CNN transfer learning and web data augmentation, Expert Systems With Applications (2017), doi: 10.1016/j.eswa.2017.11.028.
Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network

Md. Rezaul Karim
Fraunhofer FIT, Aachen, Germany
RWTH Aachen University, Germany

Bharathi Raja Chakravarthi
Insight SFI Research Centre for Data Analytics, Data Science Institute, National University of Ireland Galway, Ireland

John P. McCrae
Insight SFI Research Centre for Data Analytics, Data Science Institute, National University of Ireland Galway, Ireland

Michael Cochez
Department of Computer Science, Vrije Universiteit Amsterdam, Netherlands

ABSTRACT
The exponential growth of social media and micro-blogging sites not only provides platforms for empowering freedom of expression and individual voices, but also enables people to express anti-social behaviour like online harassment, cyberbullying, and hate speech. Numerous works have been proposed to utilize these data for social and anti-social behaviour analysis, document characterization, and sentiment analysis by predicting the contexts, mostly for highly resourced languages such as English. However, some languages are under-resourced, e.g., South Asian languages like Bengali, Tamil, Assamese, and Telugu, which lack computational resources for natural language processing (NLP)1 tasks. In this paper2, we provide several classification benchmarks for Bengali, an under-resourced language. We prepared three datasets of expressed hate, commonly used topics, and opinions for hate speech detection, document classification, and sentiment analysis, respectively. We built the largest Bengali word embedding models to date based on 250 million articles, which we call BengFastText. We perform three different experiments, covering document classification, sentiment analysis, and hate speech detection.
We incorporate word embeddings into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech, document classification, and sentiment analysis. Experiments demonstrate that BengFastText can correctly capture the semantics of words from their respective contexts. Evaluations against several baseline embedding models, e.g., Word2Vec and GloVe, yield up to 92.30%, 82.25%, and 90.45% F1-scores for document classification, sentiment analysis, and hate speech detection, respectively, in 5-fold cross-validation tests.

KEYWORDS
Under-resourced language, NLP, Deep learning, Sentiment analysis, Hate speech detection, Word embedding, Word2Vec, FastText.

1 List of abbreviations can be found at the end of this paper.
2 This paper is under review in the Journal of Natural Language Engineering.

1 INTRODUCTION
In recent years, micro-blogging platforms and social networking sites have grown exponentially, enabling their users to voice their opinions [25]. At the same time, they have also enabled anti-social behavior [14], online harassment, cyberbullying, false political and religious rumors, and hate speech activities [27, 37]. The anonymity and mobility offered by such media have eased the breeding and spread of hate speech [47], eventually leading to hate crime in many aspects, including religious, political, geopolitical, personal, and gender abuse. This applies to every human being regardless of language, geographic location, or ethnicity. While the Web began as an overwhelmingly English phenomenon, it now contains text in hundreds of languages. In the real world, languages are becoming extinct at an alarming rate, and the world could lose more than half of its linguistic diversity by the year 2100. Certain languages are low-resource because they are genuinely low-density languages in terms of the number of people speaking them in the world. Languages are dynamic, spoken by communities whose lives are shaped by the rapidly changing world. The number of
The number ofliving languages spoken in the world is about 7,099, which is con-stantly in flux. About a third of these languages are now endangered,often with less than 1,000</s>
speakers remaining. Meanwhile, only 23 languages account for more than half the world's population. Bengali, a rich language with much diversity, is spoken in Bangladesh; it is the second most commonly spoken language in India and the seventh most commonly spoken language in the world, with nearly 230 million speakers (200 million native speakers) [19]. Sentiment analysis makes use of the polarity expressed in texts, where lexical resources are often used to look up specific words whose presence can serve as predictive features. Linguistic features, on the other hand, utilize syntactic information like part of speech (PoS) tags and certain dependency relations as features. Content-based classification, which is the process of grouping documents into different classes or categories, is emerging due to the continuous growth of digital data. Apart from these, sentiment analysis in Bengali is progressively being considered a non-trivial task, for which previous approaches have attempted to detect the overall polarity of Bengali texts. Hate speech, which is defined by the Encyclopedia of the American Constitution [32] as "any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics", is also rampant online, e.g., in social media, newspapers, and books [38].
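As a toy illustration of the lexicon-lookup idea just mentioned (the word lists below are invented for the example, not taken from any of the paper's resources):

```python
# Lexicon-based polarity scoring: count positive and negative lexicon
# hits in a text. The tiny lexicons are invented for this example; real
# systems use curated resources.
POSITIVE = {"good", "great", "excellent", "happy"}
NEGATIVE = {"bad", "awful", "terrible", "sad"}

def polarity(text):
    """Return +1 (positive), -1 (negative) or 0 (neutral) by lexicon lookup."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)
```

Such lexicon hits are typically used as features alongside linguistic ones, rather than as a classifier on their own.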
As in other major languages such as English, the use of hate speech in Bengali is pervasive and can have serious consequences, because, according to a Special Rapporteur to the UN Human Rights Council, failure to monitor and react to such hate speech in a timely manner can reinforce the subordination of targeted minorities, making them not only vulnerable to attack but also influencing majority populations and making them more indifferent to various manifestations of such hatred [21].

Thus, identifying such hate speech and raising awareness in a virtual landscape beyond the realms of traditional law enforcement is a non-trivial research contribution. However, the vast amount of content published online does not allow for manual analysis. For years, tech giants like Twitter, Facebook, and YouTube have been combating the issue of hate speech, and it has been estimated that hundreds of millions of euros are invested every year on counter-measures, including manpower [21]. However, they are still criticized for not doing enough [33]. Efficient and automated techniques for such research on the under-resourced Bengali language are scarce, largely because such measures require significant manual effort. Reviewing online content to identify offensive material and taking action needs manual intervention, which is not only labor-intensive and time-consuming but also not scalable [21].

To this end, computational linguistics has long been dominated by techniques specifically geared towards English and other European languages, in which English language models are often used as baselines, despite English being an outlier in many linguistic aspects. Although it often goes unreported whether the results hold up in other languages, these techniques are often considered state-of-the-art for natural language understanding.
The field now has unprecedented access to a great number of low-resource languages, readily available to be studied, but it needs to act quickly before political, social, and economic pressures cause these languages to disappear from the real and/or virtual worlds. In the absence of these resources, the lack
of good performance across various tasks feeds into a systemic information asymmetry in the digital world. The quality of translation systems, the lack of social media corpora, and the general absence of language-agnostic tools will exacerbate the digital divide at the global level in the coming years, and this is a prominent barrier in the way of truly democratizing technology access in the world. Accurate identification of such hate speech, or sentiment analysis, requires not only automated, scalable, and accurate methods but also computational resources such as state-of-the-art machine learning (ML) approaches, word embedding models, and rich labelled datasets, which in turn need natural language processing (NLP) resources for the language in question; providing these is the core contribution of this work. This is necessary because the NLP resources for Bengali are still scarce [5, 9]. Recent research efforts from both the NLP and ML communities have proven very effective for well-resourced languages like English. Computational linguistics today is largely dominated by supervised machine learning, which requires annotating data; for low-resource languages this can be time-consuming, expensive, and dependent on experts. These research problems are identified as supervised learning tasks, which can be divided into two categories. First, there are classic methods, which rely on manual feature engineering, such as Support Vector Machines (SVM), Naïve Bayes (NB), KNN, Logistic Regression (LR), Decision Trees (DT), Random Forest (RF), and Gradient Boosted Trees (GBT). Second, there are approaches that employ deep neural networks (DNN) to learn multiple layers of abstract features from raw text; these are largely based on Convolutional (CNN) or Long Short-Term Memory (LSTM) networks. These approaches are mostly deep learning (DL) based, referring to the depth of the neural networks used.
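As a minimal illustration of the first, feature-engineering category, here is a bag-of-words multinomial Naïve Bayes classifier written from scratch (add-one smoothing, log-space scoring). It is a sketch of the general technique, not the paper's implementation:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-words features."""

    def fit(self, texts, labels):
        self.docs = Counter(labels)            # documents per class
        self.words = defaultdict(Counter)      # word counts per class
        for text, label in zip(texts, labels):
            self.words[label].update(text.lower().split())
        self.vocab = {w for c in self.words.values() for w in c}
        return self

    def predict(self, text):
        total = sum(self.docs.values())
        best, best_lp = None, float("-inf")
        for label in self.docs:
            lp = math.log(self.docs[label] / total)   # class prior
            n = sum(self.words[label].values())
            for w in text.lower().split():
                # add-one smoothed word likelihood
                lp += math.log((self.words[label][w] + 1) / (n + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

The DNN-based category replaces these hand-built count features with representations learned directly from raw text.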
Nevertheless, despite the growth of studies and existing ML approaches, under-resourced languages such as Bengali lack resources such as rich word embedding models and comparative evaluations on NLP tasks. In this work, we built a Bengali word embedding model called BengFastText and implemented a robust neural network architecture called the multichannel convolutional-LSTM network (MConv-LSTM), combining both CNN and LSTM networks with multiple channels in a single architecture to leverage their benefits. The other key contributions of this paper can be summarized as follows:
• We built the largest Bengali word embedding model, called BengFastText, based on 250 million Bengali articles in an under-resourced setting. BengFastText is a language model based on fastText [17], a computationally efficient predictive model for learning word embeddings from raw Bengali texts.
• We prepared three public datasets for hate speech, sentiment analysis, and document classification of Bengali texts, which are larger than any currently available datasets in both quantity and subject coverage.
• We trained several ML baseline models: SVM, KNN, LR, NB, DT, RF, and GBT. Further, we prepared embeddings based on Word2Vec and GloVe for a comparative analysis between the ML baselines and the MConv-LSTM model.
• Our approach shows how automatic feature selection using the MConv-LSTM model significantly reduces the overhead of manual feature engineering.
The rest of the paper is structured as follows: Section 2 reviews related work on hate speech and Bengali word embedding. Section 3 describes the data collection and annotation process. Section 4 describes the process of Bengali neural embedding, network construction, and training. Section 5 illustrates experiment results, including a comparative analysis
with baseline models on all datasets. Section 6 summarizes this research with potential limitations and points out some possible outlooks before concluding the paper.

2 RELATED WORK
Although linguistic analyses are carried out extensively and are well studied for other major languages, only a few approaches have been explored for Bengali [19], due to the lack of a systematic method of text collection, annotated corpora, name dictionaries, morphological analyzers, and overall research outlook. Existing work on Bengali NLP mostly focuses on either document categorization [20] using supervised learning techniques, applying N-gram techniques to categorize newspaper corpora [28], dataset preparation [36] for aspect-based sentiment analysis [41], or word embeddings for document classification [1]. Apart from these, some approaches focus on parts-of-speech (PoS) tagging techniques for parsing texts for rich semantic processing by combining Maximum Entropy, Conditional Random Field (CRF), and SVM classifiers using weighted voting techniques [13], or based on a bidirectional LSTM-CRF network [2].

Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network
Figure 1: Examples of Bengali hate speech, either directed towards a specific person or entity, or generalized towards a group

In these approaches, classic ML algorithms such as SVM, KNN, NB, RF, and GBT have been used. These approaches either lack comprehensive linguistic analysis or are carried out on small-scale labeled datasets with limited vocabulary, which diminishes the performance of supervised learning algorithms. In contrast, a growing body of work aims to understand and detect hate speech by creating representations for content on the web and then classifying these tweets or comments as hateful or not, drawing insights along the way [14].
Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media including Twitter, YouTube, Facebook, and Google, with their Perspective API [39]. An LR-based approach is proposed using one-to-four-character n-grams for classifying tweets labeled as racist, sexist, or neither [43], and for distinguishing hateful tweets from offensive-but-not-hateful tweets with L2 regularization [3]. For the latter use case, the authors applied word-level n-grams and various PoS, sentiment, and tweet-level metadata features. However, the accuracy of these solutions is not satisfactory [20]. Davidson et al. [11] used DNN approaches with two binary classifiers: one to predict the presence of abusive speech more generally, and another to discern the form of abusive speech, or building a classifier composed of two separate networks for hate speech detection [27]. Existing classifiers are predominantly supervised [44]: LR is the most popular, while other algorithms, e.g. SVM, NB, and RF, are also used. Despite the diverse types of features introduced, little is known about the multimodality of different types of features in such a single classifier. Most methods simply 'use them all' by concatenating all feature types into high-dimensional, sparse feature vectors that are prone to overfitting, especially on short texts, e.g. tweets. Some applied an automated statistical feature selection process to reduce and optimize the feature space, while others did this manually. Since the impact of feature selection is unknown, whether different types of features are contributing to better classification accuracy remains questionable. Hence, the classic methods investigated in the literature employ an abundance of task-specific features, and it is not clear what their individual contributions could be [15]. These approaches are difficult to compare with DL-based approaches; this is
partly because the efficiency of linear models at dealing with billions of such texts has come at the cost of accuracy and scalability, which is probably another primary reason. DL is applied to learn abstract feature representations used for text classification. The input can take various forms of feature encoding, including any of those used in the classic methods. However, the key difference is that the input features are not directly used for classification. Instead, multilayer structures learn abstract representations of the features, which is found to be more effective for learning, as typical manual feature engineering is then shifted to the network topology, which is carefully designed to automatically extract useful features from a simple input feature representation [15]. CNN, Gated Recurrent Unit (GRU) [47], and LSTM networks are the most popular architectures in the literature. In particular, CNN is an effective tool to act as a 'feature extractor', whereas LSTM is a type of powerful recurrent network for modeling ordered sequence learning problems. We also observe the use of LSTM or GRU with pre-trained Word2Vec, GloVe, and fastText embeddings fed into a CNN with max pooling to produce input vectors for a neural network [10] because, in the context of text classification, CNN extracts word or character combinations, e.g., phrases and n-grams, and LSTM learns long-range word or character dependencies in texts. In theory, the Conv-LSTM is a powerful architecture for capturing long-term dependencies between features extracted by CNN.
In practice, they are found to be more effective than structures based solely on CNN or LSTM in tasks such as drug-drug interaction prediction [24], network intrusion detection [26], gesture/activity recognition, where the networks learn the temporal evolution of different regions between frames, and Named Entity Recognition (NER) [45], where the class of a word sequence can depend on the class of its preceding word sequence.

Karim, Chakravarthi, McCrae, and Cochez

While each type of network has shown effectiveness for general-purpose text classification, few works have explored combining both structures into a single network [38], except that using this combination, especially when trained using transfer learning, achieved higher classification accuracy than either neural network classifier alone [21]. In contrast, our approach relies on a word embedding similar to fastText but with a strong focus on the under-resourced setting, reducing the number of required parameters and the length of training required, while still yielding improved performance and resilience across related classification tasks. We hypothesize that MConv-LSTM can be effective not only for hate speech detection but also for sentiment analysis and document classification, since this deep architecture can capture co-occurring word n-grams as useful patterns for classification [17, 22]. Moreover, we believe that our network will be able to learn flexible vector representations that demonstrate associations between words typically used not only for hate speech detection but also for document classification and sentiment analysis.

3 DATASETS
One major limitation of state-of-the-art approaches is the lack of comparative evaluation on publicly available datasets for the Bengali language. A large majority of the existing works are evaluated on privately collected datasets, often targeting different problems, and most of them contain only a small Bengali text corpus.
A careful analysis was carried out while differentiating linguistic features; e.g., abusive language can be different from hate speech but
is often used to express hate, as shown in the examples in fig. 1. To the best of our knowledge, only a few works have been proposed on Bengali word embeddings. The Polyglot project [4] is one of the first attempts, in which word embeddings are trained for more than 100 languages using their corresponding Wikipedia dumps; only a 55K corpus from the Bengali Wikipedia dump was used to train the model that generates the embeddings. The second approach [1] built an embedding model on a 200K Bengali corpus from 13 news articles. Third, a neural lemmatizer was proposed by [8] using Bengali word embeddings, built on an even smaller dataset of a 20K corpus. The fourth is fastText [17], which is trained on Common Crawl and Wikipedia using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5, and 10 negatives. fastText distributes pre-trained word vectors for 157 languages. To further explore and apply more extensive linguistic analysis on Bengali flexibly and efficiently, we not only collected the largest corpus of Bengali raw texts from Wikipedia but also prepared three more datasets: documents on contemporary Bengali topics, texts expressing hate speech (political, religious, personal, geopolitical, and gender abusive), and public sentiments (about cricket, products, movies, and politics).

3.1 Raw text corpus collection
For our corpus, Bengali articles were collected from numerous sources from Bangladesh and India, including a Bengali Wikipedia dump, Bengali news articles (Daily Prothom Alo, Daily Jugontor, Daily Nayadiganta, Anandabazar Patrika, Dainik Jugasankha, BBC, and Deutsche Welle), news dumps of TV channels (NTV, ETV Bangla, ZEE News), books, blogs, sports portals, and social media (Twitter, Facebook pages and groups, LinkedIn). We also categorized the raw articles for ease of preprocessing at a later stage. Facebook pages (e.g.
celebrities, athletes, sports, and politicians) and newspaper sources were scrutinized because together they have about 50 million followers, and many opinions, hate speech, and review texts originate or spread out from there. All text samples were collected from publicly available sources. Altogether, our raw text corpus consists of 250 million articles. Then two linguists and three native Bengali speakers annotated samples into separate datasets for hate speech detection, document classification, and sentiment analysis, described in the subsequent subsections³, to encourage future comparative evaluation. Figure 3 shows frequency word clouds of the potential creative hate expressed in our datasets. Since the collected Bengali text corpus is high-dimensional, sparse, and relatively noisy, before the word embedding step and manual labeling, emoticons, digits (Bengali and English), special characters, hashtags, and excess spaces (due to redundant use of spaces, tabs, and shifts) were removed to give annotators unbiased, text-only content on which to make a decision based on the criteria of the objective.

3.2 Data preprocessing
Since the obtained corpus is rather noisy, we perform automated cleaning on all datasets before giving them to the annotators. First, we remove HTML markup, links, and image titles. From the text, digits, special characters, hashtags, and excessive white space are removed. Then the following further steps are performed:
• PoS tagging: a BLSTM-CRF based approach suggested in the literature [2] was applied.
• Removal of proper nouns: various proper nouns and noun suffixes were identified and replaced with tags to provide ambiguity.
• Hashtag normalization: hashtags in tweets, comments, and posts were normalized using a
dictionary-based lookup⁴, which was used to split hashtags that are often composed of multiple words (e.g. fig. 2a).
• Stemming: inflected words are reduced to their stem, base, or root form in order to reduce dimensionality and obtain more standard Bengali tokens (e.g. fig. 2b) (English translation: 'Bangladeshe' to 'Bangladesh', 'Pakistaner' to 'Pakistan', 'from village to village').
• Stopword removal: commonly used Bengali stopwords (provided by Stopwords ISO⁵) are removed.
• Removing infrequent words: finally, we remove any tokens with a document frequency of less than 5.

³ The prepared dataset, embedding model, and codes will be made publicly available.
⁴ Based on https://www.101languages.net/bengali/bengali-word-list/
⁵ https://github.com/stopwords-iso/stopwords-bn

Figure 2: Example of hashtag normalization and stemming. (a) Hashtag normalization; (b) Stemming

Table 1: Statistics of the document classification dataset. For each category, we show the number of documents (#Doc), the number of words (#Words), the average number of sentences per document (ASD), and the average number of words per sentence (AWS).

Category        #Doc     #Words      ASD    AWS
State           242,860  57,019,465  18.50  13.356
Economy         18,982   4,915,141   20.18  13.378
International   32,203   7,096,111   18.47  12.493
Entertainment   31,293   6,706,563   21.70  10.236
Sports          50,888   12,397,415  22.80  11.069
Total           376,226  88,134,695  20.33  12.10

Table 2: Statistics of the hate speech detection dataset

Type of hate    #Statement  #Words   AWS
Political       7,182       132,867  18.50
Religious       6,975       140,756  20.18
Gender abusive  7,300       134,831  18.47
Geopolitical    6,793       117,180  17.25
Personal        6,750       146,475  21.70
Total           35,000      672,109  19.22

3.3 Dataset for document classification
Despite several comprehensive textual datasets being available for document categorization, only a few labeled datasets exist for Bengali articles.
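The automated cleaning pipeline of section 3.2 can be sketched as follows; this is an illustrative outline under assumed inputs (the regular expressions and the tiny inline stopword list are placeholders, not the authors' exact implementation):

```python
# Illustrative sketch of the automated cleaning described in section 3.2.
# The regexes and the inline stopword list are placeholders.
import re
from collections import Counter

STOPWORDS = {"and", "the", "of"}  # stand-in for the Stopwords ISO Bengali list

def clean(text):
    text = re.sub(r"<[^>]+>", " ", text)       # strip HTML markup
    text = re.sub(r"https?://\S+", " ", text)  # strip links
    text = re.sub(r"[#\d]", " ", text)         # strip hashtag marks and digits
    text = re.sub(r"[^\w\s]", " ", text)       # strip special characters
    return re.sub(r"\s+", " ", text).strip().lower()

def preprocess(docs, min_df=2):
    cleaned = [[t for t in clean(d).split() if t not in STOPWORDS] for d in docs]
    # drop infrequent tokens (the paper uses document frequency below 5)
    df = Counter(t for doc in cleaned for t in set(doc))
    return [[t for t in doc if df[t] >= min_df] for doc in cleaned]

docs = ["<p>The market #grows 5%</p>", "the market and the economy"]
print(preprocess(docs))
```

A real pipeline for this corpus would additionally apply the BLSTM-CRF PoS tagger, proper-noun replacement, dictionary-based hashtag splitting, and a Bengali stemmer as listed above.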
Sophisticated supervised learning approaches, therefore, were not a viable option to address document classification problems. Even though large-scale corpora for other languages are available online, they are rare for under-resourced languages like Bengali. Hence we decided to build our own corpus. As described above, the articles are collected into different categories such that each belongs to one of the following: state, economy, entertainment, international, and sports, as outlined in table 1.

3.4 Hate speech dataset preparation
To prepare the data, we used a bootstrapping approach, which starts with an initial search for specific types of texts, articles, or tweets containing common slurs and terms pertaining to targeting characteristics. We manually identified frequently occurring terms in texts containing hate speech and references to specific entities. From the raw texts, we further annotated 100,000 statements, texts, or articles which directly or indirectly express 'hate speech'. However, we encountered numerous challenges; e.g., since different types of hate exist in the regions, distinguishing hate speech from non-hate offensive language is a challenging task, because the two sometimes overlap but are not the same [46]. Collected data samples were annotated in a semi-automated way (after removing mostly similar statements). First, we attempted to annotate them in an automated way, inspired by the community-comparison technique using creative political slang (CSPlang) [18]. We prepared normalized frequency vectors of 175 abusive terms⁶ across the raw texts, which are commonly used to express hate in the Bengali language. Then we assign the label 'hate' if at least one of these terms exists in the text; otherwise, we assign the label 'neutral'.

⁶ https://github.com/rezacsedu/BengFastText
This way, however, differentiating 'neutral' and 'non-hate' was not possible. The annotations were further validated and corrected by three experts (one South Asian linguist and two native speakers) into one of three categories: hate, non-hate, and neutral. To reduce possible bias, each label
was assigned based on majority voting over the annotators' independent opinions. Fortunately, certain types of hate were easy to identify and annotate based on CSPlang, which consists of nonstandard Bengali words conveying a positive or negative attitude towards a person, a group of people, or an issue that is the subject of discussion in political discourse [18], and which can easily be annotated as political hate. Finally, non-hate and neutral statements were removed from the list, and hate statements were further categorized into political, personal, gender abusive, geopolitical, and religious hate. We found that 3.5% of the annotated text was classified as hate speech, which resulted in 35,000 statements labeled as hate. Figure 3a shows the most frequently used terms expressing hate; some statistics can be found in table 2.

3.5 Dataset for sentiment analysis
Our empirical study found that opinions, reviews, comments, or reactions about news or someone's status updates are usually provided in Bengali. However, research has observed that about 20% of users express sentiments in English [36]. Also, some comments were only emoticons or were written in non-native scripts. As English and emoticons are already investigated elsewhere, we consider this outside the scope of the current work.

Table 3: Statistics of the dataset for sentiment analysis

Polarity  #Statement  #Words      AWS
Negative  153,000     7,273,200   40.16
Positive  167,000     5,573,550   30.65
Total     320,000     12,846,750  35.40

The annotation was done manually with the help of native Bengali speakers and majority voting (with statements of ambiguous polarity removed), which left 320,000 reviews categorized as either positive or negative.
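The semi-automated labeling just described (a term-lookup pass followed by majority voting among annotators) can be sketched as follows; the abusive-term set and the votes are hypothetical placeholders:

```python
# Sketch of the semi-automated annotation: a term-lookup pass assigns
# 'hate'/'neutral', then expert majority voting settles the final label.
from collections import Counter

ABUSIVE_TERMS = {"slur1", "slur2"}  # stand-in for the 175 Bengali abusive terms

def auto_label(text):
    # 'hate' if at least one abusive term occurs in the text, else 'neutral'
    tokens = set(text.lower().split())
    return "hate" if tokens & ABUSIVE_TERMS else "neutral"

def majority_vote(labels):
    # final label is the annotators' most common independent opinion
    return Counter(labels).most_common(1)[0][0]

print(auto_label("that slur1 again"))               # matches a term
print(majority_vote(["hate", "non-hate", "hate"]))  # expert votes
```

As the text notes, this lookup alone cannot separate 'neutral' from 'non-hate', which is why the expert voting step is needed.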
Figure 3a shows the most frequently used terms expressing hate; related statistics can be found in table 3. Finally, we follow similar preprocessing steps for extracting the most important words, based on the observation (i.e., Zipf's law) that, in a collection of data, the frequency of a given word is inversely proportional to its rank in the corpus: the most frequent word (rank 1) should occur approximately twice as often as the second most frequent word, three times as often as the third most frequent, and so on.

Figure 3: Frequency word clouds of potential hate, sentiments, and common topics. (a) Frequently used terms in hate speech; (b) Frequently used terms in sentiments; (c) Frequently used topics of discussion

4 METHODS
To validate the usefulness of the dataset and the BengFastText model, we perform several ML tasks, discussed below, covering word embeddings, network construction, training, and hyperparameter optimization.

4.1 Neural word embeddings
Given a document, e.g., an article, a tweet, or a review in Bengali text, we apply preprocessing to normalize the text (see section 3.2). The preprocessing reduces the vocabulary size in the dataset due to the colloquial nature of the texts and, to some degree, addresses the sparsity in word-based feature representations. We also tested keeping word inflections, using lemmatization instead of stemming, and lower document frequencies. Empirically, we found that using lemmatization contributed to slightly better accuracy, so it is used in the results reported subsequently. Each token from the preprocessed input is embedded into a 300-dimensional real-valued vector, where each element is the weight for that dimension for the token. For the later classification tasks, we constrain each sequence to 100 words, truncating long texts and padding shorter ones with zero values. However, one issue with
this specific design choice is that it forces us to consider 300 words per document for document classification; otherwise, for the majority of documents, the inputs to the convolutional layers would be padded with many blank vectors, which causes the network to perform poorly. First, the Word2Vec model is built using the skip-gram method, since it is computationally faster and more efficient than CBOW for large-scale text corpora [6] and has been used for Bengali sentiment analysis [41]. Given a sequence of words (w_1, w_2, ..., w_N) ∈ C, the neural network model aims to maximize the average log probability L_p (see eq. (1)) according to the context within a fixed-size window, in which c represents the context size [29]:

L_p = \frac{1}{N} \sum_{n=1}^{N} \sum_{-c \leq j \leq c,\, j \neq 0} \log p(w_{n+j} \mid w_n)  (1)

To define p(w_{n+j} \mid w_n), we use negative sampling, replacing \log p(w_O \mid w_I) with a function that discriminates target words w_O from a noise distribution P_n(w), drawing k words from P_n(w):

\log \sigma({v'_{w_O}}^{\top} v_{w_I}) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma(-{v'_{w_i}}^{\top} v_{w_I}) \right]  (2)

Eventually, the embedding of a word s occurring in corpus C is the vector v_s in eq. (2), derived by maximizing eq. (1). Then we built the BengFastText model based on GloVe for creating the embeddings [34], in which, instead of extracting the embeddings from a neural network designed to perform a different task, such as predicting neighboring words (i.e., skip-gram) or predicting the focus word (i.e., CBOW), the embeddings are optimized directly, so that the dot product of two word vectors equals the log of the number of times the two words occur near each other. Finally, we employed fastText for creating the embeddings [22] in the BengFastText model. Instead of learning vectors for words directly, fastText represents each word as an n-gram of characters, which helps capture the meaning of shorter words and allows the embeddings to understand suffixes and prefixes.
Once a word is represented using character n-grams, a skip-gram model with negative sampling based on [30] is used to score the similarity between a fixed target word w_t and a word within the context w_c to learn the embeddings. This approach intuitively sounds similar to Word2Vec and can be considered a CBOW model with a sliding window over a word, because no internal structure of the word is taken into account: as long as the characters are within this window, the order of the n-grams does not matter. The difference, however, is that the similarity function is not a direct dot product between the word vectors; rather, it is a dot product between two words represented by sums of n-gram vectors. Then a latent text embedding is derived as the average of the word embeddings, and the text embedding is used to predict the label. The overall objective is to minimize the following loss function [17, 22]:

-\frac{1}{N} \sum_{n=1}^{N} y_n \log(f(B A x_n))  (3)

where x_n represents an n-gram feature, A represents the lookup table to an average text embedding, and B converts the embedding to pre-softmax values for each class, while a hierarchical softmax [16] is applied for text classification [23] given the large number of classes, to minimize computational complexity.

Figure 4: t-SNE representation of the embeddings, where closer words are semantically and statistically more related

4.2
Network construction
In a CNN, convolutional filters are used to capture the local dependencies between neighboring words. However, a fixed filter length makes it hard for a CNN model to learn the overall dependencies of a whole sentence. The following steps are employed to overcome this limitation:
• We consider a 3-channel CNN approach, setting a fixed number of filters with different-sized kernels. This allows a text to be processed at different n-gram granularities at a time. The concatenated vector helps the model learn how best to integrate these interpretations, and can be considered a representation vector constructed by concatenating local relationship values from every channel.
• We then employ an LSTM layer, because an LSTM can preserve long-term dependencies among features. Hence, a feature vector constructed by an LSTM can carry the overall dependencies of a whole sentence more efficiently [24].
• We concatenate the multichannel-CNN and LSTM layers to get a shared feature representation: the CNN layers capture local relationships, while the LSTM layer carries overall relationships.
We expect the shared representation layer, containing the two vectors, to model the non-linearity more efficiently for our tasks, e.g., hate speech detection, document classification, or sentiment analysis. Given a text expressing hate (hopefully), the MConv-LSTM model either reuses the weight matrix from the pre-trained BengFastText model or generates a word embedding representation with an embedding layer. This means that we utilize two types of word embeddings to train the models: i) we initialize the weights in the embedding layer randomly and let our model learn the embeddings on the fly, or ii) we use pre-trained word embeddings from the GloVe, Word2Vec, and BengFastText models to set the weights of our embedding layer as a custom embedding layer. Using either option, we construct and train the MConv-LSTM network architecture shown in fig.
5, in which the rectified linear unit (ReLU) is used as the activation function for the hidden layers and Softmax in the output layer. Once the word embedding layer maps each text as a 'sequence' into a real vector domain, the embedding representation is fed into both the LSTM and the multi-CNN layers towards getting a shared feature representation. To capture both local and global relationships, we extended the Conv-LSTM network proposed in the literature [45], with inputs X_1, X_2, ..., X_t, cell outputs C_1, C_2, ..., C_t, hidden states H_1, H_2, ..., H_t, and gates i_t, f_t, o_t of the network [46]. Conv-LSTM determines the future state of a certain cell in the input hyperspace from the inputs and past states of its local neighbors, which is achieved by using a convolution operator in the state-to-state and input-to-state transitions [45]:

i_t = \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} + b_i)  (4)
f_t = \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} + b_f)  (5)
C_t = f_t \circ C_{t-1} + i_t \circ \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c)  (6)
o_t = \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \circ C_t + b_o)  (7)
H_t = o_t \circ \tanh(C_t)  (8)

where '*' denotes the convolution operator and '\circ' is the entrywise multiplication of two matrices of the same dimension. The second LSTM layer emits an output H, which is then reshaped into a feature sequence and fed into fully connected layers
to make the prediction at the next step and serve as input at the next time step. Intuitively, the LSTM layer treats the 100×300 input feature space as 100 timesteps of 300-dimensional embedded feature vectors and outputs 100 hidden units per timestep. The embedding layer likewise passes the 100×300 input feature space into three channels (i.e. a multi-1D convolutional layer) with 100 filters each, but with different kernel sizes of 4, 6, and 8, respectively (i.e. 4-grams, 6-grams, and 8-grams of hate speech text). We pad the input such that the output has the same length as the original input. The output of each convolutional layer is then passed to three separate dropout layers to regularize learning and avoid overfitting [40]. Intuitively, this can be thought of as randomly removing a word from sentences and forcing the classification not to rely on any individual words. This turns the input feature space into a 100×100 representation, which is then further down-sampled by three different 1D max pooling layers, each having a pool size of 4 along the word dimension and each producing an output of shape 25×100, where each of the 25 dimensions can be considered an 'extracted feature'. Each max pooling layer then 'flattens' the output space by taking the highest value in each timestep dimension. This produces a 1×100 vector, which emphasizes words that are highly indicative of interest.
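The architecture described above might be sketched in Keras roughly as follows; the layer sizes follow the text (100-word sequences, 300-dimensional embeddings, three Conv1D channels with 100 filters and kernel sizes 4/6/8, pool size 4, a 100-unit LSTM, and concatenation before a Softmax), but this is a reconstruction under assumed vocabulary and class counts, not the authors' exact code:

```python
# Rough Keras reconstruction of the MConv-LSTM network described in the text.
# VOCAB and N_CLASSES are placeholder values.
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, EMB_DIM, N_CLASSES = 10000, 100, 300, 5

inp = layers.Input(shape=(SEQ_LEN,))
emb = layers.Embedding(VOCAB, EMB_DIM)(inp)  # or weights from BengFastText

# Three CNN channels: 100 filters each, kernel sizes 4/6/8 (4/6/8-grams)
channels = []
for k in (4, 6, 8):
    c = layers.Conv1D(100, k, padding="same", activation="relu")(emb)
    c = layers.Dropout(0.5)(c)                # regularize each channel
    c = layers.MaxPooling1D(pool_size=4)(c)   # (100, 100) -> (25, 100)
    c = layers.GlobalMaxPooling1D()(c)        # (25, 100)  -> (100,)
    channels.append(c)

# LSTM channel carrying overall sentence dependencies
lstm = layers.LSTM(100)(emb)

merged = layers.Concatenate()(channels + [lstm])  # shared representation
merged = layers.Dropout(0.5)(merged)
out = layers.Dense(N_CLASSES, activation="softmax")(merged)

model = models.Model(inp, out)
# AdaGrad and categorical cross-entropy, as in section 4.3
model.compile(optimizer="adagrad", loss="categorical_crossentropy")
model.summary()
```

For sentiment analysis, the output layer would shrink to a binary head with binary cross-entropy, matching the training description in section 4.3.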
Then these vectors as two informationchannels (from CNN + LSTM) are then concatenated and fed intothe neural network after passing through another dropout layer.Finally, the concatenated representation is passed to a fully con-nected Softmax to predict a probability distribution over the classesfor the specific tasks.4.3 Network trainingThe first-order gradient-based AdaGrad optimizer is used to learnmodel parameters with varying learning rates and a batch size of 128to optimize the cross-entropy loss (categorical cross-entropyin case of document classification and hate speech detection andbinary cross-entropy loss in case of sentiment analysis) of thepredicted label of the text P vs the actual label of the text T.L =i, j,kTi, j,k log Pi, j,k +1 −Ti, j,klog1 − Pi, j,k(9)We perform hyperparameters optimization with random searchand 5-fold cross-validation. Further, we observed the performanceby adding Gaussian noise layers, followed by Conv and LSTMlayers to improve model generalization and reduce overfitting. Outof vocabulary (OOV), particularly in word embedding models, isa known issue. The OOV cannot be tackled by existing languagemodels like Polyglot, Word2Vec, and GloVe. This potential issuewith the pre-trainedWord2Vec model were addressed based on [44]:i) randomly setting the OOV word vector following a continuousuniform distribution, ii) randomly selecting an in-vocabulary wordfrom the pretrained model to use its vector.For the latter, surrounding-contexts are considered, given that alow-magnitude vector may not be appropriate at the start of train-ing [7]. Both settings are found to be empirically helpful in ourexperiments. However, the latter outperformed the former. Otheroptions could have been employed to get rid of OOV e.g. 
morpholog-ical classes [31] or character-word LSTM language models [42] butare particularly applied to non-Bengali languages such as Englishand Dutch languages.5 EXPERIMENTSIn this section, we discuss the results of the experiments basedon and without the pre-trained BengFastText model. In addition,we provide a comparative analysis between BengFastText</s>
and the baseline embedding methods Word2Vec and GloVe.

5.1 Experiment setup
All methods were implemented in Python⁷, and experiments were carried out on a machine with a Core i7 CPU, 32 GB of RAM, and Ubuntu 16.04. The software stack consists of Scikit-learn and Keras with the TensorFlow GPU backend, while network training is performed on an Nvidia GTX 1050 GPU with CUDA and cuDNN enabled to make the overall pipeline faster. Open-source implementations of Word2Vec⁸, fastText⁹, and GloVe¹⁰ are used to create the embeddings. For each experiment, 80% of the data is used for training with 5-fold cross-validation, and the trained model is tested on the 20% held-out data. Results based on the best hyperparameters produced through empirical random search are reported here (results based on an embedding model are marked with + in subsequent tables). We trained LR, SVM, KNN, NB, RF, and GBT as baseline models using character n-grams and word uni-grams with TF-IDF weighting. Additionally, we report results based on the Word2Vec and GloVe embedding methods. We report our results using macro-averaged precision, recall, and F1-score, since the datasets are imbalanced.

⁷ https://github.com/rezacsedu/BengFastText
⁸ Word2Vec: https://radimrehurek.com/gensim/models/word2vec.html
⁹ fastText: https://github.com/Kyubyong/wordvectors
¹⁰ GloVe: http://nlp.stanford.edu/projects/glove/

Figure 5: A schematic representation of the MConv-LSTM network, which starts by taking an input into an n-dimensional embedding hyperspace and passing it to both CNN and LSTM layers before a concatenated vector representation is fed through dense, dropout, and Softmax layers to predict the hate class
In the case of hate speech detection, standard precision, recall, and F1-score are used. For sentiment analysis, the Matthews Correlation Coefficient (MCC) and AUC score measure the performance of the classifier. Finally, we perform a model averaging ensemble (MAE) of the top-3 models to report the final prediction.

5.2 Analysis of document classification
As shown in fig. 6, classical models, especially KNN and NB, often confused economics- and international-related topics and managed to classify them correctly in only 45% and 35% of cases. This is because documents from both of these categories contain common words and topics discussing finance, development, the share market, international affairs, and politics. Further, as shown in table 1, the MConv-LSTM model was also confused between similar categories plus sports. However, for state-related topics, both the classic and MConv-LSTM classifiers perform quite well.
Each model's performance is then evaluated after initializing the network's weights using the pre-trained BengFastText model. Using the latter approach, the accuracy increased by 2.0 to 9.45 points weighted F1 (except for NB, which performed even worse with the word embeddings). As shown in table 4, among standalone classifiers the most significant boost was with MConv-LSTM, by 8.75%, which is about a 12% improvement compared to the previous results. Overall, MAE gives a 9.45% boost, which is 1.3% better than the second-best MConv-LSTM, showing the robustness of our approach.

5.3 Analysis of hate speech detection
We evaluated the models with and without initializing the network weights using the pretrained BengFastText model. Using the pretrained word embeddings, the overall accuracy increased by 2.15 to 7.27 points weighted F1, as shown in table 5.
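The model averaging ensemble (MAE) of the top-3 models can be sketched as soft voting over predicted class probabilities. The probability matrices below are invented, and the paper does not state whether averaging is over probabilities or hard votes, so this is one plausible reading:

```python
import numpy as np

# Hypothetical class-probability outputs of the top-3 models for 4 samples.
p_gbt  = np.array([[0.2, 0.8], [0.6, 0.4], [0.9, 0.1], [0.3, 0.7]])
p_rf   = np.array([[0.3, 0.7], [0.7, 0.3], [0.8, 0.2], [0.4, 0.6]])
p_lstm = np.array([[0.1, 0.9], [0.8, 0.2], [0.7, 0.3], [0.2, 0.8]])

# Model averaging ensemble: mean of the probabilities, then argmax.
p_mae = (p_gbt + p_rf + p_lstm) / 3.0
final = p_mae.argmax(axis=1)
print(final)  # [1 0 0 1]
```

Averaging probabilities tends to smooth out individual models' miscalibrations, which is consistent with MAE edging out the best single model in the tables.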
Nevertheless, each classifier's performance improved with the pretrained word embeddings except for NB (which performs the worst, with only 69.3% accuracy). Among tree-based classifiers, GBT performs best, giving as much as 86.2% accuracy, which is the best among the classic models as well. Overall,
MAE gives a 7.27% boost, which is 1.10% better than the second-best MConv-LSTM (which performs best as a standalone classifier, giving a 7.15% improvement).
This analysis can be further validated by calibrating the best-performing Conv-LSTM classifier against the different embedding methods, for which the output probability of the classifier can be directly interpreted as a confidence level in terms of the 'fraction of positives', as shown in fig. 8. As seen, the Conv-LSTM classifier gave probability values between 0.82 and 0.93, which means up to 93% of the predictions generated with BengFastText belong to true positive predictions.

5.4 Performance of sentiment analysis
Table 6 summarizes the results of sentiment analysis, which again suggest that each model performs better with the pre-trained word embeddings, as measured by the MCC, showing strong positive relationships. This result suggests that the predictions were strongly correlated with the ground truth, yielding a Pearson product-moment correlation coefficient higher than 0.65 for all the classifiers.
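The 'fraction of positives' reliability analysis above (fig. 8) corresponds to Scikit-learn's `calibration_curve`. The probabilities below are made-up values purely for illustration:

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Hypothetical predicted probabilities and true labels for the positive class.
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95])

# 'Fraction of positives' per probability bin -- the reliability curve.
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=2)
print(frac_pos, mean_pred)
```

A well-calibrated classifier produces a curve close to the diagonal: in each bin the fraction of true positives matches the mean predicted probability.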
Karim, Chakravarthi, McCrae, and Cochez

Table 4: Document classification performance based on embedding methods
Embedding method  Classifier   Precision      Recall         F1
BengFastText      LR           0.754 +3.8%    0.763 +3.9%    0.756 +3.85%
                  NB           0.734 -2.11%   0.736 -2.10%   0.737 -2.0%
                  SVM          0.766 +3.9%    0.737 +3.75%   0.739 +3.65%
                  KNN          0.741 +2.85%   0.745 +2.75%   0.748 +2.55%
                  GBT          0.825 +5.65%   0.825 +5.91%   0.826 +5.85%
                  RF           0.813 +5.7%    0.815 +6.3%    0.822 +6.24%
                  MConv-LSTM   0.871 +8.75%   0.883 +8.62%   0.871 +8.75%
                  MAE          0.883 +9.45%   0.886 +9.25%   0.892 +9.45%
GloVe             LR           0.725 +1.7%    0.731 +1.5%    0.723 +1.7%
                  NB           0.692 -3.25%   0.701 -3.1%    0.693 -3.2%
                  SVM          0.724 +2.2%    0.724 +2.2%    0.718 +2.4%
                  KNN          0.715 +2.75%   0.716 +2.54%   0.718 +2.45%
                  GBT          0.772 +2.32%   0.785 +2.45%   0.779 +2.65%
                  RF           0.792 +4.15%   0.792 +4.65%   0.795 +4.75%
                  MConv-LSTM   0.834 +6.4%    0.846 +7.12%   0.846 +7.31%
                  MAE          0.859 +7.4%    0.867 +7.2%    0.864 +7.5%
Word2Vec          LR           0.746 +2.4%    0.752 +2.6%    0.749 +2.7%
                  NB           0.721 -2.15%   0.717 -2.1%    0.719 -2.2%
                  SVM          0.741 +2.7%    0.732 +2.5%    0.736 +2.6%
                  KNN          0.732 +3.4%    0.725 +3.6%    0.728 +3.5%
                  GBT          0.806 +4.854%  0.815 +4.9%    0.810 +5.1%
                  RF           0.792 +5.6%    0.781 +5.4%    0.786 +5.5%
                  MConv-LSTM   0.851 +7.9%    0.862 +8.0%    0.856 +8.1%
                  MAE          0.865 +8.5%    0.876 +9.1%    0.870 +8.8%

Figure 6: Confusion matrix of the MConv-LSTM model for document classification: (a) without pretrained word embeddings; (b) using pretrained word embeddings

This is because the MConv-LSTM model learns better with the pre-trained word embeddings, which is why it gives about a 5.49% boost in terms of AUC score compared to training without pre-trained word embeddings. Overall, the MAE gives a 7.85% improvement.
On the other hand, the highest AUC score is also generated by the MConv-LSTM network, which is at least 2% better than the second-best score by the RF classifier, while the worst performance was recorded by the LR classifier. The ROC curves generated by the MConv-LSTM model are shown in fig. 7a and fig. 7b.
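The MCC and AUC metrics reported for sentiment analysis (table 6) can be computed with Scikit-learn. The labels, hard predictions, and scores below are hypothetical:

```python
from sklearn.metrics import matthews_corrcoef, roc_auc_score

# Hypothetical sentiment predictions (1 = positive, 0 = negative).
y_true  = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred  = [1, 1, 0, 0, 1, 0, 0, 0]                    # hard labels for MCC
y_score = [0.9, 0.8, 0.2, 0.3, 0.7, 0.1, 0.4, 0.35]   # probabilities for AUC

mcc = matthews_corrcoef(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)
print(round(mcc, 3), round(auc, 3))
```

MCC uses all four cells of the confusion matrix, so it stays informative on imbalanced data where accuracy can be misleading; values near +1 indicate strong agreement with the ground truth.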
As seen, the AUC scores are consistent across folds, which signifies that the predictions by the MConv-LSTM model are much better than random guessing.

5.5 Effects of feature selection
Based on the results for each classification task, it is obvious that feature selection can be a very powerful technique to improve the performance of the classic methods, especially with NB, SVM, KNN, and tree-based approaches. Contrarily, although LR is intrinsically simple, has low variance, and is less prone to over-fitting, a feature selector based on it may discriminate features very aggressively. This forces the model to lose some useful features, which results in lower performance. Investigating alternative feature-selection algorithms, e.g., recursive feature elimination or genetic algorithms, is beyond the scope of this research.
Another reason could be the size of the training data, which is comparatively small, i.e., the hate speech dataset.

Table 5: Performance of hate speech detection based on embedding methods
Embedding method  Classifier   Precision      Recall         F1
BengFastText      LR           0.723 +2.21%   0.727 +2.25%   0.723 +2.15%
                  NB           0.691 -1.05%   0.694 -1.12%   0.693 -1.15%
                  SVM          0.735 +2.54%   0.729 +2.90%   0.728 +2.25%
                  KNN          0.716 +2.61%   0.711 +2.52%   0.712 +2.32%
                  GBT          0.842 +5.37%   0.845 +5.41%   0.845 +5.79%
                  RF           0.861 +6.25%   0.857 +4.91%   0.862 +5.32%
                  MConv-LSTM   0.881 +7.25%   0.883 +7.54%   0.882 +7.15%
                  MAE          0.894 +7.45%   0.896 +7.65%   0.891 +7.27%
GloVe             LR           0.718 +1.54%   0.715 +1.65%   0.716 +1.40%
                  NB           0.672 -2.21%   0.673 -2.12%   0.681 -2.15%
                  SVM          0.724 +2.92%   0.726 +2.75%   0.726 +2.65%
                  KNN          0.715 +2.7%    0.716 +2.75%   0.714 +2.53%
                  GBT          0.823 +4.772%  0.825 +5.11%   0.824 +5.17%
                  RF           0.818 +5.93%   0.824 +4.72%   0.821 +6.10%
                  MConv-LSTM   0.827 +6.12%   0.822 +6.12%   0.824 +6.11%
                  MAE          0.831 +6.55%   0.834 +6.65%   0.837 +6.9%
Word2Vec          LR           0.69 +1.75%    0.70 +2.6%     0.71 +1.25%
                  NB           0.65 -2.5%     0.66 -2.2%     0.67 -1.6%
                  SVM          0.69 +2.1%     0.70 +1.75%    0.77 +2.6%
                  KNN          0.70 +2.5%     0.71 +2.6%     0.71 +2.5%
                  GBT          0.806 +4.854%  0.815 +4.9%    0.810 +5.1%
                  RF           0.74 +5.6%     0.75 +4.7%     0.74 +5.3%
                  MConv-LSTM   0.77 +5.7%     0.78 +5.5%     0.78 +5.8%
                  MAE          0.79 +6.45%    0.80 +6.25%    0.79 +6.7%

Figure 7: ROC curves of the cross-validated MConv-LSTM model (hate speech detection): (a) without pretrained word embeddings; (b) with pretrained word embeddings

However, tree-based approaches such as RF and GBT perform quite a bit better compared to NB, SVM, and KNN; especially NB, which even performs worse with the pre-trained word embeddings.
One potential reason behind such worse performance of NB is that its conditional independence assumption, which assumes features are independent of one another when conditioned upon the class labels, was rarely accurate in our case. Compared to the classic models, the neural network-based MConv-LSTM model seems to be much better (by about 3 to 4%), as it outperforms the best-performing classic GBT model in F1-score, probably because the neural network managed to capture abstract features that could not be modeled by the classic models.

5.6 Effects of number of samples
To understand the effects of having more training samples, and to understand whether our classifiers suffer more from variance errors or bias errors, we observed the learning curves of the top-3 classifiers (i.e., RF, GBT, and MConv-LSTM) and SVM (a linear model) for varying numbers of training samples in the case of document classification. As shown in fig. 9, for SVM both the validation and training scores converge to a value that is too low with increasing size of the training set.
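Learning curves like those in fig. 9 can be produced with Scikit-learn's `learning_curve` helper. The linearly separable toy data below is a hypothetical stand-in for the document features:

```python
import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

# Hypothetical linearly separable data standing in for the document features.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Training/validation accuracy for growing training-set sizes (cf. fig. 9).
sizes, train_scores, val_scores = learning_curve(
    SVC(kernel="linear"), X, y, cv=5, train_sizes=np.linspace(0.2, 1.0, 4))
print(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1))
```

A persistent gap between the training and validation curves indicates variance error (the model would benefit from more data), while two curves converging at a low score indicate bias error.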
Hence, SVM didn’t benefit much from more training samples.However, since RF and GBT are tree-based ensemble methods andMConv-LSTM model learn more complex concepts producing lowerbias, the training scores are much higher than the validation scoresfor the maximum number of samples, i.e., adding more trainingsamples has increased model’s generalization.5.7 Comparison with baseline embeddingsOur empirical study founds that almost every classifier performspoorly across every supervised learning task that are based on, , Karim, Chakravarthi, McCrae, and CochezFigure 8: Calibrating classifiers on word embeddings usingBengFastText showing reliability curvesTable 6: Performance of sentiment analysisEmbedding method Classifier MCC AUCBengFastTextLR 0.652+2.9% 0.76+2.8%NB 0.653 + 3.25% 0.77 + 2.32%SVM 0.694+2.25% 0.79+2.45%KNN 0.701+2.12% 0.80+3.33%GBT 0.715+3.18% 0.85+3.91%RF 0.727+3.67% 0.86+4.26%MConv-LSTM 0.746+5.49% 0.87+7.13%MAE 0.758+7.85% 0.88+7.42%GloVeLR 0.646+2.9% 0.75+1.75%NB 0.644 + 2.10% 0.76 + 2.11%SVM 0.681+1.85% 0.78+2.51%KNN 0.695+2.22% 0.79+2.28%GBT 0.707+2.63% 0.84+2.67%RF 0.719+2.45% 0.85+3.17%MConv-LSTM 0.736+4.76% 0.86+5.78%MAE 0.734+6.62% 0.87+5.29%Word2VecLR 0.634+2.6% 0.73+2.7%NB 0.645 + 2.25% 0.71 + 2.40%SVM 0.677+1.75% 0.77+1.5%KNN 0.662+3.2% 0.78+3.3%GBT 0.683+3.5% 0.83+3.8%RF 0.692+4.5% 0.82+4.3%MConv-LSTM 0.705+6.4% 0.83+6.5%MAE 0.716+8.1% 0.85+7.4%the embeddings generated by BengFastText method compared toWord2Vec and GloVe, as shown in table 4, table 3, and table 2. Inparticular, except for the sentiment analysis task in which classi-fiers performance were consistent, whereas each classifier’s per-formance were clearly boosted based on embeddings generated bythe BengFastText model, whereas. 
Further, an initial experiment based on the Polyglot embedding method shows that it failed to capture the semantic contexts from the texts, probably because Polyglot uses only 64 dimensions for its embeddings. Although classifiers trained on embeddings from the Word2Vec model work well across
tasks, the online scanning approach used by Word2Vec is sub-optimal since the global statistical information regarding word co-occurrences is not fully exploited [34]. GloVe, on the other hand, performed better than Word2Vec since it produces a vector space with meaningful substructure and performs consistently well in word analogy tasks, outperforming related models on similarity tasks and named entity recognition [34].
Further, Word2Vec and GloVe both failed to provide any vector representation for words that are not in the model dictionary, i.e., they are not resilient against OOV terms. Hence, the approach we used to tackle OOV (i.e., randomly assigning a word from the pretrained model's in-vocabulary word list by considering the surrounding contexts) won't give accurate embeddings [7]. On the other hand, fastText performed much better compared to the other embedding methods since fastText works well with rare words. Thus, even if a word was not seen during training, it can be broken down into n-grams to get its corresponding embedding.

6 CONCLUSION AND OUTLOOK
In this paper, we provide three different classification benchmarks for the under-resourced Bengali language, based on word embeddings and a deep neural network architecture called MConv-LSTM. The largest BengFastText word embedding model, which is built on 250 million Bengali articles, seems able to sufficiently capture the semantics of words. To show the effectiveness of our approach, we prepared three datasets of hate speech, sentiment analysis, and commonly discussed topics. Then we experimented on three use cases: hate speech detection, sentiment analysis, and text classification, using a method that combines CNN and LSTM networks with dropout and pooling, which was empirically found to improve classification accuracy across all use cases.
We also conducted comparative evaluations on all of these datasets and showed that the proposed method outperformed the classic models across metrics and use cases. We also provided results based on other Bengali language embedding methods like fastText and Polyglot. Our results show that approaches based on BengFastText embeddings clearly outperformed Word2Vec- and GloVe-based ones. Classic methods that depend on pre-engineered features do not require sophisticated feature engineering, but using automatic feature selection techniques on generic features, e.g., n-grams, can in fact produce better results. Although research suggests character-based features to be more effective than word-based ones, we experienced better results with MConv-LSTM using only word-based features, which is likely due to the superiority of the network architecture. The results of different experiments showed that feature selection together with drop-out, a Gaussian noise layer, and pooling can significantly enhance the learning capabilities of MConv-LSTM through better training.
Research has also found that misspelled words appear in up to 15% of web search queries [35]. Such misspelled words are also responsible for OOV terms (apart from other issues like abbreviations, slang, etc.) in language models. A recent language model called Misspelling Oblivious Word Embeddings (MOWE) [35] could likewise be employed in the context of the Bengali language to generate word embeddings. MOWE, which is a combination of fastText and a supervised learning technique, tries to embed misspelled words close to their correct contexts and variants. This makes MOWE resilient to misspellings.
In the future, we intend to employ a MOWE-based approach to generate the embeddings and explore other possibilities, such as i) building and training a likely more robust neural network by stacking multiple
conv and bidirectional-LSTM layers, which are good for extracting hierarchical features, ii) using Bidirectional Encoder Representations from Transformers (BERT) [12] (based on deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers) for solving similar problems even without using word embeddings, iii) integrating user-centric features, e.g., the frequency with which a user is detected posting hate speech and the user's interactions with others, iv) studying and quantifying the difference between hate speech, abusive or offensive language, and cyber-bullying, and v) further breaking sentiment polarities down into specific categories, e.g., sentiment toward cricket, electronics, food, or celebrities.

Figure 9: Learning curves with validation and training scores of the top classifiers: (a) SVM, (b) GBT, (c) RF, (d) Conv-LSTM

ACRONYMS
Acronyms and their full forms used in this paper are as follows:
BERT  Bidirectional Encoder Representations from Transformers
CNN  Convolutional Neural Networks
CSPlang  Creative Political Slang
CRF  Conditional Random Field
DT  Decision Trees
DL  Deep Learning
GBT  Gradient Boosted Trees
GRU  Gated Recurrent Unit
LR  Logistic Regression
LSTM  Long Short-term Memory
MConv-LSTM  Multichannel Convolutional-LSTM
ML  Machine Learning
MCC  Matthews Correlation Coefficient
MAE  Model Averaging Ensemble
MOWE  Misspelling Oblivious Word Embeddings
NER  Named Entity Recognition
NB  Naïve Bayes
NLP  Natural Language Processing
OOV  Out-of-vocabulary
PoS  Part-of-speech
RF  Random Forest
ReLU  Rectified Linear Unit
SVM  Support Vector Machines

REFERENCES
[1] Adnan Ahmad and Mohammad Ruhul Amin. 2016. Bengali word embeddings for solving document classification problem. In 19th IEEE Intl. Conf. on ICCIT. 425–430.
[2] Chowdhury Alam. 2016. Bidirectional LSTMs-CRFs networks for Bangla POS tagging. In 19th IEEE Intl. Conf. on ICCIT.
377–382.
[3] Wafa Alorainy, Pete Burnap, Han Liu, and Matthew Williams. 2018. Cyber Hate Classification: 'Othering' Language And Paragraph Embedding. arXiv preprint arXiv:1801.07495 (2018).
[4] Rami Alrfou and Steven Skiena. 2013. Polyglot: Distributed Word Representations for Multilingual NLP. In Proc. of the 17th Intl. Conference on Computational Natural Language Learning. Association for Computational Linguistics, 183–192. http://aclweb.org/anthology/W13-3520
[5] Laurent Besacier, Etienne Barnard, Alexey Karpov, and Tanja Schultz. 2014. Automatic speech recognition for under-resourced languages: A survey. Speech Communication 56 (2014), 85–100.
[6] Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embedding. In Joint European Conf. on ML-KDD. Springer, 132–148.
[7] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics 5 (2017), 135–146.
[8] Abhisek Chakrabarty and Utpal Garain. 2016. BenLem (A Bengali Lemmatizer) and Its Role in WSD. ACM Transactions on Asian and Low-Resource Language Information Processing 15, 3 (2016), 12.
[9] Bharathi Raja Chakravarthi, Mihael Arcan, and John P. McCrae. 2018. Improving Wordnets for Under-Resourced Languages Using Machine Translation. In Proceedings of the 9th Global WordNet Conference.
[10] Long Chen, Fajie Yuan, Joemon M. Jose, and Weinan Zhang. 2018. Improving negative sampling for word representation using self-embedded features. In Proc. of the Eleventh ACM Intl. Conf. on Web Search and Data Mining. ACM, 99–107.
[11] Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. arXiv preprint arXiv:1703.04009 (2017).
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.
BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
[13] Asif Ekbal, Md Hasanuzzaman, and Sivaji Bandyopadhyay. 2009. Voted approach for part of speech tagging in Bengali. In Proc. of the 23rd Pacific
Asia Conf. on Language, Information and Computation, Vol. 1.
[14] Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, and Elizabeth M. Belding. 2018. Hate Lingo: A Target-Based Linguistic Analysis of Hate Speech in Social Media. In Proceedings of the Twelfth International Conference on Web and Social Media, ICWSM 2018, Stanford, California, USA, June 25-28, 2018. 42–51. https://aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17910
[15] Paula Fortuna and Sérgio Nunes. 2018. A Survey on Automatic Detection of Hate Speech in Text. ACM Computing Surveys (CSUR) 51, 4 (2018), 85.
[16] Joshua Goodman. 2001. Classes for fast maximum entropy training. arXiv preprint cs/0108006 (2001).
[17] Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning Word Vectors for 157 Languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).
[18] Nabil Hossain and Henry Kautz. 2018. Discovering Political Slang in Readers' Comments. New York Times 16 (2018), 873–421.
[19] MS Islam. 2009. Research on Bangla language processing in Bangladesh: progress & challenges. In 8th Intl. Language & Development Conf. 23–25.
[20] Md Islam, Fazla Elahi Md Jubayer, Syed Ikhtiar Ahmed, et al. 2017. A comparative study on different types of approaches to Bengali document categorization. arXiv preprint arXiv:1701.08694 (2017).
[21] R. Izsák. 2015. Hate speech and incitement to hatred against minorities in the media. UN Human Rights Council, A/HRC/28/64 (2015).
[22] Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651 (2016).
[23] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759 (2016).
[24] Md Rezaul Karim, Michael Cochez, Joao Bosco Jares, Mamtaz Uddin, Oya Beyan, and Stefan Decker. 2019.
Drug-Drug Interaction Prediction Based on Knowledge Graph Embeddings and Convolutional-LSTM Network. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics. ACM, 113–123.
[25] Muhammad Khan, Md Karim, and Yangwoo Kim. 2018. A Two-Stage Big Data Analytics Framework with Real World Applications Using Spark Machine Learning and Long Short-Term Memory Network. Symmetry 10, 10 (2018), 485.
[26] Muhammad Ashfaq Khan, Md Karim, Yangwoo Kim, et al. 2019. A Scalable and Hybrid Intrusion Detection System Based on the Convolutional-LSTM Network. Symmetry 11, 4 (2019), 583.
[27] Rohan Kshirsagar, Tyus Cukuvac, Kathleen McKeown, and Susan McGregor. 2018. Predictive Embeddings for Hate Speech Detection on Twitter. arXiv preprint arXiv:1809.10644 (2018).
[28] Munirul Mansur. 2006. Analysis of n-gram based text categorization for Bangla in a newspaper corpus. Ph.D. Dissertation. BRAC University.
[29] Tomas Mikolov, Kai Chen, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
[30] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. 3111–3119.
[31] Thomas Müller and Hinrich Schütze. 2011. Improved modeling of out-of-vocabulary words using morphological classes. In Proc. of the 49th Annual Meeting of the ACL: Human Language Technologies: Vol. 2. ACL, 524–528.
[32] John Nockleby. 2000. Hate speech. Encyclopedia of American Constitution 3 (2000).
[33] Alexandra Olteanu, Carlos Castillo, Jeremy Boy, and Kush R Varshney. 2018. The Effect of Extremist Violence on Hateful Speech Online. arXiv preprint arXiv:1804.05704 (2018).
[34] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of
the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1532–1543.
[35] Aleksandra Piktus, Necati Bora Edizel, Piotr Bojanowski, Edouard Grave, Rui Ferreira, and Fabrizio Silvestri. 2019. Misspelling Oblivious Word Embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 3226–3234.
[36] Md Atikur Rahman and Emon Kumar Dey. 2018. Datasets for Aspect-Based Sentiment Analysis in Bangla and Its Baseline Evaluation. Data 3, 2 (2018), 15.
[37] Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgilio A. F. Almeida, and Wagner Meira Jr. 2018. Characterizing and Detecting Hateful Users on Twitter. In Proceedings of the Twelfth International Conference on Web and Social Media, ICWSM 2018, Stanford, California, USA, June 25-28, 2018. 676–679. https://aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17837
[38] Joni Salminen, Hind Almerekhi, and Milica Milenkovic. 2018. Anatomy of Online Hate: Developing a Taxonomy and Machine Learning Models for Identifying and Classifying Hate in Online News Media. In ICWSM. 330–339.
[39] Joni Salminen, Hind Almerekhi, Milica Milenkovic, and Jung. 2018. Anatomy of Online Hate: Developing a Taxonomy and Machine Learning Models for Identifying and Classifying Hate in Online News Media. In ICWSM. 330–339.
[40] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
[41] Sakhawat Hosain Sumit, Md Zakir Hossan, Tareq Al Muntasir, and Tanvir Sourov. 2018. Exploring Word Embedding For Bangla Sentiment Analysis. In Intl. Conf. on Bangla Speech and Language Processing (ICBSLP), Vol. 21. 22.
[42] Lyan Verwimp, Joris Pelemans, Patrick Wambacq, et al. 2017. Character-word LSTM language models.
arXiv preprint arXiv:1704.02813 (2017).
[43] Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proc. of the NAACL Student Research Workshop. 88–93.
[44] Xiaocong Wei, Hongfei Lin, Liang Yang, and Yuhai Yu. 2017. A convolution-LSTM-based deep neural network for cross-domain MOOC forum post classification. Information 8, 3 (2017), 92.
[45] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems. 802–810.
[46] Ziqi Zhang and Lei Luo. 2018. Hate Speech Detection: A Solved Problem? The Challenging Case of Long Tail on Twitter. arXiv preprint arXiv:1803.03662 (2018).
[47] Ziqi Zhang, David Robinson, and Jonathan Tepper. 2018. Detecting Hate Speech on Twitter Using a Convolution-GRU Based Neural Network. In ESWC. Springer, 745–760.
Semi supervised keyword based bengali document categorization. Conference Paper, September 2016. DOI: 10.1109/CEEICT.2016.7873040
Semi Supervised Keyword Based Bengali Document Categorization
Fahim Quadery, Abdullah Al Maruf, Tamjid Ahmed, Md. Saiful Islam
Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Kumargaon, Sylhet-3114, Bangladesh
Email: fahim@student.sust.edu, maruf@student.sust.edu, tamjid@student.sust.edu, saiful-cse@sust.edu

Abstract—Document categorization has been an area of important research over the last couple of decades. The basic task in document categorization is classifying a given document into some predefined classes. Bengali is among the top ten most spoken languages in the world and is spoken by more than 200 million people, but the candid truth is that it still lacks significant research effort in the area of Bengali document categorization. In the first phase of this paper, a model is designed that extracts keywords from a Bengali document. We crawled over 35,000 news documents from popular Bengali newspapers and journals. Those documents were stemmed, and less significant words were removed using a stemmer and a Parts-of-Speech (POS) tagger. A statistical approach is used to extract keywords from the documents. Then a probabilistic distribution and semi-supervised learning with the Naïve Bayes algorithm are used to approximate the category of a given Bengali document. Results and statistical data show the effectiveness of this model.
Index Terms—Chi-square, Probability distribution, Bengali Document, Word frequency, Naïve Bayes, supervised learning, semi-supervised learning, Training Dataset, Pipilika, Chorki, Co-occurrence matrix, Classifier, Stemmer, POS tagger, Corpus.

I. Introduction
Document categorization has a vast application in this computer-dominated time.
In this paper we will discuss what document categorization is, why it is important, and, step by step, how a model has been built that classifies a Bengali document into some predefined classes. Document classification or document categorization is a problem in library science, information science, and computer science. The task is to assign a document to one or more classes or categories. Document categorization has versatile applications: spam filtering, sentiment analysis, language identification, finding similar articles, and automatically determining the genre of a text.

II. Bengali Document Categorization and Related Works
Document categorization is now an important topic of library science. There is a healthy number of veteran researchers working on this topic all over the globe. But all this research is mostly focused on English document categorization. Despite being one of the most widely spoken languages, for Bengali document categorization there is little work, and resources are available only sporadically on the World Wide Web.
Most of the online news portals categorize their news manually. However, a few research works have been done on Bengali document categorization. [1] proposed an n-gram based technique for document categorization. In [2], Naïve Bayes classification is used to predict the category of news articles. Over 7,000 documents were crawled from newspapers in [2]. The documents were tokenized, stemmed, and stop words were removed. Then the documents were treated as vectors, and using a Naïve Bayes classifier the documents were categorized. There are
two Bengali search engines, "Pipilika" and "Chorki", that use Bengali document categorization. This paper tested and learned about [1], [2] and the Pipilika search engine's approach and the robustness of their models, i.e. their pros and cons, and also compared their work with this paper.

III. Background Studies
Different approaches are available for keyword extraction in English and Bengali. The statistical approach uses statistical models such as chi-squared [3] and n-grams [4] [1]. The linguistic approach mainly analyzes the linguistic features of the language to find the important keywords of a document [5]. The machine learning approach uses supervised or semi-supervised learning [6]. The noun phrase extraction approach extracts noun phrases from the document and tries to find the important keywords using statistical analysis, known as the Conditional Random Field (CRF) model [7]. The graph-based approach builds a graph using linguistic analysis of the document, then seeds a leap word and propagates its value to the other nodes of the graph [8], finally selecting the most heavily weighted keywords for the document. In this paper a semi-supervised machine learning approach on top of statistical analysis is introduced, which required studying several machine learning models; we finally found a model that suits the Bengali language well. Study results on machine learning models and statistics are described in the paper.

A. Chi-square Distribution
The Chi-square distribution can be used to measure the difference between the actual count and the expected count of distinct elements [3]. In this paper Chi-square was used to determine the important words in a document. Taking the degree of freedom as one, the Chi-square statistic is

χ² = Σ_i (o_i − ε_i)² / ε_i    (1)

where χ² is the Chi-square value, o_i the observed frequency, and ε_i the expected frequency.
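As a minimal illustration of equation (1), the following sketch computes the statistic for hypothetical observed and expected word counts (the counts here are invented purely for the example):

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic of eq. (1): sum of (o_i - e_i)^2 / e_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts of a word in three categories vs. a uniform expectation.
observed = [46, 30, 24]
expected = [33.3, 33.3, 33.3]
print(round(chi_square(observed, expected), 2))  # → 7.77
```

A larger value indicates a bigger deviation of the observed counts from expectation, which is exactly why high-χ² terms are treated as more informative later in the paper.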
B. Naïve Bayes Classifier
In machine learning, Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naïve) independence assumptions between features [9]. The Naive Bayes probabilistic classification model was first used for text classification by D. Lewis [10].

Let U be a universe of all possible outcomes (of an experiment, for instance), and suppose we are interested in some subsets of it, namely events A and B. The probabilities of events A and B happening individually are

P(A) = |A| / |U|    (2)

and

P(B) = |B| / |U|.    (3)

Given B, the probability of A is

P(A|B) = P(AB) / P(B)    (4)

and, vice versa, given A, the probability of B is

P(B|A) = P(AB) / P(A).    (5)

Putting these two equations together we get

P(A|B) P(B) = P(B|A) P(A)    (6)

and finally

P(A|B) = P(B|A) P(A) / P(B),    (7)

which is Bayes' theorem. The insight here is to train the algorithm with some features against some classes; then, when we feed the model some features, it returns classes.

IV. Methodology
In this paper, the classifier was trained with more than 35,000 Bengali documents, mostly collected from newspapers and journals. First each document was stemmed and less important words (i.e. অব য়) were omitted. Then the most frequent words (top 30%) were filtered out. After finding the frequent words, the Chi-square value of each term with the frequent words was calculated. Those apparently sporadic terms having higher Chi-square values tend to be more important in that document. Once the important words for all documents under a category are found, all those important words are treated
like a single document for that particular category. Another Chi-square pass was then applied to the important words to filter out the keywords for that particular category. This second Chi-square pass trimmed our training data set, streamlined the model, and had a big impact on both category approximation accuracy and pre-processing CPU time. When a new document is to be classified, the keywords of that document are extracted and the Naïve Bayes classifier is applied to those keywords for each category. This approach accomplishes the document categorization.

A. Extracting Keywords
Following [3], the Chi-square value of each term w of the document with the set G of most frequent terms is

χ²(w) = Σ_{g∈G} (freq(w, g) − p_g n_w)² / (p_g n_w)    (8)

A document contains sentences of different lengths. If a term appears in a long sentence, it is more likely to co-occur with many terms; if a term appears in a short sentence, it is less likely to co-occur with other terms [3]. Here p_g is (the sum of the total number of terms in sentences where g appears) divided by (the total number of terms in the document), and n_w is the total number of terms in sentences where w appears. The product p_g n_w thus represents the expected frequency of co-occurrence. A term co-occurring frequently with a particular term g ∈ G has a high χ² value; however, such terms are sometimes adjuncts of g and not important terms themselves [3].

Here are the Chi-square values on a sample data set.

Table I: Chi-Square values of words
No | Word | Chi-Square value
1. | পুলিশ | 46901.248
2. | উপজেলা | 33105.5258
3. | গতকাল | 29368.5810
4. | রাত | 28220.190
5. | জানা | 28218.628

After the first Chi-square pass on the input set, the top 20% of words by value were retained. On that set of words, Chi-square was applied a second time to filter once again, and hence we end up with the keywords.

Table II: Chi-Square values after 2nd pass
No | Word | Chi-Square value
1. | পুলিশ | 1324.22
2. | উপজেলা | 966.98
3. | গতকাল | 914.41
4. | রাত | 619.34
5. | জানা | 469.90
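The keyword scoring of equation (8) can be sketched as follows. This is a simplified reading of the method, not the authors' exact implementation: `freq(w, g)` is approximated here as the number of sentences that w and g share, and the input is an invented toy document of tokenized sentences, used purely for illustration.

```python
def keyword_chi_square(sentences, frequent_terms):
    """Score each term w by eq. (8): deviation of its co-occurrence with each
    frequent term g from the expectation p_g * n_w."""
    total_terms = sum(len(s) for s in sentences)
    # p_g: terms appearing in sentences that contain g, over all terms
    p = {g: sum(len(s) for s in sentences if g in s) / total_terms
         for g in frequent_terms}
    vocab = {w for s in sentences for w in s}
    scores = {}
    for w in vocab:
        n_w = sum(len(s) for s in sentences if w in s)  # terms in sentences with w
        chi2 = 0.0
        for g in frequent_terms:
            if g == w:
                continue
            freq_wg = sum(1 for s in sentences if w in s and g in s)
            expected = p[g] * n_w
            if expected > 0:
                chi2 += (freq_wg - expected) ** 2 / expected
        scores[w] = chi2
    return scores

# Toy tokenized document (invented English stand-ins for the Bengali terms).
sentences = [["police", "case", "night"], ["police", "night"], ["bank", "money", "loan"]]
scores = keyword_chi_square(sentences, ["police", "bank"])
print(max(scores, key=scores.get))  # → night
```

Terms whose co-occurrence pattern deviates most from the expected frequency get the highest scores, mirroring Tables I and II.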
B. Classifying Documents
As mentioned previously, once the keywords are extracted, a Naïve Bayes classifier is trained on those keywords. To serve this purpose, the classifier is trained with over 35,000 documents of 12 categories.

1) Training the Naïve Bayes Model: The occurrence of each keyword under each class is counted. From this, a table containing the count of features against all classes is made; a sample is shown in Table III. This table shows some frequent keywords and their corresponding occurrences under different classes [3]. From this table the prior probability, the probability of features (evidence), and the probability of likelihood are calculated. All these terms are essential for calculating Naïve Bayes and are described here in gist.

Table III: Keyword co-occurrence table
Class \ Feature | গতকাল | দেশ | পুলিশ | জানান | বাংলাদেশ | মোট (Total)
Accident | 1064 | 42 | 761 | 877 | 71 | 1938
Opinion | 291 | 3948 | 1049 | 284 | 2885 | 8457
Crime | 1611 | 197 | 1679 | 1571 | 211 | 5269
Total | 2966 | 4187 | 3489 | 2732 | 3057 | 15664

The prior probability is the chance of a category when guessing blindfolded. For instance, the prior probabilities of the above categories are:

P(accident) = 1938 / 15664
P(opinion) = 8457 / 15664
P(crime) = 5269 / 15664

The probability of evidence, or feature probability, is the probability of drawing an individual feature from all the features. The feature probabilities of the first three features are:

P(গতকাল) = 2966 / 15664
P(পুলিশ) = 3489 / 15664
P(বাংলাদেশ) = 3057 / 15664

2) Calculating the Probability Distribution: The probability of likelihood of a feature is nothing but the conditional probability of finding that feature while
a class is given. Let us calculate the likelihood of the first few features for the Opinion category:

P(পুলিশ | Opinion) = 1049 / 8457
P(জানান | Opinion) = 284 / 8457
P(বাংলাদেশ | Opinion) = 2885 / 8457

Now consider the probability distribution of a set of features F, which is the product of the likelihoods of each feature in the set over the probability of the evidence. After calculating the probability of the feature set for all the classes, the classes are sorted in descending order of probability value. The category with the highest probability is the most likely category of that document.

V. Training Data and Statistics
A. Classes or Categories
This paper used 12 different categories, which cover a major portion of Bengali news categories. The categories worked with are given in Table IV.

Table IV: Category List
Accident, Art, Crime, Economics, Environment, International, Education, Entertainment, Opinion, Science-tech, Politics, Sports

The other existing Bangla news categorization model, from the "Pipilika" search engine, worked on four categories, shown in Table V.

Table V: Pipilika Search Engine's Category List
Economics, Entertainment, International, Sports

From the Naïve Bayes equation it is trivial to show that increasing the number of classes reduces the performance of the classifier; the number of classes and the performance are inversely related.

B. Training Model
The Naïve Bayes model is trained with a total of 35,580 unique text files across twelve categories of Bengali documents. This dataset is used to train the classification model by calculating the prior probability and the evidence probability. Then, for every category, the conditional probability is calculated using the keywords of that category. Once all the conditional probabilities are calculated, it is straightforward to calculate the probability of a certain class for a given feature set. The size of the training set has an almost linear relation with performance.

Figure 1. Training Data-set.
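The training and classification steps above can be sketched concretely from the Table III counts. This is an illustrative reconstruction, not the authors' code: the feature names are transliterated for readability, and the class totals are recomputed from the five visible columns (the printed totals in Table III do not all match the visible cells), so the resulting probabilities are approximate.

```python
# Naive Bayes quantities rebuilt from the Table III keyword counts.
counts = {
    "Accident": {"gotokal": 1064, "desh": 42, "police": 761,
                 "janan": 877, "bangladesh": 71},
    "Opinion":  {"gotokal": 291, "desh": 3948, "police": 1049,
                 "janan": 284, "bangladesh": 2885},
    "Crime":    {"gotokal": 1611, "desh": 197, "police": 1679,
                 "janan": 1571, "bangladesh": 211},
}
class_totals = {c: sum(f.values()) for c, f in counts.items()}
grand_total = sum(class_totals.values())

prior = {c: t / grand_total for c, t in class_totals.items()}        # P(class)
likelihood = {c: {w: n / class_totals[c] for w, n in f.items()}      # P(word|class)
              for c, f in counts.items()}

def classify(features):
    """Score each class by prior * product of likelihoods; return the best."""
    scores = {}
    for c in counts:
        p = prior[c]
        for w in features:
            p *= likelihood[c].get(w, 1e-9)  # tiny floor for unseen words
        scores[c] = p
    return max(scores, key=scores.get)

print(classify(["police", "gotokal", "janan"]))  # → Crime
```

Sorting the per-class scores in descending order, as the paper describes, is equivalent to the `max` above.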
C. Statistical Analysis
Here some statistics of the training data are presented. For lack of working space, the category names are mapped to English letters; the mapping is shown in Table VI.

Table VI: Class Mapped to Alphabet
A Accident | B Art | C Crime
D Economics | E Education | F Entertainment
G Environment | H International | I Opinion
J Science-Tech | K Politics | L Sports

Table VIII lists the most frequent words of each category, and Table VII shows the top five words with the highest Chi-square value for several classes.

Table VII: Top Five Words With Maximum Chi-Square Value for Five Categories
A: উপজেলা (20), নিহত (15), গতকাল (14), পুলিশ (13), জানান (12)
B: কথা (175), মানুষ (163), শেষ (155), দেখা (154), মন (150)
C: পুলিশ (469), উপজেলা (331), গতকাল (293), রাত (282), জানা (282)
D: বাংলাদেশ (50), টাকা (45), ব্যাংক (42), দেশ (39), হাজার (34)
L: ম্যাচ (35), দল (32), বিশ্বকাপ (28), রান (27), বাংলাদেশ (25)

Table VIII: Most Frequent Words
A: দিক (1220), উপজেলা (1069), গতকাল (1064), জানান (877), নিহত (871)
B: কথা (600), মন (582), মধ্যে (509), মানুষ (497), পা (488)
C: পুলিশ (1679), গতকাল (1611), জানান (1571), থানা (1510), উপজেলা (1477)
D: বাংলাদেশ (1552), মধ্যে (1483), দেশ (1390), হবে (1300), টাকা (1264)
E: শিক্ষার্থী (2976), হবে (2560), আজ (2509), ঢাকা (2507), অংশ (2326)
F: ছবি (1547), শুরু (1369), হবে (1352), করেছেন (1202), মধ্যে (1187)
G: গেছে (418), দেখা (399), মধ্যে (389), এলাকা (370), পা (357)
H: গত (1963), খবর (1920), গতকাল (1683), দেশ (1677), মধ্যে (1592)
I: হবে (4104), দেশ (3948), মধ্যে (3648), হল (3554), কথা (3444)
J: কথা (3795), করেন (3648), গতকাল (3433), হবে (3354), সরকার (3253)
K: তথ্য (885), কাজ (825), ব্যবহার (823), তৈরি (798), নতুন (796)
L: ম্যাচ (1317), দল (1300), বিশ্বকাপ (1000), শেষ (976), পঞ্চম (880)

VI. Performance Analysis
A. Accuracy of The Model
From observation it is clear that there is an almost linear relation between the size of the training data set and output efficiency: a higher number of training documents for a given category tends to give a more accurate approximation. The data-set vs. accuracy relation is shown in Figure 2. From the near-linear relationship in this graph it follows that, with a larger dataset, this model can provide even more accurate category approximation; so one way of improving this model is increasing the size of the training dataset. It is also easy to observe that the relation between training set size and category approximation is not strictly linear. From observation, performance depends on the availability of keywords in the training set under each category; this suggests that better stemming of the dataset will increase the performance of this model.

Figure 2. Relation Between Training Data-set and Efficiency.
Figure 3. Performance Comparison Between this Paper and Pipilika Search Engine's model.

B. Result of Other Related Works
[1] used a one-year "Daily Prothom-Alo" news corpus and tested the model with 25 randomly selected documents. Accuracy ranged from 20% to 100% (for 3 categories using tri-grams) over 6 categories. [2] used more than seven thousand documents extracted from "Daily Prothom-Alo" and manually tagged six hundred twenty-eight news items, of which two hundred eight were used as the training set. It achieved a highest precision of 0.8. Figure 3 shows the performance comparison between this model and the Pipilika search engine's model, which was developed in the "SUST NLP research lab". The Pipilika search engine achieved an average accuracy of 82% over four categories. The comparison shows this paper outperforms Pipilika's model in every aspect.
VII. Conclusion
Document categorization is a much-discussed topic these days; however, for Bengali, a significant approach is yet to emerge. Here we developed a model that approximates categories adequately. We achieved an average accuracy of almost 93%, with a maximum of 97.98% accuracy. This paper discussed the problem, what Bengali document categorization is and its applications, and what has already been done by others on this topic. Then the gist of the proposed approach was given, describing the derived classification model in depth and presenting, step by step, the Chi-square values, the probability distribution, and the Naïve Bayes classifier. The next section depicted different properties of the training data and a statistical analysis of the data set, followed by the performance analysis of the model. Overall, this paper argued for the importance of document categorization for the Bengali language and developed an efficient approach to it.

References
[1] M. Mansur, N. UzZaman, and M. Khan, "Analysis of N-Gram Based Text Categorization for Bangla in a Newspaper Corpus," in 9th International Conference on Computer and Information Technology, Dhaka, Bangladesh, 2006.
[2] A. N. Chy, M. H. Seddiqui, and S. Das, "Bangla News Classification using Naive Bayes classifier," in 16th International Conference on Computer and Information Technology (ICCIT), Khulna, Bangladesh, March 2014, pp. 336–371.
[3] Y. Matsuo and M. Ishizuka, "Keyword Extraction from a Single Document using Word Co-occurrence Statistical Information," International Journal on Artificial Intelligence Tools, vol. 13, no. 1, pp. 157–169, 2004.
[4] K.
Sarkar, "An N-Gram Based Method for Bengali Key Phrase Extraction," in International Conference on Information Systems for Indian Languages, Patiala, India, March 2011, pp. 36–41.
[5] J. Kaur and V. Gupta, "Effective Approaches For Extraction Of Keywords," International Journal of Computer Science Issues, vol. 7, no. 6, November 2010.
[6] P. D. Turney, "Learning algorithms for keyphrase extraction," Journal of Information Retrieval, vol. 2, no. 4, pp. 303–336, 2000.
[7] D. B. Bracewell, F. Ren, and S. Kuriowa, "Multilingual single document keyword extraction for information retrieval," in International Conference on Natural Language Processing and Knowledge Engineering, Wuhan, China, 2005, pp. 517–522.
[8] Y. Ohsawa, N. E. Benson, and M. Yachida, "KeyGraph: Automatic indexing by co-occurrence graph based on building construction metaphor," in Advances in Digital Libraries Conference, Washington DC, USA, 1998, pp. 12–18.
[9] S. Ting, W. Ip, and A. H. Tsang, "Is Naïve Bayes a Good Classifier for Document Classification," International Journal of Software Engineering and its Applications, vol. 5, no. 3, July 2011.
[10] D. Lewis, "Naive (Bayes) at forty: The independence assumption in information retrieval," in European Conference on Machine Learning (ECML), Chemnitz, Germany, April 1998, pp. 4–15.
Proc. of the 12th Intl. Conference on Natural Language Processing, pages 247–253, Trivandrum, India, December 2015. ©2015 NLP Association of India (NLPAI). Eds. D S Sharma, R Sangal and E Sherly.

Automated Analysis of Bangla Poetry for Classification and Poet Identification
Geetanjali Rakshit (IITB-Monash Research Academy; IIT Bombay; Monash University), Anupam Ghosh (IIT Bombay), Pushpak Bhattacharyya (IIT Bombay), Gholamreza Haffari (Monash University)
{geet,pb}@cse.iitb.ac.in, anupam.ghsh@gmail.com, gholamreza.haffari@monash.edu

Abstract
Computational analysis of poetry is a challenging and interesting task in NLP. Human expertise on the stylistics and aesthetics of poetry is generally expensive and scarce. In this work, we delve into the data to automatically extract stylistic and linguistic information useful for the analysis and comparison of poems. We make use of semantic (word) features to perform subject-based classification of Bangla poems, and various stylistic as well as semantic features for poet identification. We have used a multiclass SVM classifier to classify Tagore's collection of poetry into four categories: devotional, love, nature and nationalism. We identified the most useful word features for each category of poems. The overall accuracy of the classifier was 56.8%, and the analysis led us to conclude that for poetry classification, word features alone do not suffice, due to allusion often being used as a poetic device. We next used these features along with stylistic features (syntactic, orthographic and phonemic) for poet identification on a dataset of poems from four poets, and achieved a performance of 92.3% using a multiclass SVM classifier. While content-based and stylometric analysis of prose in Bangla has been done in the past, this is a first such attempt for poetry.

1 Introduction
Poetry is a creative expression of language that often makes use of one or more of the crafts of diction, sound, rhythm, imagery and symbolism.
Processing creative writing such as poetry by computers is challenging, as opposed to ordinary everyday text, for computers are efficient at carrying out tasks of a more logical nature, as compared to those involving creativity. The volume of research in automated analysis of poetry has generally been low, and no work has been reported on Bangla poetry. Bangla is the seventh most spoken language in the world and has a rich literary tradition. While work on stylometry for prose (Chakraborty and Bandyopadhyay, 2011) and author identification (Das and Mitra, 2011) has been reported, our work is the first of its kind to analyse Bangla poetry.

The computational analysis of poetry is important, for not only can it lead to a better understanding of what makes rich literature, but it also has applications such as making recommendations to readers based on their literary tastes, as also in the psychological effects of poetry (Stirman and Pennebaker, 2001). Identifying the poet is also important for plagiarism detection.

We explore various kinds of features from Bangla poems to carry out specific analyses. Firstly, we perform a subject-based classification of Tagore's poems into pre-determined categories using semantic features, the categories being pooja (devotional), prem (love), prokriti (nature), and swadesh (nationalism). With our experiments, we establish the fact that words can help only so far, due to frequent use of poetic devices such as allusion and symbolism, which often leave poems open to multiple interpretations. Second, we observe that word features do fairly well for poet identification; the results improve when stylistic features (orthographic, syntactic and phonemic) are also introduced.

The paper is organized as
follows. We discuss the literature in Section 2, and describe our approach in Section 3. The system architecture and its details are described in Section 4. The experimental setup and results are covered in Sections 5 and 6, respectively. We delve into analysis of the results in Section 7, and conclude our work and discuss scope for future work in Section 8.

2 Related Work
Computational understanding of poetry has been previously studied for languages such as English (Kaplan and Blei, 2007; Kao and Jurafsky, 2012), Chinese (Voigt and Jurafsky, 2013) and Malay (Jamal et al., 2012). Kaplan and Blei (2007) analyse American poems in terms of style and visualise them as clusters. Kao and Jurafsky (2012) use various stylistic features to categorise poems into ones written by professional and amateur poets, and establish the importance of Imagism in high-quality poetry. Lou et al. (2015) use an SVM to classify poems in English into 3 main categories and 9 subcategories by combining tf-idf and Latent Dirichlet Allocation. All this work has been done for English. Voigt and Jurafsky (2013) observed through computational analysis the decline of the classical nature of Chinese poetry. Li et al. (2004) use a technique based on term connections for stylistic analysis of Chinese poetry. Jamal et al. (2012) have used a Support Vector Machine model to classify traditional Malay poetry, called pantun, into various themes.

No work on Bangla poetry has so far been reported in the literature. Chakraborty and Bandyopadhyay (2011) have used low-level, chunk-level and context-level features for semi-supervised detection of stylometry in Bangla prose on the writings of Rabindranath Tagore. Das and Mitra (2011) conducted experiments on author identification of Bangla prose on the works of three authors, namely Rabindranath Tagore, Bankim Chandra Chattopadhyay and Sukanta Bhattacharyay.
They have used a Naive Bayes classifier using simple unigram and bigram features.

3 Our Approach
We use both word features and stylistic features that have been reported in the literature. In the following section, we briefly describe them and go on to explain how they have been adapted for Bangla in our system. One feature not previously reported for stylometry is reduplication, which is common in, though not exclusive to, Indian languages. As compared to Indian languages, however, its literary merits in English might be arguable; usage like ha ha doesn't generally act as a poetic device.

3.1 Features
The stylometric features used for classification can be broadly classified into three kinds: orthographic, syntactic and phonemic. These are the same categories as reported in (Kaplan and Blei, 2007). Besides these, lexical features have been used.

Orthographic Features: Orthographic features deal with the measurements of various units of the poem. These features include word count, number of lines, number of stanzas, average line length, average word length, and average number of lines per stanza.

Syntactic Features: The syntactic features deal with the frequencies of the various parts of speech (POS) in the poem.

Phonemic Features: Sound plays an important role in poetry. Phonemic features deal with the sound devices used in a poem. Rhyme and metre are essential poetic devices. We make use of the following phonemic features: rhyme scheme, alliteration and reduplication. Some common kinds of rhyme are tabulated in Table 1.

Lexical Features: Each word type is a feature and its value is its tf-idf.

4 System Overview
The high-level view of the system is shown in
Figure 1. The basic blocks of the system are: Alliteration and Reduplication, Rhyme Scheme Detector, Document Statistics, Shallow Parser and SVM classifier. Each one is described in the subsequent subsections.

Table 1: Types of Rhyme
- Identical Rhyme (identical phoneme sequence): cat-cat, বাঁকে-বাঁকে (baanke-baanke)
- Perfect Rhyme (same phoneme sequence from the ultimate stressed vowel onwards, but differing in the previous consonant): cat-rat, বাঁকে-থাকে (baanke-thaake)
- Semi Rhyme (a perfect rhyme where one word has an additional syllable at the end): stick-picket, জবা-অবাক (joba-obaak)
- Slant Rhyme (either identical ultimate stressed vowels or identical phoneme sequences following the ultimate stressed vowel, but not both): queen-afternoon, কল্লোল-কোলাহল (kallol-kolahol)

4.1 Alliteration and Reduplication
Alliteration is a poetic device which refers to the repetition of consonant sounds at the beginning of consecutive words. An example of this in Bangla would be অনাদরে অবহেলায় (anaadore abohelay). To detect alliteration, we check the beginning sound of each word for every pair of consecutive words in a line.

Reduplication refers to the repetition of any linguistic unit such as a phoneme, morpheme, word, phrase, clause or the utterance as a whole (Chakraborty and Bandyopadhyay, 2010). It is mainly used for emphasis, generality, intensity or to show continuation of an act. It may be partial (খাওয়া দাওয়া, khaawa daawa) or complete (আকাশে আকাশে, akaashe akaashe). We check only for complete reduplication, using a simple algorithm that checks whether two consecutive words in the poem are identical.

4.2 Rhyme Scheme Detection
A rhyme scheme is the pattern of rhymes at the end of each line of a poem or song. The rhyme scheme of a poem can be determined by looking at the end word of each of its lines. Various rhyme schemes are used, e.g. abab, aabb, ababcc and so on. In the absence of a Bangla pronunciation dictionary, we wrote the following algorithm.
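The two checks from Section 4.1 can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the first character of each word is used as a crude stand-in for its initial sound, and the sample line is an invented phrase.

```python
def complete_reduplications(words):
    """Pairs of identical consecutive words, e.g. আকাশে আকাশে."""
    return [(a, b) for a, b in zip(words, words[1:]) if a == b]

def alliterations(words):
    """Consecutive word pairs sharing the same initial character
    (a crude proxy for sharing the same initial sound)."""
    return [(a, b) for a, b in zip(words, words[1:])
            if a and b and a[0] == b[0]]

line = "আকাশে আকাশে মেঘ".split()
print(complete_reduplications(line))  # → [('আকাশে', 'আকাশে')]
```

A real system would map characters to phonemes first, but the pairwise scan over consecutive words is the same.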
A character in Indian language scripts is close to a syllable, and there is a one-to-one correspondence between what is spoken and what is written (Kishore and Black, 2003). In most cases, Bangla words are spoken as they are written. We also accommodate certain non-compliant cases, for instance হ-ending words, as explained in the subsequent algorithms.

Figure 1: System Overview

In our system, we check for perfect rhyme and identical rhyme only. We grouped similar-sounding vowels and consonants together, to allow similar sounds to rhyme in the case of perfect rhyme. This grouping is shown in Table 2. A detailed study of Bangla phonemics can be found in (Barman, 2011). The algorithm to detect the rhyme scheme is shown in Algorithm 1, and the algorithm to check for rhyming words in Algorithm 2.

Table 2: Sound Groupings
অ | আ | ই, ঈ | উ, ঊ | এ, ঐ
ও, ঔ | ক, খ | গ, ঘ | চ, ছ | জ, ঝ, য
ট, ঠ | ড, ঢ | ত, থ, ৎ | দ, ধ | ণ, ন
প, ফ | ব, ভ | ম | র, ড় | ল
শ, ষ, স | হ | য়

The Find-rhyme-scheme algorithm takes a poem and the length of a stanza as input, and returns the rhyme scheme of the poem. We first initialise a string variable rhyme_scheme to a sequence of consecutive English letters, which denotes the rhyme scheme. Next, we pick the end word of the first line and check whether it rhymes with the end word of the next line (by calling Check-rhyme()). We keep checking until the last line, or
until a rhyming line is found. We then update the rhyme_scheme variable and check from the next line onwards, and repeat the process until the last-but-one line is processed.

Algorithm 1: Find-rhyme-scheme
Input: poem, len_of_stanza
Output: rhyme_scheme
Initialise: rhyme_scheme = "abcdefgh.."
1. for i in range(0, len_of_stanza − 1)
2.   Read line i and pick its last word word[i]
3.   for j in range(i + 1, len_of_stanza)
4.     Read line j and pick its last word word[j]
5.     Check-rhyme(word[i], word[j])
6.     If true
7.       rhyme_scheme[j] = rhyme_scheme[i]
8.       break
9. return rhyme_scheme

Algorithm 2: Check-rhyme
Input: word1, word2
Output: flag
(V denotes vowel, C denotes consonant)
Initialise: flag = 0
1. Pick the last characters z1 and z2 of word1 and word2, respectively
2. if similar_sounding(z1, z2), or if either of z1 or z2 is C while the other is 'ো'
3.   pick the last-but-one characters y1 and y2 of word1 and word2, respectively
4.   if both y1 and y2 are V, or both y1 and y2 are C
5.     if similar_sounding(y1, y2)
6.       flag = 1
7.   if both z1 and z2 are C
8.     if y1 and y2 are C
9.       flag = 1
10.    if y1 and y2 are V
11.      if similar_sounding(y1, y2)
12.        flag = 1
13. return flag

The Check-rhyme algorithm takes as input two Bangla words and returns whether or not they rhyme. It basically compares the last two characters of both words: they should either be identical to each other, or be similar-sounding based on the groupings we made in Table 2. Thus words like মাঝে (maajhe) and লাজে (laaje) would rhyme. There is also the special case of handling 'ো' (the vowel o): in most cases, when the last character in a Bangla word is a consonant, it has an implied o sound. This is kind of the reverse of the inherent vowel suppression in Hindi (Kishore and Black, 2003). Hence, words like দেহো (deho) and কেহ (keho) would rhyme.
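A simplified sketch of Algorithms 1 and 2 follows. It is illustrative only: the sound groups here are a hypothetical ASCII grouping standing in for Table 2, the rhyme check keeps only the "last two characters similar-sounding" case, and the sample stanza is invented.

```python
# Hypothetical stand-in for the Table 2 sound groupings.
SOUND_GROUPS = [set("aA"), set("iI"), set("uU"), set("td"), set("kg")]

def similar_sounding(c1, c2):
    return c1 == c2 or any(c1 in g and c2 in g for g in SOUND_GROUPS)

def check_rhyme(w1, w2):
    """Simplified Algorithm 2: last two characters must be similar-sounding."""
    if len(w1) < 2 or len(w2) < 2:
        return False
    return (similar_sounding(w1[-1], w2[-1])
            and similar_sounding(w1[-2], w2[-2]))

def find_rhyme_scheme(stanza):
    """Algorithm 1: compare end words pairwise, copying letters on a match."""
    end_words = [line.split()[-1] for line in stanza]
    scheme = list("abcdefgh"[:len(stanza)])
    for i in range(len(stanza) - 1):
        for j in range(i + 1, len(stanza)):
            if check_rhyme(end_words[i], end_words[j]):
                scheme[j] = scheme[i]
    return "".join(scheme)

stanza = ["the cat sat on a mat", "he wore a hat",
          "the skies are blue", "her words ring true"]
print(find_rhyme_scheme(stanza))  # → aacc
```

Note that, as in Algorithm 1, the scheme string is pre-initialised to "abcdefgh.." and letters are only overwritten on a match, so rhyming pairs come out as "aacc" rather than a renormalized "aabb".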
Thus, if the last character of one word is a consonant, we need to check whether the other word ends in 'ো'.

4.3 Document Statistics
The Document Statistics module takes a poem as input and returns its orthographic features by counting the number of characters, words and stanzas. It also returns the tf-idf scores of the words.

4.4 Shallow Parser
The shallow parser gives the analysis of a sentence in terms of morphology, POS tagging, chunking, etc. We use the POS tags as features in our classification. The shallow parser for Bangla from IIIT Hyderabad has been used.

4.5 SVM
A Support Vector Machine (SVM) classifier (Vapnik, 1998) was used for classification. It is based on the idea of learning, from the training set, a linear hyperplane that separates positive examples from negative examples. The hyperplane must be at the maximum possible distance from the data instances of either class in order to obtain the best generalization. The SVM implementation of SVMLight was used for our experiments (Joachims, 1999).

5 Experimental Setup
For classification of poems into various categories, a bag-of-words model was trained using only lexical features. Five-fold cross-validation was done on 1341 poems for training and testing. A linear kernel was used. We crawled data from the website of The Complete Works of Tagore (http://tagoreweb.in/) to collect poems by Rabindranath Tagore in four categories: pooja, prem, prokriti, and swadesh. The data statistics are shown in Table 3. For the poet identification task, we crawled data from the website of Bangla Kobita (http://www.bangla-kobita.com/)
for four poets: Rabindranath Tagore, Jibanananda Das, Kazi Nazrul Islam, and Sukumar Roy. The data statistics are shown in Table 4.

Table 3: Data (poem classification)
Category | Number of Poems
Pooja (Devotional) | 617
Prem (Love) | 395
Prokriti (Nature) | 283
Swadesh (Nationalism) | 46
Total | 1341

Table 4: Data (poet identification)
Poet | Number of Poems
Rabindranath Tagore | 382
Jibanananda Das | 348
Kazi Nazrul Islam | 198
Sukumar Roy | 130
Total | 1058

We trained a multiclass SVM classifier with a linear kernel for poet identification, using just lexical features (Model-lex) and using lexical as well as stylometric features (Model-lex+style). Five-fold cross-validation was done on the 1058 poems.

6 Results
The results for subject-based poem classification are tabulated in Table 5 in terms of precision, recall and F-measure. The class pooja has the best score, and the lowest score is for swadesh. The confusion matrix is shown in Table 6. The precision for swadesh is high, but the recall is very low, which means instances of swadesh are often predicted to be of some other class. The overall performance is 56.8%.

Table 5: Results for Poem Classification
Class | P | R | F-measure
pooja | 73.6 | 84.3 | 78.6
prem | 58.9 | 55.4 | 57.1
prokriti | 61.9 | 53.3 | 57.3
swadesh | 83.3 | 21.7 | 34.4

Table 6: Confusion Matrix
True \ Predicted | pooja | prem | prokriti | swadesh
pooja | 521 | 71 | 24 | 2
prem | 110 | 219 | 66 | 0
prokriti | 56 | 76 | 151 | 0
swadesh | 27 | 6 | 3 | 10

The results for poet identification are shown in Table 7. We compare the results from the SVM classifier with a Naive Bayes classifier, using lexical as well as stylistic features. The SVM trained on both lexical and stylistic features was found to have the best performance.
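Per-class precision and recall can be recovered from the Table 6 confusion matrix (small rounding differences aside), and the reported overall performance of 56.8% comes out as the macro-averaged F-measure over the four classes. A quick check:

```python
# Table 6 confusion matrix: rows = true class, columns = predicted class.
classes = ["pooja", "prem", "prokriti", "swadesh"]
cm = [
    [521, 71, 24, 2],
    [110, 219, 66, 0],
    [56, 76, 151, 0],
    [27, 6, 3, 10],
]

def prf(cm, k):
    """Precision, recall and F-measure for class index k."""
    tp = cm[k][k]
    precision = tp / sum(row[k] for row in cm)  # column sum
    recall = tp / sum(cm[k])                    # row sum
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = prf(cm, 1)  # prem
print(round(100 * p, 1), round(100 * r, 1), round(100 * f, 1))  # → 58.9 55.4 57.1

macro_f = sum(prf(cm, k)[2] for k in range(4)) / 4
print(round(100 * macro_f, 1))  # → 56.8
```

This makes explicit that the "overall performance" figure averages F over classes rather than counting correct predictions, which is why it sits below the per-class scores of the larger classes.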
When using a multiclass SVM for classification, introducing stylistic features helped improve the overall performance by 2.2%.

Table 7: Results for Poet Identification
Model | P | R | F-measure
Naive-Bayes-lex | 90.3 | 89.2 | 89.5
Naive-Bayes-lex+style | 91.0 | 90.1 | 90.4
SVM-lex | 92.0 | 87.9 | 89.9
SVM-lex+style | 91.4 | 93.2 | 92.3

In Table 8 we tabulate the effect of the various types of stylistic features on the prediction task. The syntactic features alone helped increase the performance by 1.2% over lexical features. Introducing orthographic and phonemic features further increased the performance slightly, by 0.5% and 0.7%, respectively.

7 Analysis and Discussion
From the confusion matrix for poem classification (Table 6), we observe that swadesh is often confused with pooja. A closer inspection of the poems from the category swadesh reveals that the presence of words like জপমালা (japmala), পবিত্র (pobitro), তীর্থ (teertho), etc., which mean rosary, holy, and pilgrimage, respectively, might have caused the misclassification. One might note that in these poems, the words of worship such as pilgrimage, rosary, etc., have been used in the context of worship of one's motherland, and hence these poems actually belong to the category swadesh, or nationalism.
On the other hand, in poems from pooja misclassified as swadesh, words like আঘাত (aaghaat) and ভয় (bhoy), which mean hit and fear, respectively, might have caused the misclassification. Similarly, prem is most often confused with pooja, while prokriti is most often confused with prem.

Features               P     R     F-measure
lex+syn                91.2  92    91.1
lex+syn+orth           91.4  92.5  92
lex+syn+orth+phonemic  91.4  93.2  92.3

Table 8: Effect of various stylistic features

Category              Most useful words
Pooja (Devotional)    hridoy (heart), jibon (life), gobhir (deep), anando (joy), alo (light), alok (light), dhulo (dust)
Prem (Love)           sokhi (friend), hridoy (heart), pran (life), haashi (smile), madhur (sweet), nayan (eyes), aakulo (eager), aakhi (eyes)
Prokriti (Nature)     akash (sky), megh (cloud), hawa (breeze), phool (flower), baanshi (flute), gagan (sky), chhaaya (shadow)
Swadesh (Nationalism) poth (road), bangla (Bangla), jaagrat (awake), bhai (brother), bharat (India)

Table 9: Most distinguishing words from each category

Table 9 shows the most useful word features in identifying each category of poems.

It is observed that lexical features are very useful for poet identification, as poets often have a tendency to use the same set of or
similar words. Stylistic features help only to a small extent; in particular, orthographic and phonemic features vary a lot across poems by the same poet, and hence are not much of a distinguishing feature in identifying the poet.

8 Conclusion and Future Work

We conducted what we presume to be the first reported computational analysis of Bangla poetry. With some preliminary investigation, we observed that words alone aren't always sufficient for classifying poems into categories, because poets often resort to symbolism. It would be interesting to further investigate whether this problem could be helped with Word Sense Disambiguation (WSD). We were able to determine the poet correctly 92.3% of the time using the SVM classifier. The set of lexical and stylometric features could also be used to categorise poems into ones written by professional and amateur poets, which could throw some light on poetry appreciation, like (Kao and Jurafsky, 2012). The phonemic features could be further enhanced by checking for the presence of rhyming words in the same line, as well as checking for a style where each line in a poem begins with the same word. For example: অেনক কীিতর্ , অেনক মূিতর্ , অেনক েদবালয (onek keerti, onek smriti, onek debalaya). The phonemic features may also be extended to detect metre and prosody (Dastidar, 2013), involving syllabification of the verse.

References

Binoy Barman. 2011. A contrastive analysis of English and Bangla phonemics. Dhaka University Journal of Linguistics, 2(4):19–42.

Tanmoy Chakraborty and Sivaji Bandyopadhyay. 2011. Inference of Fine-grained Attributes of Bengali Corpus for Stylometry Detection. Polibits, 44:79–83. Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo.

Tanmoy Chakraborty and Sivaji Bandyopadhyay. 2010. Identification of reduplication in Bengali corpus and their semantic analysis: A rule-based approach. NAACL Workshop on Computational Linguistics for Literature.

Suprabhat Das and Pabitra Mitra. 2011. Author Identification in Bengali Literary Works. Pattern Recognition and Machine Intelligence, 6744:220–226. Springer Berlin Heidelberg.

Rimi Ghosh Dastidar. 2013. Symmetry in Prosodic Pattern of Rhyme and Daily Speech: Pragmatics of Perception. Mining Intelligence and Knowledge Exploration, 823–830. Springer.

Noraini Jamal, Masnizah Mohd and Shahrul Azman Noah. 2012. Poetry classification using support vector machines. Journal of Computer Science, 8(9):1441.

T. Joachims. 1999. Making large-Scale SVM Learning Practical. Advances in Kernel Methods - Support Vector Learning, 169–184. MIT Press, Cambridge, MA.

David M. Kaplan and David M. Blei. 2007. A computational approach to style in American poetry. In Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference on, 553–558. IEEE.

Justine Kao and Dan Jurafsky. 2012. A computational analysis of style, affect, and imagery in contemporary poetry. In Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature. Montreal, Canada, 8–17.

S. Prahallad Kishore and Alan W. Black. 2003. Unit size in unit selection speech synthesis. INTERSPEECH.

Noraini Jamal, Masnizah Mohd and Shahrul Azman Noah. 2004. Poetry stylistic analysis technique based on term connections. Machine Learning and Cybernetics, 2004. Proceedings of 2004 International Conference on, 5:2713–2718. IEEE.

Andres Lou, Diana Inkpen, and Chris Tanasescu. 2015. Multilabel Subject-Based Classification of Poetry. The Twenty-Eighth International Flairs Conference.

Shannon Wiltsey Stirman and James W. Pennebaker. 2001. Word use in the poetry of suicidal and nonsuicidal poets. Psychosomatic Medicine, 63(4):517–522. LWW.

Vladimir Naumovich Vapnik and Vladimir Vapnik. 1998. Statistical learning theory, Vol 1. Wiley, New York.

Rob Voigt and Dan Jurafsky. 2013. Tradition and Modernity in 20th Century Chinese Poetry. 23rd International Conference on Computational Linguistics.

Bengali Shallow Parser. http://ltrc.iiit.ac.in/analyzer/bengali/. LTRC, IIIT Hyderabad.
BARD: Bangla Article Classification Using a New Comprehensive Dataset
Conference Paper, August 2018. DOI: 10.1109/ICBSLP.2018.8554382

Md Tanvir Alam, Md Mofijul Islam
Department of Computer Science and Engineering, University of Dhaka, Dhaka, Bangladesh.
tanvircsedu15@gmail.com, akash.cse.du@gmail.com

Abstract—Automated Bangla article classification has been studied in the literature, where several supervised learning models have been proposed by utilizing large textual data corpora. Although comprehensive textual datasets are available for many other languages, only a few small datasets have been curated for the Bangla language.
As a result, only a few works address the Bangla document classification problem, and due to the lack of training data, these approaches could not learn sophisticated supervised learning models. In this work, we curated a large dataset of Bangla articles from different news portals, which contains 376,226 articles. This large, diverse dataset allows us to train several supervised learning models using a set of sophisticated textual features, such as word embeddings and TF-IDF. Our learning model shows promising performance on our curated dataset, compared to state-of-the-art works in Bangla article classification.

Furthermore, we deployed our proposed Bangla content classifier as a web application (bard2018.pythonanywhere.com); a video demo of this application is available at bit.ly/BARD_VIDEO_DEMO. Additionally, we open-sourced the BARD dataset (bit.ly/BARD_DATASET) and the source code of this work (bit.ly/BARD_SC).

Index Terms—Document Classification, Machine Learning, Bangla Article Dataset

I. INTRODUCTION

The proliferation of unstructured textual data attracts the natural language research community to extract knowledge by mining textual data. The most common and well-studied problem is to categorize textual documents. Document classification has paramount importance in several applications such as searching, filtering, and organizing textual documents.

During the last decades, several statistical and machine learning approaches have been utilized to extract meaningful features for accurately classifying textual documents. For example, bag-of-words, TF-IDF [13], and word embedding (Word2Vec [11]) features have been extracted from text data in order to train supervised learning models, for instance SVM, KNN or Naive Bayes, for classifying the documents.
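As a toy illustration of the bag-of-words representation mentioned above (the vocabulary and document here are hypothetical stand-ins, not drawn from BARD):

```python
from collections import Counter

def bag_of_words(doc, vocabulary):
    """Represent a document as a vector of word counts over a fixed vocabulary."""
    counts = Counter(doc.split())
    return [counts[word] for word in vocabulary]  # missing words count as 0

vocab = ["match", "team", "market", "film"]
vector = bag_of_words("the team won the match", vocab)  # [1, 1, 0, 0]
```

Vectors like this (or their TF-IDF-weighted variants) are what the supervised classifiers named above are trained on.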
Moreover, various deep learning algorithms, such as Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) [5], have also been utilized to extract valuable features from unstructured textual data in order to categorize documents. A considerable amount of textual data is required to train these supervised learning models.

Although several large datasets are available for other languages, few datasets have been constructed for Bangla document classification. For this reason, only a couple of works have addressed the Bangla text categorization task. Moreover, all of these works utilized small datasets of around a thousand articles, which is not adequate to train a supervised learning model. Furthermore, most of the state-of-the-art works on Bangla article classification consider only bag-of-words or TF-IDF features, and thus do not consider semantic feature-
based techniques like Word2Vec.

In this work, we address the above-mentioned problems and develop a large corpus of Bangla documents, BARD: Bangla Article Dataset. Additionally, we perform an extensive statistical analysis to extract the textual relationships across the different document categories. Furthermore, we utilize several textual features, such as TF-IDF and Word2Vec, in order to compare the performance of various supervised learning approaches, for instance Logistic Regression, Neural Networks, Naive Bayes, and ensemble approaches.

The major contributions of this work are summarized below:
• To the best of our knowledge, we curated the largest Bangla article dataset, which consists of 376,226 articles labeled with five document categories.
• We performed an extensive statistical analysis in order to assess the feasibility of using word-based features for training supervised learning models.
• We conducted several experiments to compare the performance of different supervised learning models employing various textual features, specifically TF-IDF and Word2Vec.
• We also developed a web application which helps to classify Bangla text documents and provides a statistical analysis of an article.

The remaining parts of the paper are organized as follows: related works are presented in Section II. In Section III, we discuss the attributes of the BARD dataset as well as the textual statistical analysis of its articles. Subsequently, the proposed Bangla article classification model and its web application are presented in Section IV. Then, in Section V, we discuss the performance of different supervised learning models and textual features in classifying Bangla articles. Finally, we conclude this work with future directions in Section VI.

978-1-5386-8207-4/18/$31.00 © 2018 IEEE

II. RELATED WORKS

Automated classification of text documents in various languages is a well-studied area.
Several supervised learning algorithms have been used to classify text documents; for example, in [6] the author utilizes Support Vector Machines to learn textual features for document classification. Moreover, Naive Bayes [12], [2] and KNN [14] have been employed in state-of-the-art works. In addition, word2vec embeddings with SVM have been utilized [8] for categorizing English documents. Furthermore, deep learning models, such as CNN [7], have also been introduced in the literature for this purpose.

Compared to other languages, very few works have addressed the Bangla article classification task. In [10], the author analyzes the efficiency of n-gram based text categorization for the Bangla language with a one-year news corpus of the Prothom Alo newspaper. Moreover, a Naive Bayes based Bangla text classification model has been proposed in [3]. In addition, in [9], the author compares the performance of four supervised learning methods for Bangla document categorization: Decision Tree, K-Nearest Neighbor, Naive Bayes, and Support Vector Machine (SVM). In this work, TF-IDF was used to construct the feature vector, utilizing 1000 web documents. Similarly, in [4], the author uses TF-IDF to train a supervised learning model on 1960 web documents.

To the best of our knowledge, all the state-of-the-art works that addressed the Bangla text classification task utilized a very small corpus of articles, which is not very effective for training a supervised learning model. Furthermore, word2vec based semantic textual features have not been studied to build a supervised model for the Bangla article categorization task.

III. DATASET AND TEXTUAL STATISTICAL ANALYSIS

In this section, we present the details of the proposed dataset, BARD: Bangla Article Dataset. Subsequently, an extensive textual statistical analysis of the BARD dataset articles is presented.

Category       No. of Documents  No. of Words  Avg. Sentences per Document  Avg. Words per Sentence
State          242860            57019465      18.50                        13.356
Economy        18982             4915141       20.18                        13.378
International  32203             7096111       18.47                        12.493
Entertainment  31293             6706563       21.70                        10.236
Sports         50888             12397415      22.80                        11.069

TABLE I. Text Corpus Details

A. BARD: Bangla Article Dataset

We fetched Bangla articles from various Bangla online news portals using the Google Search API. We only considered news articles belonging to five predefined classes: State, International, Economy, Entertainment, Sports. The articles were labeled with their class at the time of fetching. After applying different filtering approaches to remove duplicate as well as irrelevant articles, we gathered 376,226 Bangla news articles. The distribution of the 376,226 articles over the five categories is shown in Table I, along with the total number of words in each category. Except for the State category, the distribution of articles across the categories is quite balanced. However, in the statistical analysis presented in Section III-B, we found that the State category's articles have an almost unique textual property which can help a supervised learning model accurately separate these articles from the other categories. Additionally, the average number of sentences per document and the average number of words per sentence in each category are quite consistent, as depicted in Table I.

B. Statistical Analysis

We performed textual statistical analysis on the BARD dataset articles; the results are presented in Figs. 1 and 2. The frequency distributions of the top 20 most frequent words for each of the five categories are depicted in Fig. 1. From this analysis, we can easily see that all the categories share almost the same most frequent words. In other words, these most frequent words do not help to categorize the articles. Hence, we removed around 25 of the most frequent words from all the articles and performed the statistical analysis again on the filtered dataset, which is presented in Fig. 2.
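The frequency analysis above can be sketched with `collections.Counter`; the two toy documents below are illustrative stand-ins for BARD articles, and the `drop_most_common` parameter plays the role of removing the ~25 corpus-wide most frequent words:

```python
from collections import Counter

def top_words(docs, n=20, drop_most_common=0):
    """Rank words by corpus frequency, optionally dropping the overall
    most frequent words first (as the paper drops ~25 stop words)."""
    counts = Counter(word for doc in docs for word in doc.split())
    stop = {w for w, _ in counts.most_common(drop_most_common)}
    kept = Counter({w: c for w, c in counts.items() if w not in stop})
    return kept.most_common(n)

docs = ["the match was a great match", "the team won the match"]
ranked = top_words(docs, n=3, drop_most_common=1)  # drops "the" first
```

The per-category version of this (one `Counter` per category) is what produces the distributions in Figs. 1 and 2.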
Now, this filtered frequency distribution shows that each category has some unique distribution of words, which may contribute to categorizing the articles.

Fig. 1: Most frequent words in each category.

IV. PROPOSED MODEL

We developed a Bangla article classifier utilizing the BARD dataset. TF-IDF [13] and Word2Vec [11] features were used to train the supervised learning models for article classification. The Bangla article classification model is depicted in Fig. 3, which has two separate phases: prediction and training. Before utilizing the articles in the BARD dataset, our model first performs some preprocessing on the article contents, such as tokenizing, removing stop words, and constructing the feature vector for the whole corpus using both the TF-IDF [13] and Word2Vec [11] approaches. In the training phase, we first divided the processed dataset into training and test sets. The details of preprocessing and feature extraction are described in the following subsections. After building the feature vectors, we used the training set to train models with various supervised classification algorithms. Finally, we compared the performance of the classifiers on the test set and chose the best classifier with the best feature to perform the prediction task. The different phases of our proposed model are depicted in Fig. 3.

A. Preprocessing

Before tokenizing, we removed all punctuation and digits (1, 2, 3, ...). We tokenized the articles using space as the delimiter. At the end of this step, we obtained a set of words and their frequency in each article.

B. Feature Extraction and Classifier Training

After completing the preprocessing steps, we extracted the feature vectors using the TF-IDF and Word2Vec approaches.

Fig. 2: Most frequent
words after removing stop words.

Fig. 3: Overview of the model.

Subsequently, we trained several supervised classifiers, which are presented in detail in the subsequent sections.

1) Word2Vec Feature: Word2Vec [11] is a technique used to obtain vector representations of words that preserve both semantic and syntactic relationships. Here, for each unique word in a document, we multiplied the corresponding vector by its frequency in the document, and then summed up the vectors. We used word2vec vectors trained by Facebook Research [1]; each vector has a dimension of 300. Let document d contain words W_1, W_2, ..., W_i, ..., W_n with multiplicities C_1, C_2, ..., C_i, ..., C_n, and let the Word2Vec representation of W_i be W2V(W_i). Then the feature vector for document d is

V_d = Σ_{i=1}^{n} W2V(W_i) · C_i

2) TF-IDF Feature: TF-IDF [13] is a widely used technique in Information Retrieval for assigning scores to words in a document. The TF-IDF score for a word w in a document D in a corpus is calculated as:

TF = (frequency of w in D) / (total number of words in D)

IDF = log_e (total number of documents / number of documents containing w)

TF-IDF = TF × IDF

In the TF-IDF approach, we considered only the most frequent words for the feature vector. For each document, we calculated the TF-IDF scores for the words under consideration only, and from these scores we built the feature vector. For comparison, we considered feature vectors of both 300 and 3000 words.

After extracting these features, we trained several supervised learning models. We then chose the best model with the feature, either TF-IDF or Word2Vec, that performed better than all other models. This best model is used for prediction in our web application.

C. Bangla Article Classifier Web Application Implementation

We have built a web application for Bangla text classification utilizing the proposed model, which is depicted in Fig.
4. The application, named BARD - Bangla Article Classifier, can be accessed at http://bard2018.pythonanywhere.com.

1) User Interface: The user interface of the application is very simple and contains only two pages. On the first page, the user can input the content they want to classify. From this page, they are redirected to the second page, which presents the result of the classification. On the result page, the user can see the category of the document and word statistics, as shown in Fig. 4. Among other statistics, this page shows the probabilities for the other categories, a word frequency table, and a bar chart of the frequency distribution of the top 20 most frequent words in the text.

2) Backend: In the experimental evaluation, we found that a neural network using word2vec features is the best prediction model, so we used that trained model for the web application. The process of predicting classes is almost identical to the prediction phase of Fig. 3.

Fig. 4: Web Application of BARD

V. EXPERIMENTAL RESULT

We used 10-fold cross-validation for our experimental evaluation. We trained five supervised classification models: Logistic Regression, Neural Network, Naive Bayes, Random Forest and Adaboost. Moreover, we assessed the performance of each supervised learning model using three textual features: word2vec, TF-IDF (3000-word vector), and TF-IDF (300-word vector). To assess the performance of the prediction models, we used Precision, Recall and F1 score as evaluation metrics, which are defined as
Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-score = 2 × (Recall × Precision) / (Recall + Precision)

Here, TP = True Positives, FP = False Positives and FN = False Negatives. The performance measures of the models for the different textual features are presented in Table II.

Features  Learning Model       Precision  Recall  F1-score
Word2Vec  Logistic Regression  0.95       0.95    0.95
          Neural Network       0.96       0.96    0.96
          Naive Bayes          0.70       0.71    0.74
          Random Forest        0.91       0.91    0.90
          Adaboost             0.92       0.92    0.92
TF-IDF*   Logistic Regression  0.94       0.94    0.94
          Neural Network       0.96       0.96    0.96
          Naive Bayes          0.87       0.87    0.88
          Random Forest        0.93       0.93    0.92
          Adaboost             0.89       0.89    0.88
TF-IDF**  Logistic Regression  0.85       0.85    0.83
          Neural Network       0.92       0.91    0.91
          Naive Bayes          0.75       0.75    0.78
          Random Forest        0.87       0.87    0.85
          Adaboost             0.82       0.82    0.80

* TF-IDF feature with 3000-word vector size.
** TF-IDF feature with 300-word vector size.

TABLE II. Performance comparison of different supervised learning models and textual features (Word2Vec and TF-IDF)

A. Comparison Between Different Learning Algorithms

From the experimental evaluation results depicted in Table II, we can easily see that the neural network with word2vec is superior to the other supervised classification models. This indicates that deep neural network learning models could be utilized to further improve article classification performance. Furthermore, Logistic Regression also performed well, especially with the word2vec features.

B. Comparison Between Word2Vec and TF-IDF Features

Among the three approaches, the word2vec feature with the neural network provides the best performance, with a precision of 0.96. Moreover, TF-IDF with 3000 features gave almost the same performance, with the same precision and other metrics. However, if we compare the performance across the folds, the neural network with word2vec performed better than the neural network with TF-IDF of 3000 word features. The reason is that word2vec can capture the semantic and syntactical features of the words in the text.
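As a minimal sketch of the two feature schemes compared above, the following pure-Python functions implement the paper's TF-IDF formulas and the summed word-vector representation V_d = Σ W2V(W_i) · C_i. The tiny corpus and the 2-dimensional `w2v` lookup table are illustrative stand-ins; the paper uses 300-dimensional fastText vectors [1]:

```python
import math
from collections import Counter

def tf(word, doc):
    # TF = frequency of word in doc / total number of words in doc
    words = doc.split()
    return words.count(word) / len(words)

def idf(word, corpus):
    # IDF = ln(total number of documents / number of documents containing word)
    containing = sum(1 for doc in corpus if word in doc.split())
    return math.log(len(corpus) / containing)

def tf_idf(word, doc, corpus):
    return tf(word, doc) * idf(word, corpus)

def doc_vector(doc, w2v, dim):
    # V_d = sum over unique words of W2V(word) * count(word);
    # out-of-vocabulary words contribute nothing.
    vec = [0.0] * dim
    for word, count in Counter(doc.split()).items():
        for j, x in enumerate(w2v.get(word, [0.0] * dim)):
            vec[j] += count * x
    return vec

corpus = ["dhaka won the match", "the market fell", "the match was drawn"]
score = tf_idf("match", corpus[0], corpus)         # ln(3/2) / 4
w2v = {"dhaka": [1.0, 0.0], "match": [0.0, 1.0]}   # toy 2-d "embeddings"
vec = doc_vector("match dhaka match", w2v, dim=2)  # [1.0, 2.0]
```

In practice the TF-IDF variant restricts the vocabulary to the 300 or 3000 most frequent words, exactly as described in Section IV-B.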
On the other hand, if we compare the approaches for the same feature vector size, we see that TF-IDF with 300 features (the same size as the Word2Vec approach) has a precision of only 0.92.

Features    Learning Model       Precision  Recall  F1-score
Word2Vec    Logistic Regression  0.95       0.95    0.95
            Neural Network       0.96       0.96    0.96
TF-IDF*     Logistic Regression  0.94       0.94    0.94
            Neural Network       0.96       0.96    0.96
TF-IDF [9]  SVM                  0.89       0.89    0.89
TF-IDF [4]  LIBLINEAR            0.93       -       -

* TF-IDF feature with 3000-word vector size.

TABLE III. Performance comparison of different state-of-the-art works

C. Performance Comparison of State-of-the-art Models

In Table III, we compare the performance of our models with other state-of-the-art works. We chose Logistic Regression and the neural network, with both Word2Vec and TF-IDF (3000 features), from our trained models to perform the comparison, as these four models showed the best performance in the experimental evaluation. We compare our models with the TF-IDF based SVM (Support Vector Machine) Bangla text classification model of [9]. That work utilized a dataset of 1000 web documents of five classes; the average performance measures of the model over the five classes are shown in Table III. Moreover, in [4], the author used 1960 Bangla web documents of five classes. Among the models they introduced, LIBLINEAR performed best, with an average precision of 93%.

From Table III, we can clearly observe that all four of our classification models outperform the state-of-the-art Bangla article classification models.

VI. CONCLUSION

In this work, we curated the largest Bangla article dataset and performed extensive experimentation to compare the performance of different supervised learning models using word2vec and TF-IDF features. In the experimentation, we found that the neural network with word2vec features, having a relatively
low-dimensional feature vector, performed better than the other supervised models and textual features. Additionally, we deployed our prediction model as a web application to classify Bangla articles.

We expect that this dataset will help the Bangla natural language research community to further extend the article classification task. Moreover, this dataset can be utilized in other Bangla NLP (natural language processing) tasks. In the future, we plan to extend this dataset so that it can be used to solve other Bangla NLP problems. Furthermore, deep learning models, such as CNN and LSTM, can also be considered to improve the prediction model.

ACKNOWLEDGEMENTS

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

REFERENCES

[1] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov, "Enriching word vectors with subword information," CoRR, vol. abs/1607.04606, 2016.
[2] J. Chen, H. Huang, S. Tian, and Y. Qu, "Feature selection for text classification with naïve bayes," Expert Syst. Appl., vol. 36, no. 3, pp. 5432–5435, Apr. 2009.
[3] A. N. Chy, M. H. Seddiqui, and S. Das, "Bangla news classification using naive bayes classifier," in 16th Int'l Conf. Computer and Information Technology, March 2014, pp. 366–371.
[4] A. Dhar, N. S. Dash, and K. Roy, "Application of tf-idf feature for categorizing documents of online bangla web text corpus," in Intelligent Engineering Informatics. Singapore: Springer Singapore, 2018, pp. 51–59.
[5] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
[6] T. Joachims, "Text categorization with support vector machines: Learning with many relevant features," in Proceedings of the 10th European Conference on Machine Learning, ser. ECML'98. Berlin, Heidelberg: Springer-Verlag, 1998, pp. 137–142.
[7] Y. Kim, "Convolutional neural networks for sentence classification," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, 2014, pp. 1746–1751.
[8] J. Lilleberg, Y. Zhu, and Y. Zhang, "Support vector machines and word2vec for text classification with semantic features," in ICCI*CC, N. Ge, J. Lu, Y. Wang, N. Howard, P. Chen, X. Tao, B. Zhang, and L. A. Zadeh, Eds. IEEE Computer Society, 2015, pp. 136–140.
[9] A. K. Mandal and R. Sen, "Supervised learning methods for bangla web document categorization," CoRR, vol. abs/1410.2045, 2014.
[10] U. N. K. M. Mansur, M., "Analysis of n-gram based text categorization for bangla in a newspaper corpus," in Proceedings of International Conference on Computer and Information Technology, 2006.
[11] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2013, pp. 3111–3119.
[12] I. Rish, "An empirical study of the naive bayes classifier," Tech. Rep., 2001.
[13] G. Salton and C. Buckley, "Term-weighting approaches in automatic text retrieval," Inf. Process. Manage., vol. 24, no. 5, pp. 513–523, Aug. 1988.
[14] V. Tam, A. Santoso, and R. Setiono, "A comparative study of centroid-based, neighborhood-based and statistical approaches for effective document categorization," in ICPR, 2002.
Aggression Identification in English, Hindi and Bangla Text using BERT, RoBERTa and SVM

Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 76–82, Language Resources and Evaluation Conference (LREC 2020), Marseille, 11–16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC

Arup Baruah♦, Kaushik Amar Das♦, Ferdous Ahmed Barbhuiya♦, Kuntal Dey♥
♦IIIT Guwahati, Assam, India; ♥IBM Research, New Delhi, India
{arup.baruah, kaushikamardas}@gmail.com, ferdous@iiitg.ac.in, kuntadey@in.ibm.com

Abstract
This paper presents the results of the classifiers that the team 'abaruah' developed for the shared tasks in aggression identification and misogynistic aggression identification. These two shared tasks were held as part of the second workshop on Trolling, Aggression and Cyberbullying (TRAC). Both subtasks were held for the English, Hindi and Bangla languages. In our study, we used English BERT (En-BERT), RoBERTa, DistilRoBERTa, and SVM based classifiers for English. For Hindi and Bangla, multilingual BERT (M-BERT), XLM-RoBERTa and SVM classifiers were used. Our best performing models are En-BERT for English Subtask A (weighted F1 score of 0.73, rank 5/16), SVM for English Subtask B (weighted F1 score of 0.87, rank 2/15), SVM for Hindi Subtask A (weighted F1 score of 0.79, rank 2/10), XLM-RoBERTa for Hindi Subtask B (weighted F1 score of 0.87, rank 2/10), SVM for Bangla Subtask A (weighted F1 score of 0.81, rank 2/10), and SVM for Bangla Subtask B (weighted F1 score of 0.93, rank 4/8). It is seen that the superior performance of the SVM classifier was achieved mainly because of its better prediction of the majority class. BERT based classifiers were found to predict the minority classes better.

Keywords: Aggression Identification, Offensive Language, Multilingual, BERT, SVM, RoBERTa

1.
Introduction

Partisan antipathy in politics is on the rise. All over the world, societies are getting more and more politically polarized (Thomas Carothers, 2019), partly fuelled by the echo chamber and filter bubble effects of social media. Anger is fast becoming a tool to lure voters. As the world gets polarized, the popularity and convenience of social media platforms are turning them into a modern-day battlefield. This has led to an increase in aggressive content on social media. Some world leaders are also using social media as a platform for displaying their aggressiveness. An example of this is the following tweet addressed to North Korean leader Kim Jong-un by U.S. President Donald Trump: "Will someone from his depleted and food starved regime please inform him that I too have a Nuclear Button, but it is a much bigger & more powerful one than his, and my Button works!"

Social media sites are grappling with removing aggressive content from their sites, both to promote healthy discussions and to comply with legal requirements. However, the scale involved makes manual moderation a difficult task. The need of the hour is automated methods for detecting aggressive content.

The second Workshop on Trolling, Aggression, and Cyberbullying (TRAC-2) (Kumar et al., 2020) is an attempt to promote research in the automated detection of aggression in text. This workshop had two shared tasks, titled "Aggression Identification" (Subtask A) and "Misogynistic Aggression Identification" (Subtask B). Aggression identification is a 3-way classification problem where it
is required to determine if a given comment is overtly, covertly or not aggressive. Misogynistic aggression identification is a binary classification problem where it is required to determine if the comment is gendered or not. Both subtasks were held for the English, Hindi, and Bangla languages. We participated in both subtasks for all three languages. The classifiers we used in this study include En-BERT, M-BERT, RoBERTa, DistilRoBERTa, and XLM-RoBERTa.

2. Related Work

Apart from the automatic detection of aggression in text, considerable research has been performed on the detection of offensive language, abusive language, hate speech, cyberbullying, profanity, and insults. Fortuna and Nunes (2018) provide definitions of the terms mentioned above, give statistics of research performed on the detection of hate speech, and list the features, classification methods, and challenges in automated hate speech detection. Schmidt and Wiegand (2017) also discuss the different classification methods, features and challenges involved in the detection of hate speech.

Davidson et al. (2017) note that not all offensive language is hate speech. Their classifier was able to reduce the number of offensive tweets misclassified as hate speech to 5%. Malmasi and Zampieri (2017) worked on differentiating hate speech from profanity by using an SVM classifier trained on features such as character n-grams (2 to 8), word n-grams (1 to 3), and word skip-grams. Malmasi and Zampieri (2018) extended the above work to include Brown cluster features, ensemble classifiers and meta-classifiers in addition to single classifiers.

Zampieri et al. (2019a) introduce a new dataset called the Offensive Language Identification Dataset (OLID), in which the data has been categorized as offensive or not, targeted or untargeted, and targeting an individual, group or other. SVM, BiLSTM and CNN classifiers were used in this study to predict the type and target of offensive posts. Zampieri et al.
(2019b) summarizes the results from the shared task on identification and categorization of offensive language held as part of Semantic Evaluation 2019. The best performing system in subtask A of OffensEval 2019 used a BERT based model (Liu et al., 2019b) and obtained a macro F1 score of 0.8286. Zhu et al. (2019) also used a BERT based model and obtained the 3rd rank in subtask A of OffensEval 2019 with a macro F1 score of 0.8136.

Language  Type   Total  NAG            CAG           OAG           NGEN           GEN            Max Length  Below 50 words
English   Train  4263   3375 (79.17%)  453 (10.63%)  435 (10.20%)  3954 (92.75%)  309 (7.25%)    806         93.31%
English   Dev    1066   836 (78.42%)   117 (10.98%)  113 (10.60%)  993 (93.15%)   73 (6.85%)     457         93.34%
English   Test   1200   690 (57.50%)   224 (18.67%)  286 (23.83%)  1025 (85.42%)  175 (14.58%)   1390        77.41%
Hindi     Train  3984   2245 (56.35%)  829 (20.81%)  910 (22.84%)  3323 (83.41%)  661 (16.59%)   557         95.41%
Hindi     Dev    997    578 (57.97%)   211 (21.16%)  208 (20.86%)  845 (84.75%)   152 (15.26%)   230         93.98%
Hindi     Test   1200   325 (27.08%)   191 (15.92%)  684 (57.00%)  633 (52.75%)   567 (47.25%)   669         89.92%
Bangla    Train  3826   2078 (54.31%)  898 (23.47%)  850 (22.22%)  3114 (81.39%)  712 (18.61%)   154         98.64%
Bangla    Dev    957    522 (54.55%)   218 (22.78%)  217 (22.68%)  766 (80.04%)   191 (19.96%)   182         98.64%
Bangla    Test   1188   712 (59.93%)   225 (18.94%)  251 (21.13%)  986 (83.00%)   202 (17.00%)   113         99.24%
Table 1: Dataset Statistics

The results of TRAC-1 have been summarized in Kumar et al. (2018). As can be seen, both deep learning (LSTM, BiLSTM, CNN) and traditional machine learning
classifiers (SVM, Logistic Regression, Random Forest, Naive Bayes) were used in this shared task.

Similarly, the HASOC workshop (Mandl et al., 2019; https://hasocfire.github.io/hasoc/2019/), organized at FIRE 2019, was also aimed at stimulating research in the aforementioned areas in the Hindi, English and German languages. They note that the most widely used approach was LSTMs coupled with word embeddings. In this workshop, the participants used a wide variety of models such as BERT, SVM, CNN, LSTM with Attention, etc.

3. Data

The dataset for subtask A has been labelled as either overtly aggressive (OAG), covertly aggressive (CAG) or not aggressive (NAG). The dataset for subtask B has been labelled as gendered (GEN) or non-gendered (NGEN). The dataset is further described in Bhattacharya et al. (2020).

Table 1 shows the statistics of the dataset used for the two shared tasks. As can be seen, the dataset is imbalanced, with NAG (for subtask A) and NGEN (for subtask B) occurring more frequently in all three languages. The NGEN category occurred as high as 93.15% in the English development dataset. This, however, is a true reflection of the proportion of aggressive and non-aggressive comments in real life, as has been mentioned in Gao et al. (2017). The only exception is the Hindi test dataset. In this dataset, OAG is the most frequently occurring class for subtask A, and the dataset is almost balanced for subtask B.

As can be seen, the comments were also of varied length (in terms of the number of words). The longest comment, of 1390 words, occurred in the English test dataset. However, as can be seen from the table, the majority of the comments were of length less than 50 words.

4. Methodology

4.1. Preprocessing

In our work, before performing tokenization, the text was converted to lower case. This conversion to lower case was performed through the BERT tokenizer and the TF-IDF vectorizer.
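The lower-casing and TF-IDF weighting mentioned here (and used later for the SVM features in Section 4.2.5) can be illustrated with a small, stdlib-only sketch. This is our own toy rendering, not the paper's implementation: it restricts itself to character n-grams and assumes the common smoothed IDF formula log((1+N)/(1+df)) + 1.

```python
import math
from collections import Counter

def char_ngrams(text, n_min=1, n_max=3):
    """All character n-grams of length n_min..n_max (illustrative only)."""
    text = text.lower()  # lower-casing, as done before tokenization
    return [text[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(text) - n + 1)]

def tf_idf(docs, n_min=1, n_max=3):
    """Toy TF-IDF over character n-grams with smoothed IDF."""
    grams = [Counter(char_ngrams(d, n_min, n_max)) for d in docs]
    n_docs = len(docs)
    df = Counter(g for counts in grams for g in counts)  # document frequency
    # Smoothed IDF: log((1 + N) / (1 + df)) + 1
    idf = {g: math.log((1 + n_docs) / (1 + df[g])) + 1 for g in df}
    return [{g: tf * idf[g] for g, tf in counts.items()} for counts in grams]

docs = ["Fuck you", "so sad", "so proud"]
weights = tf_idf(docs)
```

A production setup would combine word 1-3 and character 1-6 n-grams and feed the resulting vectors to a linear SVM, as Section 4.2.5 describes.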
As mentioned in section 3, except for the English and Hindi test sets, more than 93% of the comments were of length less than 50 tokens. Hence, for En-BERT and M-BERT, a maximum sequence length of 50 was used. Comments of length beyond 50 tokens were truncated. In the RoBERTa models, the long sentences were split into multiple samples.

4.2. Classifiers

4.2.1. English BERT (En-BERT)

English BERT (Devlin et al., 2019) is a bi-directional model based on the transformer architecture. The transformer architecture is based solely on the attention mechanism (Vaswani et al., 2017). It overcomes the inherently sequential nature of Recurrent Neural Networks (RNN) and is hence more conducive to parallelization.

In our study, we used the uncased large version of En-BERT (https://github.com/google-research/bert). This version has 24 layers and 16 attention heads, and generates a 1024-dimensional vector for each word. We used the 1024-dimensional vector of the Extract layer as the representation of the comment. Our classification layer consisted of a single Dense layer.

For subtask A, the dense layer consisted of 3 units and the softmax activation function was used. The loss function used was sparse categorical crossentropy. For subtask B, the dense layer consisted of 1 unit and the sigmoid activation function was used. The loss function used was binary crossentropy. The Adam optimizer with a learning rate of 2e-5 was used for training the model. The model was trained for 15 epochs.
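As a plain-Python illustration (not the authors' code) of the head just described, the forward pass of a 3-unit dense layer with softmax and the sparse categorical crossentropy loss for subtask A can be sketched as follows; the toy weights and input are invented for the example:

```python
import math

def dense(x, W, b):
    """Single dense layer: one logit per class."""
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(W, b)]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_categorical_crossentropy(probs, true_class):
    """Loss for an integer class label, as used for subtask A."""
    return -math.log(probs[true_class])

# Toy 4-dimensional "comment representation" and a 3-unit head (NAG/CAG/OAG)
x = [0.5, -1.0, 0.25, 2.0]
W = [[0.1, 0.2, -0.1, 0.0],   # weights for class 0
     [0.0, -0.3, 0.2, 0.1],   # weights for class 1
     [-0.2, 0.1, 0.0, 0.3]]   # weights for class 2
b = [0.0, 0.0, 0.0]
probs = softmax(dense(x, W, b))
```

For subtask B, the head would instead be a single unit with a sigmoid activation and binary crossentropy.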
Early stopping with a patience of 5 was used for both subtasks. Sparse categorical accuracy was monitored for early stopping.

4.2.2. Multilingual BERT (M-BERT)

Multilingual BERT is BERT trained for multilingual tasks. It was trained on monolingual Wikipedia articles in 104 different languages. It is intended to enable M-BERT fine-tuned in one language to make predictions for another language. In our study, we used the M-BERT model having 12 layers and 12 heads. This model generates a 768-dimensional vector for each word. We used the 768-dimensional vector of the Extract layer as the representation of the comment. Just as for the English language subtasks, a single Dense layer was used as the classification model. The hyperparameters used for training the model are the same as mentioned for the English language.

Algorithm 1 Naive Checkpoint Ensemble
 1: A ← True labels
 2: P ← Model predictions at each epoch
 3: N ← Num samples, C ← Num classes
 4: reverse ← boolean
 5: function ENSEMBLE(P, A, N, C, reverse)
 6:   models ← {}, val ← 0
 7:   Z[N][C] ← Zero Matrix
 8:   ε ← len(P)                ▷ Num Epochs
 9:   if reverse then
10:     range ← ε to 0
11:   else
12:     range ← 0 to ε
13:   end if
14:   for (e ← range) do
15:     temp ← Z
16:     temp ← temp + P[e]
17:     if metric(A, temp) > val then
18:       Z ← Z + P[e]
19:       models ← models ∪ e
20:       val ← metric(A, temp)
21:     else
22:       continue
23:     end if
24:   end for
25:   return models, val
26: end function

Algorithm 2 Make Prediction
1: m ← model ids chosen for ensemble
2: E[N][C] ← Zero Matrix
3: for i in m do
4:   Load model with weights at epoch i
5:   p ← model.predict(samples)
6:   E ← E + p
7: end for
8: preds ← Index of max element in each row of E

4.2.3. RoBERTa and DistilRoBERTa

RoBERTa (Liu et al., 2019c) improves upon BERT by adding a few modifications to the original model, such as training on a larger dataset, dynamically masking out tokens compared to the original static masking, etc.
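Algorithm 1 can be rendered as a short, self-contained Python sketch. This is our own rendering, not the authors' code, and for brevity plain accuracy stands in for the weighted F1 metric used in the paper:

```python
def accuracy(y_true, scores):
    """Fraction of samples whose argmax score matches the true label."""
    preds = [row.index(max(row)) for row in scores]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)

def checkpoint_ensemble(per_epoch_preds, y_true, reverse=False, metric=accuracy):
    """Greedily accumulate per-epoch prediction matrices while the metric improves.

    per_epoch_preds: one [num_samples][num_classes] score matrix per epoch.
    Returns the chosen epoch ids and the best metric value (Algorithm 1)."""
    n, c = len(y_true), len(per_epoch_preds[0][0])
    z = [[0.0] * c for _ in range(n)]          # running sum of accepted epochs
    chosen, best = [], 0.0
    order = (range(len(per_epoch_preds) - 1, -1, -1) if reverse
             else range(len(per_epoch_preds)))
    for e in order:
        trial = [[z[i][j] + per_epoch_preds[e][i][j] for j in range(c)]
                 for i in range(n)]
        if metric(y_true, trial) > best:
            z, chosen, best = trial, chosen + [e], metric(y_true, trial)
    return chosen, best

# Two samples, two classes; epoch 0 gets both right, epoch 1 gets one wrong.
epochs = [
    [[0.9, 0.1], [0.2, 0.8]],   # epoch 0 dev-set predictions
    [[0.6, 0.4], [0.7, 0.3]],   # epoch 1 dev-set predictions
]
chosen, best = checkpoint_ensemble(epochs, y_true=[0, 1])
```

Algorithm 2 then corresponds to summing the saved predictions of the chosen epochs and taking the row-wise argmax.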
DistilRoBERTa (Sanh et al., 2019) is a compressed version of the same model which trains faster and preserves up to 95% of the performance of the original. For both of these models, we make use of the pre-trained base versions made available by the HuggingFace Transformers library (Wolf et al., 2019). We make use of the RoBERTa model for English Task A and DistilRoBERTa for English Task B. We use an attention layer (Zhou et al., 2016) on top of the embeddings of the underlying pre-trained model. However, instead of the tanh activation function used in the original work, we used penalized tanh, which is demonstrated to work better for NLP tasks (Eger et al., 2019), combined with a cross-entropy loss function. We also do not apply softmax on the output of the classifying layer as done in the original work, and instead use argmax directly on the final layer outputs to make the prediction. We make use of the Ranger optimizer, which is a combination of RAdam (Liu et al., 2019a) wrapped with Lookahead (Zhang et al., 2019), to train the model. The entire model is fine-tuned with a tiny learning rate of 1e-4 for both of the English classification tasks. For task A and task B, Lookahead's (k, α) is set to (5, 0.5) and (6, 0.5) respectively, with a weight decay
of 1e-5. The models were set to run for 20 epochs with an early stopping patience of 4. We made use of a naive checkpoint ensembling method (Chen et al., 2017) in which we save the model weights and dev-set predictions (i.e. the final layer output) at each epoch. The method is given in Algorithm 1. It is called once with reverse set to True and once with False. The ensembled model which maximizes our chosen metric (weighted F1) is chosen. If the ensemble does not improve the metric, we simply choose the best model found during training. Once we have chosen the model, we use Algorithm 2 to make the final prediction on the test set. Algorithm 2 simply describes adding the outputs of the final classifying layer of the chosen models and using argmax along each row to get the prediction. Naive ensembling increases the weighted F1 on the dev set for English task A from 0.8070 to 0.8124. We did not use it for English task B as it degraded the performance.

4.2.4. XLM-RoBERTa

XLM-RoBERTa (Conneau et al., 2019) is a cross-lingual model that aims to tackle the curse-of-multilinguality problem of cross-lingual models. It is inspired by (Liu et al., 2019c), is trained on up to 100 languages, and outperforms M-BERT in multiple cross-lingual benchmarks. Similar to Section 4.2.3, we use the base version (code for this model is available at https://github.com/cozek/trac2020_submission) coupled with an attention head classifier, the same optimizer, epochs, and early stopping. Lookahead's (k, α) is set to (6, 0.5) with a weight decay of 1e-5. The batch size is set to (22, 24) for Bangla tasks (A, B) and 32 for both Hindi tasks. This model is used in the subtasks of the Hindi and Bangla languages. For the Hindi models, we use the naive checkpoint ensembling method described in Section 4.2.3. This increased the weighted F1 from 0.7146 to 0.7160 for Hindi task A and from 0.8908 to 0.8969 for Hindi task B. Naive ensembling did not yield any performance boosts in the Bangla tasks.

4.2.5. SVM

We also used the Support Vector Machine (SVM) model for both subtasks in all 3 languages. The SVM model was trained using TF-IDF features of word and character n-grams. Word n-grams of size 1 to 3 and character n-grams of size 1 to 6 were used. The linear kernel was used for the classifier, and the hyperparameter C was set to 1.0.

5. Results

As has been mentioned in section 4, the classifiers we used include En-BERT, RoBERTa, DistilRoBERTa and SVM for the subtasks in the English language, and M-BERT, XLM-RoBERTa and SVM for the subtasks in the Hindi and Bangla languages.

Tables 2 and 3 show the results we obtained on the development and test sets respectively. Both tables show the precision, recall, macro F1, weighted F1, and accuracy. The weighted F1 score is the metric that has officially been used to rank the submissions. As can be seen from Table 2, the best performing classifiers on the development set were RoBERTa for English subtask A, En-BERT for English subtask B, XLM-RoBERTa for Hindi subtask A, Bangla subtask A, and Bangla subtask B, and M-BERT for Hindi subtask B.

As can be seen from Table 3, the SVM classifier, which was not the best on the development set, actually performed well on the test set
for English subtask B (ranked 2nd), Hindi subtask A (ranked 2nd), Bangla subtask A (ranked 2nd), and Bangla subtask B (ranked 4th). The other best-performing classifiers are En-BERT for English subtask A (ranked 5th) and XLM-RoBERTa for Hindi subtask B (ranked 2nd). The results of M-BERT for Hindi subtask A are not shown, as an error was made for this run (binary classification was performed instead of 3-class classification).

It can also be seen from Table 3 that for subtask B, the best performance of all the classifiers (SVM, BERT-based, and RoBERTa-based) was obtained for the Bangla language. For subtask B, the SVM classifier had weighted F1 scores of 0.87, 0.84 and 0.92, the RoBERTa-based classifiers had scores of 0.86, 0.87 and 0.92, and the BERT-based classifiers had scores of 0.85, 0.84 and 0.92 for the English, Hindi and Bangla languages respectively. Even for subtask A, the classifiers obtained better scores for the Bangla language (except for the RoBERTa-based classifier, which obtained a slightly better score for Hindi than for Bangla).

The confusion matrices of the classifiers on the test set are shown in Tables 4 to 9. As can be seen from Table 4, the strength of En-BERT, our best performing classifier for English subtask A, was that it predicted the minority classes better than the other two classifiers. In fact, it was the worst in predicting the majority NAG class. But because of its correct predictions for the minority classes, it was our best performing classifier for this subtask. RoBERTa too predicted the OAG class better than SVM. However, RoBERTa did not perform well in predicting the CAG class.
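The gap between the macro and weighted F1 scores reported in Tables 2 and 3 can be recomputed directly from a confusion matrix. The stdlib-only sketch below is our own illustration, applied to the SVM confusion matrix for English subtask B (Table 7); it reproduces the published 0.7121 macro and 0.8701 weighted F1:

```python
def f1_per_class(cm):
    """Per-class F1 from a square confusion matrix cm[true][pred]."""
    n = len(cm)
    f1s = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # column sum minus diagonal
        fn = sum(cm[c]) - tp                         # row sum minus diagonal
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return f1s

def macro_and_weighted_f1(cm):
    f1s = f1_per_class(cm)
    support = [sum(row) for row in cm]               # true count per class
    macro = sum(f1s) / len(f1s)
    weighted = sum(f * s for f, s in zip(f1s, support)) / sum(support)
    return macro, weighted

# SVM confusion matrix for English subtask B (Table 7); rows/cols = GEN, NGEN
cm = [[66, 109],
      [29, 996]]
macro, weighted = macro_and_weighted_f1(cm)   # ≈ 0.7120, 0.8701
```

The large NGEN support pulls the weighted score far above the macro score, which is exactly why a classifier that favours the majority class ranks well under weighted F1.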
Detecting covertly aggressive comments is very difficult, and En-BERT performed better than the other two classifiers in predicting this class.

As can be seen from Table 7, SVM, our best performing classifier for English subtask B, predicted the majority class better than the other two classifiers. SVM, however, was the worst in predicting the minority class. En-BERT again was the best in predicting the minority class. En-BERT also had the best recall score for this subtask.

As mentioned in section 3, for Hindi subtask A, OAG was the majority class. XLM-RoBERTa performed better than SVM in predicting the majority class. However, SVM performed better in predicting the CAG and NAG classes and hence was the best performing classifier in this subtask. For Hindi subtask B, the dataset was quite balanced, and on this dataset XLM-RoBERTa performed the best.

For Bangla subtask A, SVM performed the best in predicting the majority NAG class as well as the CAG class. As such, it was the best performing classifier in this subtask. For Bangla subtask B, SVM again performed better in predicting the majority class. In this subtask, M-BERT and XLM-RoBERTa performed better than SVM in predicting the minority class. The best performing classifier for this subtask was still SVM.

6. Error Analysis

On analysis of the predictions made by our classifiers on the development set, we found that our classifiers were not able to handle intentional or unintentional orthographic variations of toxic words and spelling mistakes. For example, both the SVM and En-BERT classifiers wrongly classified the comment "Fuuck your music" as not aggressive.
This comment has been labelled by the annotators as overtly aggressive. However, after changing the toxic word "Fuuck" to "Fuck", both classifiers were able to make the correct prediction for the comment. Similarly, both classifiers were not able to handle the spelling mistake of the word "prostitute" in the comment "So sad she is a professional prostatiut". The comment was wrongly classified as not gendered. After correcting the spelling mistake, both classifiers were able to classify the comment correctly.

Annotators have labelled comments such as "Im homosexual and really proud of it" and "I. Gay", where the user is attributing homosexuality to oneself, as not gendered. However, our SVM classifier wrongly classifies these comments as gendered based on the presence of the words homosexual and gay. So, the SVM classifier has not been able to detect the benign use of these words. The En-BERT classifier, however, correctly classified these comments as not gendered.

Our classifiers were not able to correctly classify comments such as "There are only 2 genders" that require world knowledge. The above comment was labelled by the annotators as gendered. However, because of the absence of any toxic words, it was classified by both the SVM and En-BERT classifiers as not gendered.

There were also certain comments, such as "Hot", that were labelled as gendered by the annotators. These comments are ambiguous and can belong to either of the two categories. Most likely, these comments were labelled so based on some contextual information. In the absence of that contextual information, our classifiers did not classify these comments correctly.

Task       System         Precision (Macro)  Recall (Macro)  F1 (macro)  F1 (weighted)  Accuracy
English A  SVM            0.6415             0.4807          0.5170      0.7729         0.8105
English A  RoBERTa        0.6418             0.5883          0.6106      0.8070         0.8148
English A  En-BERT        0.5866             0.5884          0.5871      0.7878         0.7858
English B  SVM            0.8060             0.6056          0.6490      0.9244         0.9390
English B  DistilRoBERTa  0.7201             0.6866          0.7016      0.9260         0.9289
English B  En-BERT        0.8274             0.6962          0.7423      0.9400         0.9467
Hindi A    SVM            0.6682             0.6249          0.6409      0.7074         0.7192
Hindi A    XLM-RoBERTa    0.6602             0.6376          0.6472      0.7146         0.7207
Hindi A    M-BERT         0.6147             0.6167          0.6151      0.6846         0.6871
Hindi B    SVM            0.8415             0.6906          0.7346      0.8765         0.8917
Hindi B    XLM-RoBERTa    0.8125             0.7565          0.7801      0.8908         0.8959
Hindi B    M-BERT         0.7977             0.7781          0.7874      0.8919         0.8937
Bangla A   SVM            0.7096             0.6557          0.6747      0.7197         0.7304
Bangla A   XLM-RoBERTa    0.7203             0.7121          0.7137      0.7539         0.7513
Bangla A   M-BERT         0.6805             0.6891          0.6844      0.7279         0.7252
Bangla B   SVM            0.8792             0.7396          0.7826      0.8723         0.8851
Bangla B   XLM-RoBERTa    0.8580             0.8319          0.8439      0.9020         0.9039
Bangla B   M-BERT         0.8585             0.7998          0.8242      0.8920         0.8966
Table 2: Dev Set Results

Task       System         Precision (Macro)  Recall (Macro)  F1 (macro)  F1 (weighted)  Accuracy  Rank
English A  SVM            0.7923             0.6077          0.6489      0.7173         0.7450
English A  RoBERTa        0.6722             0.5921          0.6130      0.6986         0.7233
English A  En-BERT        0.6880             0.6415          0.6501      0.7289         0.7350    5th
English B  SVM            0.7980             0.6744          0.7121      0.8701         0.8850    2nd
English B  DistilRoBERTa  0.7277             0.7101          0.7183      0.8623         0.8650
English B  En-BERT        0.6980             0.7226          0.7089      0.8503         0.8458
Hindi A    SVM            0.7252             0.7592          0.7363      0.7944         0.7867    2nd
Hindi A    XLM-RoBERTa    0.7129             0.7269          0.7188      0.7927         0.7892
Hindi B    SVM            0.8597             0.8373          0.8395      0.8408         0.8433
Hindi B    XLM-RoBERTa    0.8704             0.8673          0.8683      0.8689         0.8692    2nd
Hindi B    M-BERT         0.8395             0.8363          0.8372      0.8379         0.8383
Bangla A   SVM            0.8385             0.7171          0.7586      0.8083         0.8199    2nd
Bangla A   XLM-RoBERTa    0.7434             0.7136          0.7264      0.7880         0.7938
Bangla A   M-BERT         0.7265             0.6945          0.7074      0.7740         0.7820
Bangla B   SVM            0.9299             0.8167          0.8600      0.9258         0.9310    4th
Bangla B   XLM-RoBERTa    0.8431             0.8617          0.8519      0.9153         0.9141
Bangla B   M-BERT         0.8619             0.8648          0.8633      0.9227         0.9226
Table 3: Official Results on Test Set

            SVM                RoBERTa            En-BERT
            CAG  NAG  OAG      CAG  NAG  OAG      CAG  NAG  OAG   (predicted)
True CAG     86  135    3       64  132   28      122   83   19
True NAG      3  677   10       26  645   19       48  624   18
True OAG     26  129  131       38   89  159       97   53  136
Table 4: Confusion Matrix on Test Set for English Subtask A

            SVM                XLM-RoBERTa
            CAG  NAG  OAG      CAG  NAG  OAG   (predicted)
True CAG    121   52   18      101   53   37
True NAG     42  273   10       54  257   14
True OAG     64   70  550       46   49  589
Table 5: Confusion Matrix on Test Set for Hindi Subtask A

            SVM                XLM-RoBERTa        M-BERT
            CAG  NAG  OAG      CAG  NAG  OAG      CAG  NAG  OAG   (predicted)
True CAG    116  101    8      115   82   28      100   90   35
True NAG     14  691    7       42  647   23       53  645   14
True OAG     16   68  167       33   37  181       26   41  184
Table 6: Confusion Matrix on Test Set for Bangla Subtask A

             SVM             RoBERTa         En-BERT
             GEN  NGEN       GEN  NGEN       GEN  NGEN   (predicted)
True GEN      66   109        86    89        96    79
True NGEN     29   996        73   952       106   919
Table 7: Confusion Matrix on Test Set for English Subtask B

             SVM             XLM-RoBERTa     M-BERT
             GEN  NGEN       GEN  NGEN       GEN  NGEN   (predicted)
True GEN     413   154       473    94       453   114
True NGEN     34   599        63   570        80   553
Table 8: Confusion Matrix on Test Set for Hindi Subtask B

             SVM             XLM-RoBERTa     M-BERT
             GEN  NGEN       GEN  NGEN       GEN  NGEN   (predicted)
True GEN     130    72       158    44       157    45
True NGEN     10   976        58   928        47   939
Table 9: Confusion Matrix on Test Set for Bangla Subtask B

7. Conclusion

We used BERT, RoBERTa and SVM based classifiers for the detection of aggression in English, Hindi and Bangla text. Our SVM classifier performed remarkably well on the test set: it obtained 2nd rank in the official results for 3 of the 6 tests and 4th in another.
However, on closer analysis, it is seen that the superior performance of the SVM classifier was mainly due to its better prediction of the majority class. BERT based classifiers were found to predict the minority classes better. It was also found that our classifiers did not handle spelling mistakes and intentional orthographic variations correctly. FastText word embeddings are better at handling orthographic variations. As a future study, it can be checked whether FastText embeddings improve performance on this dataset. Another option would be to use automatic methods for correcting grammatical and spelling mistakes. The use of contextual information and world knowledge for the automatic detection of aggression needs further investigation.

8. Bibliographical References

Bhattacharya, S., Singh, S., Kumar, R., Bansal, A., Bhagat, A., Dawer, Y., Lahiri, B., and Ojha, A. K. (2020). Developing a multilingual annotated corpus of misogyny and aggression.

Chen, H., Lundberg, S., and Lee, S.-I. (2017). Checkpoint ensembles: Ensemble methods from a single training process.

Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., and Stoyanov, V. (2019). Unsupervised cross-lingual
representation learning at scale.

Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of ICWSM.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.

Eger, S., Youssef, P., and Gurevych, I. (2019). Is it time to swish? Comparing deep learning activation functions across NLP tasks.

Fortuna, P. and Nunes, S. (2018). A Survey on Automatic Detection of Hate Speech in Text. ACM Computing Surveys (CSUR), 51(4):85.

Gao, L., Kuppersmith, A., and Huang, R. (2017). Recognizing Explicit and Implicit Hate Speech Using a Weakly Supervised Two-path Bootstrapping Approach. In IJCNLP 2017, pages 774–782, Taipei, Taiwan.

Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018). Benchmarking Aggression Identification in Social Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC), Santa Fe, USA.

Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2020). Evaluating Aggression Identification in Social Media. In Ritesh Kumar, et al., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020), Paris, France, May. European Language Resources Association (ELRA).

Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., and Han, J. (2019a). On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265.

Liu, P., Li, W., and Zou, L. (2019b). NULI at SemEval-2019 task 6: Transfer learning for offensive language detection using bidirectional transformers.
In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 87–91, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019c). RoBERTa: A robustly optimized BERT pretraining approach.

Malmasi, S. and Zampieri, M. (2017). Detecting Hate Speech in Social Media. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP), pages 467–472.

Malmasi, S. and Zampieri, M. (2018). Challenges in Discriminating Profanity from Hate Speech. Journal of Experimental & Theoretical Artificial Intelligence, 30:1–16.

Mandl, T., Modha, S., Majumder, P., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019). Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19, pages 14–17, New York, NY, USA. Association for Computing Machinery.

Sanh, V., Debut, L., Chaumond, J., and Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.

Schmidt, A. and Wiegand, M. (2017). A Survey on Hate Speech Detection Using Natural Language Processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1–10, Valencia, Spain. Association for Computational Linguistics.

Thomas Carothers, A. O. (2019). How to Understand the Global Spread of Political Polarization. https://carnegieendowment.org/2019/10/01/how-to-understand-global-spread-of-political-polarization-pub-79893. [Online; accessed 15-April-2020].

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is All you Need. In I. Guyon, et al., editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.

Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. (2019). HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.

Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019a). Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of NAACL.

Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019b). SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval). In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval).

Zhang, M. R., Lucas, J., Hinton, G., and Ba, J. (2019). Lookahead optimizer: k steps forward, 1 step back.

Zhou, P., Shi, W., Tian, J., Qi, Z., Li, B., Hao, H., and Xu, B. (2016). Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 207–212, Berlin, Germany, August. Association for Computational Linguistics.

Zhu, J., Tian, Z., and Kübler, S. (2019). UM-IU@LING at SemEval-2019 task 6: Identifying offensive tweets using BERT and SVMs. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 788–795, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.
<s>Bangla Document Categorisationusing Multilayer Dense NeuralNetwork with TF-IDFManisha ChakrabortyDepartment of Computer Science and EngineeringUnited International UniversityA thesis submitted for the degree ofMSc in Computer Science & EngineeringNovember 2019mailto:email_address@gmail.comAbstractDocument categorisation is a quintessential example of a natural languageprocessing quest which includes sorting documents by their content intoone or more predefined classes. This thesis proposes a model which consistsof multilayer Dense Neural Network with Term Frequency - Inverse Docu-ment Frequency (TF-IDF) as feature selection technique in terms of Banglatext document categorisation. This proposed system is divided into threeconsecutive steps: i) preprocessing raw text data and extracting feature us-ing TF- IDF, ii) designing the model architecture and fitting the model totraining set, and iii) evaluating model performance on test set by measuringaccuracy and weighted average of F1-score. It is observed from experimentsthat the proposed method exhibits higher accuracy (85.208%) and weightedF1 score (0.85) compared to the other well-known classification algorithms(K Nearest Neighbor, Decision Tree, Support Vector Machine, StochasticGradient Descent, Multinomial Näıve Bayes, and Logistic Regression) forBangla text document classification.Published PapersWork relating to the research presented in this thesis has been publishedby the author in the following peer-reviewed conference:1. Manisha Chakraborty, and Mohammad Nurul Huda, “Bangla Doc-ument Categorization using Multilayer Dense Neural Network withTF-IDF”, International Conference on Advances in Science, Engineer-ing Robotics Technology (ICASERT 2019), May 3-5, 2019, Dhaka,Bangladesh, pp. 1-4.AcknowledgementsI would like to convey my thankfulness toward all those who have aided mewith their help for completing my MSc in Computer Science and Engineer-ing Thesis. Firstly, I am thankful to my supervisor Dr. 
Mohammad NurulHuda, Professor and MSCSE Director, Department of Computer Scienceand Engineering at United International University for his constant guid-ance and insight throughout the thesis work. I am also grateful to HeadExaminer Dr. Swakkhar Shatabda, Associate Professor, Department ofComputer Science and Engineering at United International University, forreviewing the thesis book and his helpful suggestion on thesis book. I wouldlike to thank Dr. Dewan Md. Farid, Associate Professor, Department ofComputer Science and Engineering at United International University, forreviewing my thesis book and his valuable remarks. I also thank RubaiyaRahtin Khan, Assistant Professor, Department of Computer Science andEngineering at United International University for her valuable time to re-view my thesis book. Lastly, I would like to convey my gratitude towardthose individuals who have supported me throughout the process.ContentsList of Figures viiList of Tables viii1 Introduction 11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.3 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.4 Objectives of the Thesis . . . . . . . .</s>
1.5 Brief Methodology
1.6 Thesis Contributions
1.7 Organization of the Thesis

2 Background and Literature Review
2.1 Preliminaries
2.1.1 Feature Selection Methods
2.1.1.1 Term Frequency - Inverse Document Frequency (TF-IDF)
2.1.1.2 Word Embedding
2.1.2 Algorithms
2.1.2.1 Support Vector Machine
2.1.2.2 Stochastic Gradient Descent
2.1.2.3 Logistic Regression
2.1.2.4 Naïve Bayes
2.1.2.5 K-Nearest Neighbor
2.1.2.6 Decision Tree
2.1.2.7 Neural Network
2.1.3 Evaluation Metrics
2.2 Literature Review
2.2.1 Language
2.2.2 Data Source
2.2.3 Feature Selection Methods
2.2.4 Algorithms
2.2.5 Performance Evaluation

3 Proposed Method
3.1 Block Diagram
3.1.1 Data Preprocessing
3.2 Feature Extraction
3.3 Model Fitting
3.3.1 Classification Algorithms
3.3.2 Neural Networks
3.4 Model Evaluation
3.5 Summary

4 Experimental Analysis
4.1 Data Collection and Preprocessing
4.2 Experimental Setup
4.2.1 Feature Selection
4.2.2 Algorithms
4.2.3 Result

5 Conclusion and Future Works
5.1 Conclusion
5.2 Limitations
5.3 Future Work

Bibliography

List of Figures
2.1 Logistic Curve
3.1 Block Diagram
3.2 Process Diagram for Classification Algorithms
3.3 Convolutional Neural Network (CNN) architecture
3.4 Dense Neural Network (DenseNN1) architecture
3.5 The Proposed Method: Dense Neural Network (DenseNN2) architecture
4.1 Cross Validation Scores of Neural Network Models
4.2 Confusion Matrix of Proposed Model: DenseNN2
4.3 Confusion Matrix: Support Vector Machine
4.4 Confusion Matrix: Stochastic Gradient Descent
4.5 Confusion Matrix: Logistic Regression
4.6 Confusion Matrix: Multinomial Naïve Bayes
4.7 Confusion Matrix: Convolutional Neural Network
4.8 Confusion Matrix: K Nearest Neighbor
4.9 Confusion Matrix: Dense Neural Network
4.10 Confusion Matrix: Decision Tree
List of Tables
2.1 Confusion matrix
4.1 Performance Evaluation Scores

Chapter 1
Introduction

This chapter provides a descriptive viewpoint of the introductory aspects of the thesis work, including the problem statement of the work and the objectives and motivation behind it. It also includes a brief portrayal of the experiments carried out in this thesis. Furthermore, the thesis contributions are discussed, and finally this chapter concludes with the book organization, which gives an outline for the rest of the book.

1.1 Introduction

Document categorization, a classification problem by nature, is an interesting research aspect of natural language processing (NLP) involving a broad range of real-life applications such as sentiment analysis, spam detection, review rating prediction, e-commerce, online library management, and so on. The task of automatic document categorisation is defined as assorting documents into one or more pre-determined classes by a computer program on the basis of the content of those documents. This task can also be manoeuvred to analyse hierarchically structured datasets [1] and datasets containing multiple labels [2]. The rapid growth of internet activity and exponentially rising social media usage have not only necessitated this task but also made it greatly important. On the subject of document classification tasks, supervised classification algorithms have been a popular choice among NLP practitioners [3], [4]. Furthermore, neural networks (NN) with word vectors have accomplished remarkable results regarding text classification in recent times [5]. However, myriads of studies have been carried out on English in contrast to Bengali. Bangla, also known as Bengali, is one of the most widely spoken languages throughout the world.
Hence, with a gigantic number of internet users who use Bangla as their native language, almost every form of digital text document, such as blog posts, newspaper articles, and social media posts, is noticeably escalating all over the web. Subsequently, the indispensability of automated Bangla document categorisation becomes a critical issue to address. Various methods of Bangla document classification have been proposed in recent studies. Henceforth, in this thesis, a system consisting of a Dense Neural Network model with Term Frequency-Inverse Document Frequency (TF-IDF) as the feature selection technique is designed, aiming to classify Bangla electronic text documents. However, a comparative analysis between the proposed model and other well-established methods can aid the better comprehension of the model's applicability in terms of Bangla document categorization. Therefore, nine experiments are designed with two different feature selection methods (TF-IDF and word embedding) for experimentation and performance evaluation in this thesis.

1.2 Problem Statement

The increased popularity of internet usage among people who use Bangla on a daily basis has led to the expansion of Bangla text documents on the web. Moreover, in online libraries which contain books of various genres, different kinds of books are added to the collection in large quantities. Manual sortation of these documents is a laborious and expensive approach to implement. Alongside, storage
capacity of computers has upgraded radically, and the ease of data storing and retrieving has increased manifold over time. However, this amelioration in the volume of text data can prove beneficial to experts in various fields if categorized according to task-specificity. The earlier approach to tackling textual information was to have domain-specific experts analyze and process the text data. Nonetheless, the constant increment in the ever-so-massive amount of text data has made it almost impossible to rely upon the earlier manual approach of categorization. Furthermore, toxic comments and contents have become an integral part of social media with the increasing use of social networking platforms such as Facebook, Twitter and so on. Continuous inspection of contents prior to posting on social media should be the course of action; however, human scrutinizing of every content is a daunting task. The document classification problem has multiple applications, and bringing automation to this task can address a variety of issues revolving around this problem. Thus, automatic document classification is a task of great importance and a solution for various tasks in modern days.

1.3 Motivation

Categorising documents in an automated approach has become a prominent task in the field of natural language processing that achieved much popularity over the recent years due to the skyrocketing accretion of textual information. Applicability is one of the important facets in opting for the task of automatic Bangla text document categorisation, since distinction among text data of different fields is necessitated in various domains of expertise such as education, healthcare, law, consumer goods, and the list is unending. Furthermore, text data of numerous fields is also accruing due to the increasing internet usage among native Bengali people. Apart from applicability, in terms of choosing algorithms for the task, a handful of methods have been comprised of well-known supervised algorithms.
However, applying neural networks in the case of Bangla document categorisation is less of a common practice. There have been some recent works conducted incorporating neural networks [6], [7], but there is no established comparison among all of the well-known and widely used classification algorithms. Besides, most of these experiments are conducted on large datasets, but in the matter of new real-world projects making a fresh start, there might be a smaller number of training examples [8]. Hence, performance evaluation on small-scale datasets with existing methods should be considered as essential as on large datasets.

1.4 Objectives of the Thesis

This thesis aims to incorporate a method that categorizes electronic copies of Bangla text documents. For this purpose, previously labeled documents are collected from two online Bangla newspapers (Prothom Alo and Bdnews24). These documents are divided into a train set and a test set, and the train set is used in a variety of experiments including the proposed model. After experimentation, comparisons among the results of the different experiments are conducted for model evaluation, and prediction is made on the test set.

1.5 Brief Methodology

Six classification algorithms, namely K Nearest Neighbour, Decision Tree, Support Vector Machine, Stochastic Gradient Descent, Multinomial Naïve Bayes, and Logistic Regression, combined with 10-fold cross validation and grid search, and three different neural network architectures with 10-fold cross validation are experimented with for Bangla text document classification. After comparing all the experiments, the experimental results showed that our proposed method performed better than the other experimented models.

1.6
Thesis Contributions

Contributions of this thesis work are outlined in brief as follows:

• This thesis includes an extensive literature review on the text classification task. The literature review is arranged on the basis of varying languages, different data sources, manifold feature selection techniques, miscellaneous algorithm implementations and various evaluation measurement techniques.

• We have created a Bangla text dataset from two popular online newspapers known as Prothom-Alo and Bdnews24 containing 16 categories, where each category consists of 100 articles.

• We have designed nine experiments with six supervised classification algorithms, namely K Nearest Neighbor, Decision Tree, Support Vector Machine, Stochastic Gradient Descent, Multinomial Naïve Bayes, and Logistic Regression, and three different neural network architectures: one is a Dense Neural Network with TF-IDF as the feature extractor; the other two incorporate an embedding layer as part of the architecture, where one is a Convolutional Neural Network and the other is a Dense Neural Network.

• Hyperparameters are optimised for the traditional classification algorithms through grid searching.

• We have included 10-fold cross validation in all the experiments conducted in this thesis for estimation purposes.

• The performance of all nine experiments is evaluated on the basis of accuracy and weighted average scores of precision, recall and F1-measure.
Experimental results depicted that our proposed method outperformed the other two Neural Network architectures and the other traditional classification algorithms.

1.7 Organization of the Thesis

The rest of this thesis is distributed into the following chapters:
Chapter 2 presents the background study and literature review required for this thesis.
Chapter 3 provides the methodology of the proposed model and the other experiments conducted in this thesis.
Chapter 4 describes the empirical analysis of the experiments.
Chapter 5 depicts the conclusion and future work of this thesis.

Chapter 2
Background and Literature Review

Reviewing existing literature provides substantial knowledge on the topic of interest, offering guidance to follow throughout the thesis work. This chapter discusses the background and literature review of the work conducted and provides an insight into the necessary preliminaries of this thesis.

2.1 Preliminaries

Language, in general, seems difficult to learn owing to its complex characteristics and evolutionary change, yet we human beings perceive and utilise it from early childhood to communicate with our family and other persons. Language has been an essential part of our community as a tool of communication since the dawn of civilisation. However, our communication these days not only includes the oral and written forms of language usage, but also constitutes thousands of electronic texts that have become an integral part of our daily life, like online newspapers, social media, e-books, etc., and for a computer, analysing the complicated structure of natural language and providing useful outcomes is always a challenging task. This task requires the implementation of preliminary knowledge of various mathematical appliances, which are also experimented with in this thesis. In this chapter, these introductory topics will be discussed for ease and coherence.

2.1.1 Feature Selection Methods

Feature selection is an indispensable aspect of classification tasks.
This section briefly describes the feature selection techniques applied in this thesis work.

2.1.1.1 Term Frequency - Inverse Document Frequency (TF-IDF)

Term Frequency - Inverse Document Frequency, shortly known as TF-IDF, is a statistical weight measurement which is widely used in text mining tasks. TF-IDF is often opted for as a tool for feature extraction in
a variety of Natural Language Processing (NLP) tasks or text mining tasks [9], [10]. The TF-IDF value denotes the significance of a word in a document or in a collection of documents called a corpus. Term Frequency (TF) indicates the frequency of a term t occurring in a document d. Inverse Document Frequency (IDF) provides information about the importance of a term t. The TF-IDF weight of a term i can be calculated by the following equation:

w_i = \frac{TF_i \times \log(N / n_i)}{\sqrt{\sum_{i=1}^{n} (TF_i \times \log(N / n_i))^2}}

where N is the total number of documents and n_i is the number of documents containing term i.

2.1.1.2 Word Embedding

Word embedding is a form of vector representation of words where similar words have similar encodings in the vector space. This method learns vector representations from a text corpus for a fixed-size vocabulary determined in advance. For this experiment, we used an embedding layer in the neural network, which learns the vector representations of the words jointly with the model in the training process. Including an embedding layer as part of the network helps the network to learn useful combinations of the word vectors from the input, which might play a role in prediction [11]. These kinds of word vectors are also known as distributional-similarity-based word representations. The embedding layer, also known as a lookup layer, provides a matrix E from word tokens. The matrix E can be described by the following equation:

E \in \mathbb{R}^{|vocab| \times d}

where each row is correlated with a different word from the vocabulary. Afterwards, the lookup operation is conducted by indexing words into the matrix. The embedding layer produces a Continuous Bag of Words (CBOW) feature representation, where matrix E is the embedding matrix [11]. In the case of Continuous Bag of Words (CBOW), for a given target word, the context can be represented by various words. The CBOW approach is comparable to the traditional bag-of-words approach.

2.1.2 Algorithms

Algorithm selection is an integral part of machine learning.
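As an illustration, the TF-IDF weighting above can be sketched in plain Python. This is a toy implementation following the equation (not the thesis code); documents are assumed to be pre-tokenised lists of words:

```python
import math

def tfidf_vectors(docs):
    """L2-normalised TF-IDF weights per the equation above.
    docs: list of token lists. Returns one {term: weight} dict per document."""
    N = len(docs)
    # n_i: number of documents containing term i
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        # raw term frequencies TF_i within this document
        tf = {}
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        # unnormalised weight: TF_i * log(N / n_i)
        w = {t: tf[t] * math.log(N / df[t]) for t in tf}
        norm = math.sqrt(sum(v * v for v in w.values()))
        vectors.append({t: (v / norm if norm else 0.0) for t, v in w.items()})
    return vectors
```

Note that a term appearing in every document gets weight zero, since log(N / n_i) = log(1) = 0; library implementations such as scikit-learn's TfidfVectorizer add smoothing to avoid this.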
In this section, a brief description of all the algorithms selected for this thesis is provided.

2.1.2.1 Support Vector Machine

The support vector machine algorithm is well-suited for text classification tasks [3], [12]. A Support Vector Machine (SVM) is a classifier that distinguishes between data points by a separating hyperplane. In other words, when trained on a labeled training dataset in a supervised approach, it yields an optimal hyperplane that groups new examples from the testing dataset. In a two-dimensional feature space, the hyperplane becomes a line that divides the plane into two regions, each region signifying a class, which is the essence of a binary classification task. In support vector machines, a linear kernel is used for linearly separable data, while polynomial and radial basis function (RBF) kernels are applied to linearly inseparable data. There are two essential parameters for implementing Support Vector Machines: the parameter C handles the trade-off between the margin size of the hyperplane and the correct classification of training points, and the gamma parameter regulates the curvature of the decision boundary.

2.1.2.2 Stochastic Gradient Descent

Stochastic gradient descent, often abbreviated as SGD, is a significant optimisation technique in machine learning. It is iterative by nature; it optimises an objective function equipped with the parameters of a model and updates the parameters for each training sample. From [13], it can be stated that SGD works faster than batch gradient descent, since batch gradient descent does redundant computation in the case of
large datasets by re-calculating gradients before each parameter update, and SGD escapes this redundancy by executing one update at a time. Thus, the SGD optimisation technique can be particularly helpful in big data applications, since it works faster with a reduced computational load. In the scikit-learn library of the Python programming language, a model can be selected by changing the loss parameter to employ the SGD optimisation technique. By default, the value of the loss parameter is 'hinge', and it fits a linear support vector machine with the SGD optimisation technique. Log loss provides a logistic regression with SGD training. Alpha is a regularisation parameter used in the SGD approach.

2.1.2.3 Logistic Regression

Logistic Regression is a classification algorithm which is widely known for its ability to work with categorical data. Logistic regression's prediction is highly dependent on the categorical variable, as it predicts the probability of an outcome, P(Y = 1), as a function of the input data, X. It constructs a logistic curve providing values between 0 and 1. Logistic Regression is broadly exercised in the case of binary response data in data modelling, such as spam detection, positive/negative movie review analysis, tumour malignancy detection, etc. It is part of a group of models called generalised linear models. Logistic regression and linear regression are almost similar, except for the fact that the curve constructed by logistic regression uses the natural log-odds of the target variable. There are three types of logistic regression: binomial, ordinal and multinomial. In binomial or binary logistic regression, there are only two possible types of outcome. Multinomial logistic regression is applied where three or more categories are possible that are not ordered. Ordinal logistic regression works well with ordered dependent variables.
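A minimal scikit-learn sketch of the SGD usage described above follows. The corpus, labels and parameter values here are illustrative assumptions, not the thesis setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Toy two-class corpus standing in for TF-IDF-vectorised documents.
docs = ["sports match goal team", "election vote minister",
        "team match win", "minister policy vote law"]
labels = ["sports", "politics", "sports", "politics"]

# loss="hinge" (the default) trains a linear SVM with SGD updates;
# a log-loss value would train logistic regression instead.
# alpha is the regularisation strength mentioned above.
model = make_pipeline(TfidfVectorizer(),
                      SGDClassifier(loss="hinge", alpha=1e-4, random_state=0))
model.fit(docs, labels)
print(model.predict(["goal team match"]))
```

Swapping the loss parameter is all that is needed to move between the linear-SVM and logistic-regression variants, which is the convenience the paragraph above refers to.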
Figure 2.1 illustrates an example of a simple binary logistic regression.

Figure 2.1: Logistic Curve - Comparison Between Linear Model and Logistic Model

In the scikit-learn library of the Python programming language, if the value of the 'multi_class' parameter is 'multinomial', then the multinomial loss is calculated even when the data is binary. The 'ovr' value of 'multi_class' treats the problem as a binary problem by incorporating the 'One vs Rest' scheme.

2.1.2.4 Naïve Bayes

The Naïve Bayes algorithm is based on Bayes' theorem, which presumes attribute independence [14]. Although the base assumption of conditional independence rarely holds true in real-world applications, its performance in classification tasks is quite astounding. Naïve Bayes has been reviewed comprehensively by several researchers in terms of text classification tasks [15], [16].

Text classification from the Naïve Bayes perspective necessitates the assumption that the text data was generated by a parametric model, and this approach uses training data to compute Bayes-optimal estimates of the model parameters. Afterwards, based on these estimates of the model parameters, the method uses Bayes' rule to calculate the posterior probability of new test documents. Subsequently, it selects the most probable class for classifying new documents. Bayes' rule is provided in the following equation:

P(A|B) = \frac{P(B|A) P(A)}{P(B)}

where P(A|B) is the posterior probability that event A occurs given event B, P(B|A) is the likelihood of event B occurring when event A is given, and P(B) is the probability of event B happening independently.

In [15], the multinomial Naïve Bayes model is compared with the multi-variate Bernoulli Naïve Bayes model to contrast the efficiency of different Naïve Bayes models in support
of text classification. In this study, the multinomial model offered a 27% decline in error over the multi-variate Bernoulli model on average across the experimented datasets. Studies [15] and [16] both indicate that a Naïve Bayes classifier with the multinomial event model usually outperforms the other variants of Naïve Bayes models. Parameters of the Multinomial Naïve Bayes model include alpha, which is the Laplace smoothing parameter.

2.1.2.5 K-Nearest Neighbor

K-Nearest Neighbor, shortly known as KNN, is a well-known statistical pattern recognition algorithm which has been widely used in various text classification tasks. This method handles classification tasks by categorising objects on the basis of the nearest neighbors or training samples in the feature space. Given a test document, this algorithm locates the k nearest neighbors among the training examples and uses the categories of the k neighbors to determine the category candidates. The weight of the categories of the neighbors is regulated by the similarity score of each neighbor document to the test document. If some of the neighbors share the same category, then the weight of each such neighbor is added for that category, and the resulting sum is used as the prospective score of that category for the test document. Next, a sorted and ranked list of scores of the different categories is created for the test document. By including a threshold value on these scores, category assignment is attained. The decision rule for this algorithm is provided in the following equation:

y(\vec{x}, c_j) = \sum_{\vec{d}_i \in kNN} sim(\vec{x}, \vec{d}_i) \, y(\vec{d}_i, c_j) - b_j

where y(\vec{d}_i, c_j) \in \{0, 1\} is the classification of document \vec{d}_i with respect to class c_j, sim(\vec{x}, \vec{d}_i) is the similarity function between training text \vec{d}_i and testing text \vec{x}, and b_j is the threshold value for decision making.

Parameters of KNN include 'n_neighbors', which indicates the number of neighbors, and the 'weights' parameter, which indicates the weight function used for evaluation.
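The neighbour-weighted voting above corresponds closely to scikit-learn's KNN classifier with distance weighting. A small sketch with made-up two-dimensional feature vectors (illustrative only, not the thesis features):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Made-up 2-D feature vectors standing in for document features.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = ["sports", "sports", "politics", "politics"]

# n_neighbors sets k; weights="distance" weighs closer neighbours
# more heavily, echoing the similarity-weighted sum in the rule above.
knn = KNeighborsClassifier(n_neighbors=3, weights="distance")
knn.fit(X, y)
print(knn.predict([[0.8, 0.2]]))  # -> ['sports']
```

Two of the three nearest neighbours of [0.8, 0.2] are "sports" examples and they are also much closer, so the distance-weighted vote picks "sports".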
One particular drawback in applying KNN is the struggle to elect optimal values. The best pick for the k value usually varies with the data. A larger k value can reduce the influence of noise on classification; however, it can formulate less distinct boundaries among classes.

2.1.2.6 Decision Tree

Decision Tree is a supervised learning algorithm commonly used for classification and regression tasks. It forecasts the target value on the basis of simple decision rules derived from the extracted features of the dataset. Being a commonly opted supervised learning algorithm, Decision Tree based methods are able to achieve higher accuracy with stability and offer simplicity of interpretation. These methods also represent non-linear associations suitably, which is unlikely of linear models. In [4], the Decision Tree and PropBayes algorithms are compared to measure the approximate conditional probabilities of category occurrence given feature occurrences. According to this study, PropBayes does not capture a large number of features efficiently and is also unable to achieve good precision with high recall and few features. The consequence of combining high recall with a small feature set is putting high value on a single feature, which often results in unreliable classification. Unlike PropBayes, a Decision Tree can select features stepwise, which facilitates the usage of more features than PropBayes, resulting in more reliable classification, nonetheless requiring more training examples.

2.1.2.7 Neural Network

The concept of the neural network is inspired by the human brain and the way it processes information. Neural networks are known for
their extraordinary aptitude for obtaining useful material from complicated or imprecise data, and are often used to detect patterns which are too intricate for humans or other computing methods to perceive. A neural network usually consists of an input layer, one or more hidden layers and an output layer, where each layer consists of nodes or units. It is able to map training samples from the input layer to the output layer. For an input unit or node, the given inputs are multiplied by the weights of that unit and added as a sum. This summation value is known as the summed activation of the unit. Afterwards, this summed value is transformed by an activation function, which is the output for the individual input node, also referred to as the activation of the node. However, only the input layer and hidden layers contain activation functions, since the output layer is commonly taken to embody the class scores in classification tasks. In our experiments, we used different types of layers to construct three different neural networks. These layers and their usage are described in the following paragraphs.

Fully connected layer The fully-connected layer, also known as the densely connected layer, is the most common form of neural network layer. It is also known as the multilayer perceptron. Networks created with only this layer are fully connected pairwise; however, neurons which reside in a single layer share no connection.

Dropout layer The dropout layer averts overfitting and offers a way to merge various architectures of neural network competently [17]. The term "dropout" indicates the temporary removal of neural network units from both hidden and visible layers, alongside all of their incoming and outgoing connectivities.
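The "temporary removal of units" described above can be sketched as masking a layer's activations with a random binary mask during training. This is a toy NumPy illustration (not the thesis code); scaling the survivors by the keep probability, so-called inverted dropout, is one common convention assumed here:

```python
import numpy as np

def dropout(activations, rate, rng):
    """Randomly zero out units with probability `rate` during training.
    Surviving activations are scaled by 1/(1-rate) (inverted dropout)
    so the expected summed activation stays unchanged."""
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
h = np.ones(10)                       # pretend hidden-layer activations
print(dropout(h, rate=0.5, rng=rng))  # roughly half the units zeroed
```

At test time no units are dropped; the inverted scaling during training is what keeps the two regimes consistent.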
Dropout has proven to be a good regularizer even when using a larger neural network, as it steadily added 2%-4% to the comparative performance, and a dropout rate of 0.5 has been demonstrated to be efficient in the experiments conducted in [5].

Convolution layer While the convolutional neural network (CNN) has been successfully involved in various benchmark studies for image recognition tasks [18], [19], it has also proven to be applicable to several Natural Language Processing (NLP) tasks, such as semantic parsing [20], sentence modeling [21], sentence classification [5], and character-level text classification [22]. In the convolution layer, convolution is used over the input, where each local region of the input is connected to a neuron of the next layer. This spatially extended connectivity is a hyperparameter known as the filter of the convolution. Each layer applies different filters and combines their results. For NLP tasks, one-dimensional convolution layers are generally used. The input is generally a matrix of vector representations of text documents, each row corresponding to a word or a character. Afterwards, filters are used which slide over the full rows of the matrix (words or characters). Thus, the width of the filter is the same as the width of the input matrix. The height of the filter, also known as the region size or kernel size, usually specifies the length of the one-dimensional convolution window. Stride refers to the step size of each shift taken by the filter. The input size is fixed with zero padding for regulating the spatial size of the output.

Spatial dropout layer Spatial dropout is an alternative approach to incorporating a dropout layer with a CNN. In SpatialDropout, the value of dropout is
extended across the feature space. In [18], the authors initially experimented with standard dropout with the convolution layer and observed that applying standard dropout before each convolution raised training time; however, it did not inhibit overtraining. The authors introduced a spatial dropout layer, which drops out values across an entire feature map, and observed that this modified dropout improved the performance of their method.

Pooling layer Pooling layers are usually employed after the convolutional layers. Applying a max operation to each filter's outcome is a common method of executing pooling. In Natural Language Processing (NLP), generally, pooling is applied over the whole output volume, capturing a single value for each filter, which is known as 1-max pooling. [23] implies that 1-max pooling performs better than average pooling and k-max pooling.

Activation functions The activation functions, also known as transfer functions, regulate the output of a neural network. An activation function is connected to each neuron of all the layers of a neural network (except for the output layer), determining the activation of the neuron. Moreover, the activation function normalizes the output of a neuron to a range of 0 to 1 or -1 to 1.

a) Rectified Linear Unit (ReLU) The rectified linear activation function, otherwise known as ReLU, is a piecewise linear function that outputs zero if the value of the input is negative; otherwise the output value is the same as the input value. It has become the default activation function that is widely used in a variety of neural network architectures, since it equips the model with ease of training and often acquires excellent performance.

b) Sigmoid The sigmoid activation function provides a smooth gradient by inhibiting fluctuations in output values. It normalizes each neuron's output value to a range between 0 and 1.
The sigmoid function used for the activation is given below:

y = \frac{1}{1 + e^{-x}}

In the equation above, x is the input and y is the output of the sigmoid function.

c) Softmax The softmax activation function is capable of handling more than two classes or categories. This is why the softmax activation function is used for multi-class classification tasks on the output layer of the neural network, since it provides the probability distribution over all the possible target classes to categorise from.

2.1.3 Evaluation Metrics

The performance evaluation metrics that are used in this study are described in brief as follows:

Confusion Matrix The confusion matrix depicts the performance of a classification model on test data. It is essential for computing other evaluation metrics such as accuracy, precision, recall and F1-score. The structure of a confusion matrix is provided in Table 2.1.

Table 2.1: Confusion matrix

                                 Predicted class
    Actual class     Class=Yes              Class=No
    Class=Yes        True Positive (TP)     False Negative (FN)
    Class=No         False Positive (FP)    True Negative (TN)

• True Positives (TP) – the number of correctly predicted positive values.

• True Negatives (TN) – the number of correctly predicted negative values.

• False Positives (FP) – occur when the actual class is negative but the predicted class is positive.

• False Negatives (FN) – occur when the actual class is positive but the predicted class is negative.

Precision Precision is the ratio of True Positives (TP) to the total predicted positive records (TP + FP). It provides information about what proportion of the data points deemed relevant by the model are actually relevant.

Precision = \frac{TP}{TP + FP}

Recall Recall is the ratio of True Positives (TP) to all observations in
<s>actualpositive class. Recall has the ability to find all relevant instances in a dataset.Recall =TP + FNAccuracy Accuracy is the most commonly used performance measure and it is a ratioof correctly predicted observations to the total observations. Accuracy can provideinformation if a model is being trained appropriately and how will be the performanceof it generally. However, it does not do well when you have a severe class imbalance.Accuracy =TP + TNTP + FP + FN + TN2.2 Literature ReviewF1-measure F1-score is the harmonic mean of Precision and Recall. Therefore, thisscore considers both false positives and false negatives. Usually F1-score is more usefulthan accuracy, especially in case of uneven class distribution.F1Score =2 ∗ (Recall ∗ Precision)Recall + Precision2.2 Literature ReviewUnderstanding natural language in text or any other form might be an easy task forhuman beings; however, scrutinising the structure of a language, describing the un-derlying concept and applying these intricacies to address task specific solutions is atricky endeavour for computers. Nonetheless, diverse methods have been incorporatedand achieved extraordinary outcome regarding miscellaneous language processing tasksover period of time; document categorisation being one of them. Document categorisa-tion refers to a process of classifying unlabelled documents into one or more predefinedclasses depending on their contents. It is a supervised classification task and entailsfeature extraction method to leverage different aspects of textual information. In thissection, literature review is described based on various aspects such as language, datasource, feature selection techniques, algorithms and results.2.2.1 LanguageText document classification or categorization is a common natural language processingtask conducted on numerous languages with varying methods. 
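To make the metric definitions of Section 2.1.3 concrete, here is a minimal, illustrative Python implementation (not part of the thesis code) that derives all four measures from binary labels:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute TP/TN/FP/FN counts and the derived metrics for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}
```

The guards against zero denominators matter in practice: a model that predicts no positives at all would otherwise divide by zero when computing precision.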
However, different languages necessitate contrasting preprocessing techniques due to the distinctions among the complex structures and characteristics of various languages. Considering that different languages consist of unique character sets, particular grammar rules and separate sets of punctuation symbols, it is certain that preprocessing varies from one language to another. Apart from the differences in the core structure of various languages, an inadequate amount of prior work is another factor that makes preprocessing more difficult, owing to the lack of sufficient preprocessing knowledge for the target language.

On the subject of the amount of existing work, ample research has been carried out on the English language compared to other languages. Since most benchmark studies are conducted on English, preprocessing in English is well studied and advanced preprocessing tools are centred on the English language. Generally, this task includes removing extra meaningless characters and removing words that occur commonly in most of the documents, since they have little importance for categorization; such words are known as stopwords.

For English-language classification tasks, in paper [3] the dataset used in the experiment is hand-labelled, and some text data are categorized with more than one label. Removing stopwords and disregarding terms occurring in fewer than three documents is also done as part of preprocessing. Similarly, manual category assignment can be noticed in papers [4], [24]. In paper [25], the authors collected their English text data from a Yahoo newsgroup in HTML form, also manually indexed by human experts. Their preprocessing includes the removal of HTML headers and tags, synonymously known as document parsing, and
removal of stopwords and low-frequency words based on term weighting and principal component analysis (PCA). Moreover, while most studies of English concern word-level categorisation, in recent years character-level English text categorisation has become an imminent interest among natural language processing (NLP) researchers. The experiments conducted in [26] confirm that character-level text categorisation is a definite possibility for English text without necessitating words. Furthermore, a study of short text classification has been carried out in which English text is engineered as sequential data using convolutional neural network based and recurrent neural network based representations [27]. The reason to consider text as sequential data relies upon the fact that short texts are usually parts of sentences in documents or dialogues [27].

Notwithstanding the fact that English texts are the most extensively studied in natural language processing, existing research works are present for various Asian languages. In paper [28], the authors worked on Marathi text document categorization with various supervised learning algorithms such as the Naïve Bayes classifier, a modified K nearest neighbor classifier and Support Vector Machine. In paper [29], the task of classification is conducted on the Gujarati language, where the authors compared the effect of the Naïve Bayes classifier on Gujarati text classification with and without a feature selection method. In paper [30], the authors used a combination of the Naïve Bayes classifier and an ontology-based classification approach to categorize Punjabi text documents. Some text categorization research also exists for the Arabic language [31], [32].

There are also research works which compare more than one language for performance evaluation. For instance, in paper [33], the authors used Indonesian and English Twitter data for analysing character- and word-level text classification.
Bothdatasets are preprocessed beforehand by removing hyperlinks and lowercasing charac-ters in the dataset. However, Chinese and English language datasets are more com-monly contrasted among various experiments [34],[26],[35].In the matter of Bangla document categorisation, various methods have been ex-perimented over recent years. Preprocessing task is different despite all dataset beingin Bangla language due to the variation in data sources, varying task requirements.However, the objective of preprocessing is to prepare text data to be accessible by fea-ture extraction tools. Thus, this task includes removal of stop words, punctuations,English letters, English and Bangla numeric letters since these are frequently used inBangla text [36]. In paper[37], authors proposed a Bangla document categorisation sys-tem using Term Frequency- Inverse Document Frequency (TF-IDF) as feature selectiontechnique and Stochastic Gradient Descent(SGD) as classifier. Pre-labeled dataset iscollected for this study from an online newspaper named BDNews24 where each doc-ument is preprocessed. In paper[38], supervised learning methods are assigned andcontrasted in terms of classifying Bangla text documents. Authors collected datasetfrom online newspapers known as Prothom Alo and Bdnews24 and preprocessed thesedocuments by removing punctuation symbols, unnecessary words, and stemming thewords into root words. Preprocessing task also includes tokenization. Text documentsgenerally comprises of sequences containing sentences, words or letters necessitatingsegmentation in order to enact the usability of feature extraction methods. This seg-mentation can be executed on the basis on sentence, word or character. The process ofsegmenting documents into usable units is known as tokenization [39], [40]. For Bangladocuments, ‘space’</s>
is commonly used as the delimiter for tokenization, as is noticeable in [39], [41], [36], [42], [43], [40]. Moreover, word stemming and the removal of insignificant words or stopwords such as conjunctions and pronouns are also included in various studies [44], [6], [45], [46], [47], [40]. However, in [7], the tokenization process includes fixing the size of each text after segmentation, and padding is added to retain a fixed size of 2200 if the input text contains fewer words.

2.2.2 Data Source

Text categorization has numerous real-life applications ranging from sentiment analysis to spam detection. This is the incentive behind the variety of data sources, as data can be amassed from countless places to comply with the target application. In the case of English document categorization, the Reuters-21578 corpus is extensively used [3], [24], [48], [35], [49], [50], [51], [52].

Some studies compared performance on datasets of two different languages, as described in 2.2.1. However, in recent works, text data from various domains are collected for measuring the performance of proposed methods in different experimental settings. In paper [21], the authors conducted experiments on four datasets: the first two are collected from the Stanford Sentiment Treebank [53] to analyse the sentiment of movie reviews, the third is the TREC dataset for categorising six question types, and the fourth is a large set of Twitter posts for sentiment analysis. In paper [26], the authors created eight large-scale text datasets that range from hundreds of thousands to several millions of samples.
In paper [5], the author tested the proposed convolutional neural network against six datasets: the first, called MR, is a movie review dataset for positive/negative review analysis; the next two, named SST-1 and SST-2, are collected from the Stanford Sentiment Treebank [53] for sentiment analysis (in the case of SST-2, neutral reviews were eliminated); the third is a subjectivity dataset, previously used in [54], for determining whether a sentence is subjective or objective; the fourth is the TREC question dataset [55]; the fifth dataset, CR, involves customer reviews of various products for predicting positive/negative reviews; and finally the sixth is the MPQA dataset [56]. In [27], the authors incorporated three different datasets to validate their Recurrent Neural Network and Convolutional Neural Network based model for classifying short texts:

• DSTC 4: Dialog State Tracking Challenge [57].
• MRDA: ICSI Meeting Recorder Dialog Act Corpus [58].
• SwDA: Switchboard Dialog Act Corpus [59].

For Bangla text categorization, online newspapers are the most utilised sources in various studies, such as Prothom Alo, Bdnews24, Jugantor, Manabzamin, Kaler Kontho, AnandabazarPatrika, Bartaman, Ebela Tabloid, etc. [60], [38], [42], [39], [41], [36], [43], [37], [7], [40], [61], [62].

However, in paper [63], the authors constructed a readability classifier to measure the ease of grasping a text, where the corpus is gathered from school textbooks used in the education system of Bangladesh. Furthermore, Bangla Twitter text data have been collected for sentiment analysis [64]. In paper [65], the dataset used for sense classification of Bangla sentences was developed under the TDIL (Technology Development for the Indian Languages) project of the Govt. of India, which contains 84 subject domains.

2.2.3 Feature Selection Methods

Feature selection is a task of crucial importance in text categorization.
Various feature extraction tools have been incorporated for text categorization. In earlier studies, the bag-of-words feature extraction method was used [25]. However, Term Frequency-Inverse Document Frequency (TF-IDF) is by
far the most used and proven method for word-level feature extraction [26]. However, over the years, some other feature techniques have been shown to outperform TF-IDF. In [35], the authors tested Latent Semantic Indexing (LSI), TF-IDF and multi-word text representation combined with Support Vector Machine, and their experimental results showed that LSI worked better on both the Chinese and English datasets used in their study. In [66], the term weighting scheme tf.rf (term frequency-relevance frequency) outperformed TF-IDF with Support Vector Machine (SVM). In [39], the authors adopted a new feature selection technique, Term Frequency-Inverse Document Frequency-Inverse Class Frequency (TF-IDF-ICF), which attained better results than both Term Frequency (TF) and TF-IDF in classifying Bangla text documents. Nonetheless, these methods were tested with only a few algorithms, while there are other well-established algorithms which, combined with TF-IDF, have provided strong results regardless of language distinction [24], [67], [38], [42], [36], [40], [37].

Word embedding, or word vectors, is another feature extraction method that has gained much popularity with the increased interest in neural networks. In paper [5], the authors used publicly available word2vec vectors which were prepared by training on 100 billion words from Google News. Moreover, Word2Vec with the skip-gram algorithm has been used to create a Bengali word embedding model for Bangla document categorization using a deep convolutional neural network [7]. In paper [68], the authors used the word2vec model to create word embeddings and used t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce the vectors to two dimensions in order to attain less computation time while creating word clusters with the k-means clustering algorithm, and tested the word clusters with Support Vector Machine (SVM) to categorize text documents.
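As a small illustration of the TF-IDF weighting discussed throughout this section, the following sketch uses raw term counts for TF and the common log-scaled IDF variant, IDF(t) = log(N / df(t)); the cited papers may use slightly different formulations:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.
    TF is the raw term count in a document; IDF(t) = log(N / df(t))."""
    n = len(docs)
    df = Counter()                  # number of documents containing each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights
```

Note that a term occurring in every document receives weight zero under this scheme, which is exactly the intuition behind down-weighting stopword-like terms.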
These word clusters are used as features for classifying Bangla texts.

However, there is no hard and fast rule for feature selection and extraction, as numerous studies have exhibited diverse feature selection systems. In [27], the authors used Recurrent Neural Network (RNN) or Convolutional Neural Network (CNN) based vector representations of text as features in order to classify short texts. In [21], the authors used a feature graph induced by a Dynamic Convolutional Neural Network (DCNN), consisting of a convolution layer and a dynamic k-max pooling layer, for the semantic modelling of sentences. In paper [26], the authors used characters as features for character-level text classification. In this study, they included 70 characters comprising the 26 English letters, 10 digits, 33 other characters and the newline character. The authors did not differentiate between upper-case and lower-case letters, as semantics remain unaffected by alternate letter cases. In [22], the authors used a wide convolution layer, a k-max pooling layer and a non-linear function to obtain feature maps of varying order. Furthermore, term association and term aggregation have been investigated as feature extraction approaches for Bangla text classification in [41], [43]. In paper [6], the authors used TF-IDF vector representations with neural-network autoencoders to map high-dimensional features into a lower-dimensional space. Moreover, in paper [44], the authors initially used unigram TF-IDF scores to investigate the consequences of exploring varieties of linear and non-linear Support Vector Machine (SVM) kernels for Bangla news classification. However, they observed that linear SVM handles linearly inseparable cases by transforming data into some higher dimension. As their dataset already contained a large number of features, which translates to a large number of dimensions, they did not prefer combining TF-IDF
with linear SVM. Thus, they compared feature spaces using Document Frequency (DF) thresholding and Term Frequency (TF) thresholding, and chose Term Frequency (TF) as their primary feature selection technique.

2.2.4 Algorithms

Selecting an algorithm and harnessing it toward a specific task was common practice before the progression of machine learning. Machine learning algorithms have reduced task-specific engineering manifold, making them applicable to a variety of tasks, which explains their increasing popularity in various domains of expertise. Algorithm selection is nonetheless an important task in machine learning, since not every algorithm offers decent results for every sort of application. However, for the task of text document categorization, various supervised learning algorithms have provided outstanding results.

In the case of English text classification, the Decision Tree algorithm is compared with the PropBayes algorithm in [4] to evaluate their applicability to text categorization on two English datasets, and it was discovered that the Decision Tree algorithm performed better than PropBayes. However, this study outlines that PropBayes does not use large numbers of features efficiently, while Decision Tree's stepwise feature selection enables it to use more features but demands more training samples. In [25], the authors implemented classification algorithms such as Naïve Bayes, nearest neighbor, a decision tree classifier and the subspace method for the text classification task and observed that the Naïve Bayes and subspace classifier algorithms performed better than nearest neighbor and Decision Tree. Another remark that can be noted from this study is that combining multiple classifiers for a classification task does not always provide better classification accuracy. In [67], the authors implemented a framework combining TF-IDF and KNN for the text classification task.
For the case of Bangla document categorization, in paper [37] the SGD classifier provided better performance than the compared algorithms. In most studies, the performance of supervised algorithms such as Support Vector Machine (SVM), K Nearest Neighbor (KNN), Decision Tree (DT), Stochastic Gradient Descent (SGD) and Naïve Bayes is compared, with different algorithms performing better in different studies due to contrasts in hyperparameter tuning, feature selection and data preprocessing [40], [38], [25], [41]. However, some lesser-known algorithms have been experimented with for Bangla document categorization in recent years, such as the PART classifier [43], LIBLINEAR [36], and Cosine Similarity and Euclidean Distance used as similarity measures on a vector space with TF-IDF [42]. In paper [44], different kernel functions of Support Vector Machines, such as linear, polynomial and radial basis kernels, are experimented with and evaluated for Bangla text classification. In some studies, newly proposed feature selection techniques are evaluated with classification algorithms that are known to provide good results. In paper [39], the Multinomial Naïve Bayes algorithm is used to evaluate the proposed feature selection technique Term Frequency-Inverse Document Frequency-Inverse Class Frequency (TF-IDF-ICF) against the more commonly used Term Frequency-Inverse Document Frequency (TF-IDF) and Term Frequency (TF). In paper [68], the authors used Support Vector Machine to test the efficiency of word clusters as features for Bangla text categorisation. Furthermore, in recent years, neural networks of different architectures and kinds have achieved excellent outcomes in text classification tasks. In [5], four Convolutional Neural Network (CNN) architectures are tested on six English datasets to validate the efficiency of CNN for the sentence classification task. The experimental results provided evidence that CNN is applicable
in text classification tasks. Various studies have combined word embedding models such as Word2Vec with neural networks as classifiers, for example [5], [7], [68]. However, there are other studies which do not incorporate Word2Vec or word embedding algorithms even though their experiments involve CNNs. In paper [21], the authors constructed a Convolutional Neural Network (CNN) with dynamic k-max pooling for modelling sentences, which performed well across the various datasets described in 2.2.2. In paper [27], sequential CNN- or RNN-based vector representations of text are forwarded to a two-layer feedforward neural network as the classifier for the task of short text classification. In paper [22], the authors proposed a Long Short-Term Memory (LSTM) based architecture with a k-max pooling layer and a one-dimensional spatial dropout layer which simulates a Convolutional Neural Network. In paper [26], the authors designed two convolutional neural networks with convolutional layers and fully connected layers for the character-level text classification task. In paper [6], the authors used low-dimensional autoencoded text representations with a neural network architecture as the classifier to categorise Bangla texts. The architecture of the neural network includes 50, 50 and 100 hidden units, the ReLU activation function in the first two layers and softmax activation in the final output layer.

2.2.5 Performance Evaluation

A model's applicability to its target application is evaluated by its performance. Thus, performance evaluation is an imperative task for machine learning applications. There are various evaluation metrics to assess a model's capability, and different studies choose diverse performance measures to estimate their adopted methods. A brief description of some of the commonly used evaluation criteria is provided in 2.1. In terms of English text categorization, the authors of [4] used the micro average of recall and precision as the evaluation measure.
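For reference, micro-averaging pools the per-class confusion counts before dividing, in contrast to macro-averaging, which averages the per-class scores. A minimal illustrative sketch (not from the cited paper):

```python
def micro_precision_recall(confusions):
    """confusions: list of (tp, fp, fn) tuples, one per class.
    Micro-averaging sums the raw counts over all classes first,
    then computes a single precision and recall from the totals."""
    tp = sum(c[0] for c in confusions)
    fp = sum(c[1] for c in confusions)
    fn = sum(c[2] for c in confusions)
    return tp / (tp + fp), tp / (tp + fn)
```

Because the counts are pooled, micro-averaged scores weight frequent classes more heavily, which is why they behave differently from macro averages on imbalanced data.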
In this study, through parameter controlling, they increased the algorithm's document-assigning capability, which resulted in increased recall and usually decreased precision. They also analysed the precision-recall curve and employed linear interpolation to observe the breakeven point on the curve. On the basis of the recall/precision curve, the authors experimented with PropBayes and Decision Tree using varying feature sets selected by information gain and observed that the Decision Tree method performed better even at high recall values. In [67], the authors tested a framework combining Term Frequency-Inverse Document Frequency (TF-IDF) and K-Nearest Neighbor (KNN) on 500 online documents. The testing was conducted in an online environment, and the authors observed that classification performance depended on the variety of categories, since documents of the same category provided better results. They also discerned that the quality of classification decreased as the number of documents increased. In [5], the authors included a single-channel convolutional neural network (CNN) architecture similar to [21] and compared performance. Even though both studies incorporated similar CNN structures, [5] reported better performance than [21]. In paper [22], the authors tested six different datasets with a Long Short-Term Memory (LSTM) based architecture, comparing against several strong baseline methods, and conjectured that a single machine learning model does not work well with all kinds of datasets. In paper [26], the authors observed that character-level convolutional neural networks, otherwise known as ConvNets, are applicable to text classification without needing any words. They also observed that character-level classification works better with larger datasets on the scale of millions, whereas
traditional approaches like n-gram TF-IDF perform well with dataset sizes of up to several hundreds of thousands. For Bangla document categorisation, in [69] the authors included training and testing time along with precision, recall and F1-score as evaluation metrics, and on these metrics Stochastic Gradient Descent outperformed the other compared algorithms with a precision of 0.9386, recall of 0.9388 and F1-score of 0.9385. In paper [38], the authors compared three types of sampling methods, namely training set validation, percentage split validation and cross-fold validation, with multiclass models of several classification algorithms, applied separately to the full dataset and to selected features. This study used accuracy as the evaluation metric, and the experimental results indicated that the Logistic Regression algorithm provided better accuracy than the other algorithms under all sampling techniques. In [61], the authors compared TF-IDF and Chi-square as feature selection techniques with Stochastic Gradient Descent, Naïve Bayes and Support Vector Machine algorithms and observed that TF-IDF based models performed better than Chi-square based models.

In [42], the authors used accuracy, precision, recall and F-measure as evaluation metrics. Of their proposed methods for text classification, cosine similarity achieved 95.80% accuracy, 0.958 precision, 0.958 recall and 0.958 F-measure, whereas Euclidean distance secured 95.20% accuracy; both are higher than the other compared methods. They also applied the nonparametric Friedman rank sum test to rank all algorithms, and in this ranking system cosine similarity and Euclidean distance achieved 1st and 2nd rank respectively. In [39], the authors introduced Term Frequency-Inverse Document Frequency-Inverse Class Frequency (TF-IDF-ICF) as a feature selection technique and evaluated its performance using accuracy, precision, recall and F1 measure.
Experimental results depicted thatTF-IDF-ICF based model scored 98.87% in accuracy, 0.989 in precision, 0.989 in recalland 0.989 in F1 measure which is better than other experiments conducted in thisstudy. In paper [7], deep convolutional neural network is employed for text classifica-tion task. This approach obtained 94.96% accuracy and showed better performancethan other experiments conducted in this study. In [6], authors experimented a deepfeed forward network in terms of text classification and their proposed method achieved94.05% accuracy.Chapter 3Proposed MethodMethodology of a thesis offers intricate specificity of the work conducted by providinginformation about data sources, analysis techniques, performance scrutiny, etc. Thischapter provides details about our proposed method and other experiments conductedin this study.3.1 Block DiagramThis study entails three sequential steps: A) preprocessing raw text data and distribut-ing the total dataset into train set and test set, B) applying classifier algorithms andneural networks to test set and finally, C) prediction and model evaluation. The blockdiagram of the methodology of this thesis work is provided in Figure 3.1.Figure 3.1: Block Diagram - Steps of Methodology.In section 3.2, data collection and preprocessing is described. Section 3.3 providesdetails about feature extraction techniques. Section 3.4 delivers model fitting analogiesof conducted experiments. Section 3.5 offers information about model evaluation.3.2 Feature extraction3.1.1 Data Preprocessing1600 articles of 16 distinct classes are collected from two online Bangla newspaperknown as ProthomAlo (http://www.prothom-alo.com) and BDnews24 (http:// bd-news24.com). Documents were previously labeled in the mentioned newspapers. Thesedocuments are obtained by parsing Bangla texts from paper articles and further ar-ranged into 16 classes where 100 articles belong to each</s>
class. The 16 categories are: Commerce, Science, Art and Literature, Economy, Education, Entertainment, Immigrant, Kids, Lifestyle, Movie, National, Politics, Sports, Stock, Technology and World. The whole dataset is divided into two sets: a training set containing 1120 articles and a testing set composed of 480 articles. Each class has 70 training documents and 30 test documents. Stratification of the dataset guarantees that the training and testing sets contain documents from all possible categories, which is uncertain with random sampling. Thus, the dataset is stratified. Furthermore, punctuation marks, English letters, mail addresses and HTML tags are removed from each raw document during tokenization.

3.2 Feature extraction

For feature extraction, Term Frequency-Inverse Document Frequency (TF-IDF) is used for the classification algorithms. The top 1500 features with the highest term frequency are selected, where each feature has a document frequency less than 0.7 and appears in more than four documents. In the case of the neural networks, the 5000 words with the highest frequencies in the training set are selected for feature extraction, where each word is mapped to an integer number (known as a token). These tokens represent sequential data; for the Convolutional Neural Network (CNN), sequential data is essential since TF-IDF does not capture the neighbourhood relationships among words. However, for the multilayer dense neural networks, both TF-IDF and word vectors are incorporated via two different model architectures. Moreover, the CNN architecture includes an embedding layer which creates word vectors to be used as features. These word vectors are also known as distributional-similarity-based word representations; they provide contextual similarity between words, which results in a more effective representation. The maximum number of words in a document is 2037 among all documents of the training set.
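A hedged sketch of how the Bangla-only tokenization, the stratified 70/30 split and the TF-IDF configuration described above could be realised with scikit-learn; the variable names and the regular expression are illustrative assumptions, not the thesis's actual code:

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# Keep only Bangla words (Unicode block U+0980-U+09FF), dropping
# punctuation, English letters, digits and other residue.
bangla_token = re.compile(r"[\u0980-\u09FF]+")

def tokenize(text):
    return bangla_token.findall(text)

# documents, labels = ...  (1600 articles, 16 classes)
# A 70/30 stratified split keeps all 16 classes in both sets:
# X_train, X_test, y_train, y_test = train_test_split(
#     documents, labels, test_size=0.30, stratify=labels, random_state=0)

# TF-IDF: top 1500 features, ignoring terms that occur in more than
# 70% of documents or in fewer than five documents, per Section 3.2.
vectorizer = TfidfVectorizer(tokenizer=tokenize, max_features=1500,
                             max_df=0.7, min_df=5)
```

Passing `stratify=labels` is what guarantees the 70/30 class balance per category that plain random sampling cannot.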
Each document is required to have equal length to be utilized by the embedding layer. Hence, each document's size is fixed to 2000 words by padding and truncating as needed.

3.3 Model Fitting

Model fitting denotes the generalization ability of a machine learning algorithm. It requires a suitable algorithm for the target task. After fitting a model on the training data, the performance of the model is evaluated. In 3.3.1, the model fitting process of this work is described.

3.3.1 Classification Algorithms

Six well-known classification algorithms: K Nearest Neighbor, Decision Tree, Support Vector Machine, Stochastic Gradient Descent, Multinomial Naïve Bayes and Logistic Regression are used in this study, since these algorithms have been used in many text classification works, as described in 2.2.1. Different models are created with these algorithms using different parameter values, and these models are fitted on the training set. Subsequently, grid search is applied for each model on the training set with different sets of parameters to obtain optimal hyperparameter values. Hyperparameter optimization is a pivotal task, since a model's performance is highly influenced by it. Optimal parameters are determined based on the best score from 10-fold cross-validation. During this process, the training set is fitted to each model with the optimal set of parameters. This method is illustrated in Figure 3.2.

3.3.2 Neural Networks

Convolutional Neural Network
In the convolutional neural network model (CNN), an embedding layer is used as part of the model itself to procure word embeddings. This layer learns word vectors along with the model while being fitted on the training data. Each input document is provided as a sequence of N tokens
(N=2000) to the model's embedding layer, which transforms the tokens into n-dimensional word vectors (n=128). Subsequently, a one-dimensional spatial dropout layer is included for regularization, as suggested in paper [22]. Next, a one-dimensional convolution layer is placed in front of a one-dimensional pooling layer, namely a GlobalMaxPool layer. Finally, the output layer of this architecture is a dense layer with 16 units, with softmax as the activation function. The diagram of the Convolutional Neural Network architecture is given in Figure 3.3.

Figure 3.2: Process Diagram for Classification Algorithms - used for K Nearest Neighbor, Decision Tree, Support Vector Machine, Stochastic Gradient Descent, Multinomial Naïve Bayes, and Logistic Regression.

Dense neural networks
For this study, two dense neural network architectures are experimented with.

• DenseNN1: One dense neural network includes an embedding layer as part of the architecture, constructing word vectors and conducting the classification task simultaneously.
• DenseNN2: The other uses TF-IDF as the feature selection technique and hence does not contain an embedding layer (the proposed model).

In the dense neural network model which contains an embedding layer as its first layer (DenseNN1), the input is taken as sequences, as in the convolutional neural network (CNN). However, instead of a spatial dropout layer, a flatten layer is added that transforms the multi-dimensional word vectors into one-dimensional tensors. These tensors can then be employed by the following dense layer. Afterwards, a dropout layer is added for regularization. The output layer has the same structure as in the CNN.
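A hedged Keras sketch of the CNN architecture just described. The layer order, the 128-dimensional embedding and the 16-unit softmax output follow the text; the spatial-dropout rate and the convolution filter count and width are not stated in the text and are assumptions:

```python
from tensorflow.keras import layers, models

def build_cnn(vocab_size=5000, seq_len=2000, embed_dim=128, n_classes=16):
    """CNN per Section 3.3.2: embedding -> spatial dropout -> 1D conv
    -> global max pooling -> 16-unit softmax output layer."""
    model = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, embed_dim),    # learned 128-d word vectors
        layers.SpatialDropout1D(0.2),               # dropout rate assumed
        layers.Conv1D(128, 5, activation="relu"),   # filter count/width assumed
        layers.GlobalMaxPooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Global max pooling collapses each of the convolution filters to a single value, so the classifier sees one feature per filter regardless of document length.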
The diagram for the Dense Neural Network (DenseNN1) architecture is depicted in Figure 3.4.

Figure 3.3: Convolutional Neural Network (CNN) architecture - with word vectors as features.

Figure 3.4: Dense Neural Network (DenseNN1) architecture - with word vectors as features.

The proposed model, DenseNN2, uses the TF-IDF feature selection method. The first layer of this model is a dense layer which uses TF-IDF features as input. It is worth mentioning that this model does not include any flatten layer or spatial dropout layer, due to the absence of an embedding layer. Instead, a dropout layer is added after each of the first two dense layers to fulfil the purpose of a regularizer. The objective of a dropout layer is to improve generalization performance by inhibiting strong correlations among activations. Finally, the output layer is a dense layer with 16 units and uses softmax as the activation function. The diagram of the proposed method, the Dense Neural Network (DenseNN2) architecture, is illustrated in Figure 3.5.

Figure 3.5: The Proposed Method: Dense Neural Network (DenseNN2) architecture - with TF-IDF features.

10-fold cross-validation is used for estimating each model's capacity to predict on unseen data, for all neural network architectures and classification techniques. Cross-fold validation provides an estimation of the skill of a machine learning model, and the number 10 is chosen because various experiments conducted on varying datasets with different learning methods have shown that 10 folds obtain the best estimation [70]. The reason for choosing 128 dimensions for the embedding layer in the CNN and DenseNN1 is that paper [71] shows that 128 dimensions can capture the necessary semantic information. In our proposed model (DenseNN2), the input layer has 128 nodes as well, since in [72] the authors experimented
over 16, 32, 64 and 128 units in the input layer for abnormality detection in X-ray images and found that 128 nodes performed plausibly well. A uniformly distributed initialiser is used in the embedding layer; for all dense layers in these architectures, 'glorot uniform' is used as the kernel initialiser and 'zeros' as the bias initialiser. Batch size is usually selected as a power of two, since the number of physical processors of a GPU is often a power of two, and such a batch size ensures optimal computation due to data parallelism. In our experiments, the batch size is 128. The learning rate is 0.001, which is the default value of the Adam optimiser.

3.4 Model Evaluation

To evaluate the performance of our experimental models, we used accuracy and F1 score as performance measures. Prediction on the test set is conducted for each fitted model of all nine experiments. Predicted labels and true labels of the test set are used to quantify accuracy and F1 score. Moreover, 10-fold cross validation is performed on both the traditional classification algorithms and the neural networks for model estimation.

3.5 Summary

In summary, this chapter provides the structural information of the experiments designed for the purpose of this thesis. Experiments include the K-Nearest Neighbor, Decision Tree, Support Vector Machine, Stochastic Gradient Descent classifier, Multinomial Naïve Bayes and Logistic Regression algorithms, and three different architectures of neural networks where word embedding and TF-IDF are used as feature selection techniques. The proposed model uses TF-IDF as the feature selection technique and a dense neural network as the classifier.

Chapter 4
Experimental Analysis

This chapter provides implementational details of the designed experiments described in chapter 3.
Moreover, it provides information about the performance evaluation analysis of the experiments conducted in this study.

4.1 Data Collection and Preprocessing

For this thesis work, we collected 1600 articles from two different online Bangla newspapers, namely ProthomAlo and BDnews24. These 1600 articles were collected by parsing 16 different categories which are already labeled in these newspapers. Next, the articles were grouped into 16 classes with 100 articles per class, in the same categories as in the newspapers. Since these documents are collected from online newspapers, the raw data contains HTML tags, email addresses, English letters and digits, and many other unnecessary literals which add noise to the dataset. Thus, these needless characters are excluded from the dataset using regular expressions in the Python language. Python is used for all the tasks conducted in this thesis work. The documents are further processed during tokenization by incorporating Bangla Unicode values and keeping only Bangla words, using a regular-expression-based tokenization approach. Afterwards, the dataset is split into a training set and a testing set using a train/test split, where the testing set is 30% and the training set is 70% of the whole dataset.

4.2 Experimental Setup

In this section, the experimental details of our conducted experiments are provided. Feature selection techniques and hyperparameter optimization of the feature space are described in section 4.2.1. Fine tuning of the algorithms of this thesis work is described in section 4.2.2. Lastly, the outcome of this thesis work is analyzed in section 4.2.3.

4.2.1 Feature Selection

For the classification-algorithm-based models, Term Frequency-Inverse Document Frequency (TF-IDF) is used as the feature extraction technique. First,
the tokenized datasets are converted into bags of words by the 'CountVectorizer' function of scikit-learn's preprocessing tools. In this process, several parameters that shape the structure of the bag-of-words method are optimized, such as 'encoding', 'max_df', 'min_df', 'token_pattern', 'max_features' and 'lowercase'. Since this task is on the Bangla language, the 'encoding' parameter is set to UTF-8 and, in the 'token_pattern' parameter, the Bangla word pattern is provided as a regular expression. The 'lowercase' parameter is 'True' by default; however, this is not applicable to the Bangla language, so this value is changed to 'False'. The values of 'max_df' and 'min_df' are 0.7 and 4 respectively. The 'max_df' setting means that if a word appears in more than 70% of documents then that term is ignored, and the 'min_df' setting eliminates a word which appears in fewer than 4 documents. Optimizing these two parameters eliminates the need for stop word removal. The 'max_features' value is 1500, which means the top 1500 features are selected for this task after applying all the other optimizations. Subsequently, these bag-of-words vectors are transformed into TF-IDF vectors by a scikit-learn transformer called 'TfidfTransformer'. For the neural networks, the most common 5000 words are kept as features and transformed into sequences by the text tokenization preprocessing tool of the neural network library 'keras', which is built in the Python programming language. For the proposed model (DenseNN2), these sequences are converted into TF-IDF vector representations. For the neural network models with an embedding layer (CNN and DenseNN1), each input document should be of fixed size to maintain spatial integrity. Therefore, the maximum document length in the training set is calculated; the highest number of words in a training document is 2037.
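This feature extraction step can be sketched with the stated parameter values. The toy corpus below is an illustrative assumption: with `min_df=4` and `max_df=0.7`, only terms appearing in at least 4 documents but at most 70% of documents survive, so the ubiquitous word is dropped along with the rare one:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

# Toy corpus: 'খেলা' appears in all 10 docs (dropped by max_df=0.7),
# 'দল' in 5 docs (kept), 'জয়' in 2 docs (dropped by min_df=4).
docs = ["খেলা দল জয়"] * 2 + ["খেলা দল"] * 3 + ["খেলা"] * 5

cv = CountVectorizer(
    encoding="utf-8",
    token_pattern=r"[\u0980-\u09FF]+",  # Bangla word pattern
    lowercase=False,                    # lowercasing is meaningless for Bangla
    max_df=0.7,                         # ignore terms in >70% of documents
    min_df=4,                           # ignore terms in <4 documents
    max_features=1500,                  # keep the top 1500 features
)
counts = cv.fit_transform(docs)
tfidf = TfidfTransformer().fit_transform(counts)
print(sorted(cv.vocabulary_))  # ['দল']
print(tfidf.shape)             # (10, 1)
```

Filtering by document frequency in this way removes both stop-word-like terms (too frequent) and noise terms (too rare), which is why the thesis can skip explicit stop word removal.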
For this experiment, each article's word length is fixed to 2000 words by pre-padding or pre-truncating.

4.2.2 Algorithms

For the classification models, sets of parameters are first selected for the K Nearest Neighbor, Decision Tree, Support Vector Machine, Stochastic Gradient Descent, Multinomial Naïve Bayes and Logistic Regression algorithms, and the optimal parameters are found by doing grid search and cross validation jointly with 'GridSearchCV'. This is an exhaustive search method for obtaining ideal parameter values for an estimator combined with cross validation. It picks the most suitable parameters for the specific task based on cross validation (CV) scores and incorporates the user-chosen estimator. The impact of the various parameters on all these algorithms is described in section 2.2.1. The best set of parameters obtained from the various combinations of parameter sets after performing grid search and 10-fold cross validation is given below:

• Decision Tree: 'criterion': 'entropy', 'max_depth': 10, 'min_samples_leaf': 1
• Support Vector Machine: 'C': 10, 'gamma': 0.1, 'kernel': 'rbf'
• Logistic Regression: 'multi_class': 'multinomial', 'solver': 'newton-cg'
• K Nearest Neighbor: 'n_neighbors': 15, 'p': 2, 'weights': 'distance'
• Multinomial Naïve Bayes: 'alpha': 0.1
• Stochastic Gradient Descent: 'alpha': 0.0001, 'loss': 'log'

In terms of neural networks, the neural network library 'keras', which is written in the Python programming language, is used. All of the neural network architectures are constructed in a sequential manner, which allows the networks to be assembled layer by layer. For the proposed model
(DenseNN2), the input layer consists of 128 nodes and the Rectified Linear Unit (ReLU) is used as the activation function. The second dense layer involves 64 nodes with the sigmoid activation function. Furthermore, a dropout layer with a value of 0.5 is included after each dense layer, as recommended in [5]. For the CNN and the word-vector-based dense neural network (CNN, DenseNN1), the embedding layer takes 5000 tokens as the vocabulary, each token is represented in 128 dimensions (n=128), and the maximum length of each sequence is 2000. A spatial dropout layer with a value of 0.2 is incorporated in the CNN, whereas the word-vector-based dense neural network (DenseNN1) includes a flatten layer after the embedding layer. For the CNN, a one-dimensional convolution layer with 256 filters of kernel size 3 and ReLU as the activation function is added, followed by a 1-D global max pool layer. For the dense NN, after the flatten layer comes a dense layer composed of 128 units with sigmoid as the activation function, followed by a dropout layer with a value of 0.5 whose purpose is regularisation. Finally, for all NN models, the output layer is a dense layer with 16 units and the softmax activation function. During model compilation, the Adam optimiser is used with categorical cross entropy as the loss function [73]. While doing 10-fold cross validation on the neural networks, 10 models must be constructed and evaluated individually to analyse how well the model structure captures the details. This process is computationally expensive but provides a less biased estimate of the model. Thus, each neural network architecture is tested for 25 epochs with 10 models of the same architecture while the training set is split into 10 folds.
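The fold protocol just described (train a fresh model on nine folds, evaluate on the tenth, repeat) can be sketched with scikit-learn's KFold. A lightweight scikit-learn classifier and a synthetic dataset stand in for the keras models and the article corpus here, purely so the sketch stays cheap to run:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

# Hypothetical stand-in data; the thesis trains each keras model afresh per fold.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    # A fresh model per fold, mirroring the 10 separately constructed networks.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(len(scores))            # 10 per-fold estimates
print(float(np.mean(scores)))
```

Averaging the per-fold scores gives the less biased estimate mentioned above, at the cost of training ten models instead of one.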
In each cross validation iteration, a model with the same structure is trained on nine folds and tested on one fold, and this process is repeated until all of the folds are evaluated. Afterwards, one model for each of the three structures is tested for performance evaluation. The cross validation estimates at each iteration are shown as a chart in Figure 4.1. From this line plot, it can be noticed that the proposed model, the Dense Neural Network with TF-IDF as feature extractor (DenseNN2), has a better estimation than the other neural network architectures (CNN, DenseNN1) based on the 10-fold cross validation method.

4.2.3 Result

For this study, accuracy and the weighted averages of precision, recall and F1 score are selected as evaluation metrics. To calculate these scores, the confusion matrix is necessary. Thus, predictions are made on the test dataset and the confusion matrix is evaluated for all models. A confusion matrix provides information on how many documents of each class in the test set are correctly classified or misclassified. Consequently, accuracy and F1 score are calculated from the confusion matrix.

Figure 4.1: Cross Validation Scores Of Neural Network Models - 10-fold cross validation score of all three models after each iteration.

Table 4.1: Performance Evaluation Scores

Algorithms                     F1 Score (weighted avg)  Precision  Recall  Accuracy (%)
DenseNN2                       0.85                     0.85       0.85    85.208
Support Vector Machine         0.82                     0.82       0.82    81.870
Stochastic Gradient Descent    0.82                     0.83       0.82    82.890
Logistic Regression            0.81                     0.81       0.81    80.830
Multinomial Naïve Bayes        0.80                     0.81       0.81    80.620
CNN                            0.80                     0.79       0.79    79.166
K-Nearest Neighbor             0.76                     0.82       0.74    74.375
DenseNN1                       0.65                     0.62       0.61    62.291
Decision Tree                  0.53                     0.59       0.52    51.670

The confusion matrices of all the models are illustrated in Figures 4.2-4.10. Accuracy,
precision, recall and F1 score of all models are depicted in Table 4.1, and the cross validation scores are shown in Figure 4.1.

Discussion. It is noticeable from all the confusion matrices (Figures 4.2-4.10) that the dataset shows inter-class relations among the classes, meaning documents of one class could be categorized into another class as well. This can be noticed for several classes: documents in the 'movie' category can be classified into the 'entertainment' category, the 'commerce' and 'economy' classes share similar content, and documents of the 'national' class can include various kinds of information which may be content-wise similar to 'world', 'economy', 'education' and 'politics' news. The proposed model, the Dense Neural Network with TF-IDF as feature extractor (DenseNN2), shows this misclassification trend, as do the other well-performing classifier algorithms such as Support Vector Machine and Stochastic Gradient Descent.

Figure 4.2: Confusion Matrix of Proposed model: DenseNN2 - Dense Neural Network with TF-IDF as Feature extractor.

In terms of neural networks, both the proposed model DenseNN2 and the CNN architecture showed an almost similar pattern in their confusion matrices. Both architectures misclassified among categories which are content-wise similar and almost interchangeable, except for DenseNN1, whose architecture includes an embedding layer. The multi-dimensional word vectors are flattened in this architecture, which causes dimensionality reduction. Dimensionality reduction can be a benefit when a large number of features is involved; however, on a small dataset, reducing dimensionality can lead to data loss. This might be the reason for the low performance of this neural network architecture.

The worst performing experiment involves the Decision Tree algorithm. However, this phenomenon can be explained: although the Decision Tree algorithm is reliable for text classification, it uses features stepwise, requiring more data [4].
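The scores in Table 4.1 are derived from exactly these confusion-matrix quantities. With scikit-learn the computation can be sketched as follows (the toy true and predicted labels are hypothetical stand-ins for the 16-class test set):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

# Hypothetical labels: three classes, two test documents each.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

cm = confusion_matrix(y_true, y_pred)   # rows: true class, cols: predicted class
acc = accuracy_score(y_true, y_pred)    # correct predictions / total documents
# Weighted averages, as reported in Table 4.1.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted")
print(cm)
print(acc)          # 4 of 6 documents on the diagonal -> 0.666...
print(round(f1, 3))
```

The diagonal of the confusion matrix counts correctly classified documents per class; weighting the per-class precision, recall and F1 by class support gives the single summary numbers per model.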
Since our dataset is small, it can be expected that the performance of the Decision Tree could be improved by employing more training instances.

Figure 4.3: Confusion Matrix: Support Vector Machine - Support Vector Machine model with optimal parameters.

Table 4.1 shows the performance evaluation of all the experiments conducted in this thesis. From the table, it can be noticed that our proposed method, the Dense Neural Network with TF-IDF as feature extractor (DenseNN2), outscored (85.208% accuracy, 0.85 precision, recall and weighted F1-score) the other algorithms (Support Vector Machine, Stochastic Gradient Descent, Logistic Regression, Multinomial Naïve Bayes, Convolutional Neural Network (CNN), K-Nearest Neighbor, DenseNN1: the Dense Neural Network with an embedding layer as part of its architecture, and Decision Tree) on the basis of accuracy (%) and the weighted averages of precision, recall and F1-measure. The Support Vector Machine (SVM) and Stochastic Gradient Descent (SGD) models also provided satisfactory performance in terms of the evaluation metrics. The Decision Tree model is the worst performing one among all the experiments. In the case of the neural networks, the proposed method (DenseNN2) performed better than the other architectures (DenseNN1, CNN). Since the TF-IDF-based methods performed better than the word-vector-based CNN and DenseNN1, it can be said that TF-IDF remains a strong candidate as a feature selection technique for small-scale datasets.

Figure 4.4: Confusion Matrix: Stochastic Gradient Descent - Stochastic Gradient Descent model with optimal parameters.

Furthermore, it is worth mentioning that although neural networks generally require more training data than traditional machine learning algorithms, in this study both the proposed method (DenseNN2) and the CNN performed surprisingly well, with DenseNN2 surpassing the other experimental models in terms of
evaluation metrics.

Figure 4.5: Confusion Matrix: Logistic Regression - Logistic Regression model with optimal parameters.

Figure 4.6: Confusion Matrix: Multinomial Naïve Bayes - Multinomial Naïve Bayes model with optimal parameters.

Figure 4.7: Confusion Matrix: Convolutional Neural Network - Convolutional Neural Network with word embedding (CNN).

Figure 4.8: Confusion Matrix: K Nearest Neighbor - K Nearest Neighbor model with optimal parameters.

Figure 4.9: Confusion Matrix: Dense Neural Network - Dense Neural Network with embedding layer as part of architecture (DenseNN1).

Figure 4.10: Confusion Matrix: Decision Tree - Decision Tree model with optimal parameters.

Chapter 5
Conclusion and Future works

This chapter provides the concluding remarks of this thesis work, its limitations and the future direction of this thesis. In section 5.1, the conclusion of this thesis work is provided. In section 5.2, the limitations are discussed and, lastly, in section 5.3, the future direction of this thesis work is outlined.

5.1 Conclusion

In conclusion, this thesis presents a system to categorize Bangla text documents by integrating TF-IDF as the feature selection technique with a dense neural network.
The main findings are as follows:

• The proposed system has accomplished higher accuracy (85.208%) and F1-score (0.85) than the rest of the experiments.
• The proposed method, which incorporates a Term Frequency-Inverse Document Frequency based neural network, has superior performance to the word-embedding-based neural network models.
• Neural networks perform well on a relatively small dataset for Bangla text classification, contrary to the popular belief that neural networks require a large-scale dataset for good performance.
• Among the classification models, Support Vector Machine is a close competitor to the proposed model based on performance.

5.2 Limitations

A limitation of this work is that when two classes have similar article content, the classifier may predict incorrectly, since some articles can be part of both classes. For instance, misclassification is noticed between the 'movie' and 'entertainment' classes, since both can hold similar kinds of articles, which raises the question of a multi-label classification problem. Considering the fact that an article can belong to more than one class, implementing multi-label categorization can address this limitation.

5.3 Future Work

The future directions of this work are as follows:

• Implementing multi-label classification to improve the classification of articles which belong to different classes with similar content.
• Incorporating a hybrid approach with a deep neural network on a larger dataset for Bangla text document categorization.
• Reducing the dimensionality of features by including PCA to improve model performance.

Bibliography

[1] M. Krendzelak and F. Jakab, "Text categorization with machine learning and hierarchical structures," in 2015 13th International Conference on Emerging eLearning Technologies and Applications (ICETA), Nov 2015, pp. 1–5.
[2] A. McCallum, "Multi-label text classification with a mixture model trained by EM," in AAAI Workshop on Text Learning, 1999, pp. 1–7.
[3] S. Tong and D. Koller, "Support vector machine active learning with applications to text classification," Journal of Machine Learning Research, vol. 2, no. Nov, pp. 45–66, 2001.
[4] D. Lewis and M. Ringuette, "A comparison of two learning algorithms for text categorization," Third Annual Symposium on Document Analysis and Information Retrieval, 1996.
[5] Y. Kim, "Convolutional neural networks for sentence classification," arXiv preprint arXiv:1408.5882,
2014.
[6] B. Purkaystha, T. Datta, M. S. Islam et al., "Layered representation of Bengali texts in reduced dimension using deep feedforward neural network for categorization," in 2018 21st International Conference of Computer and Information Technology (ICCIT). IEEE, 2018, pp. 1–5.
[7] M. R. Hossain and M. M. Hoque, "Automatic Bengali document categorization based on deep convolution nets," in Emerging Research in Computing, Information, Communication and Applications. Springer, 2019, pp. 513–525.
[8] G. Forman and I. Cohen, "Learning from little: Comparison of classifiers given little training," in European Conference on Principles of Data Mining and Knowledge Discovery. Springer, 2004, pp. 161–172.
[9] K. Masuda, T. Matsuzaki, and J. Tsujii, "Semantic search based on the online integration of NLP techniques," Procedia - Social and Behavioral Sciences, vol. 27, pp. 281–290, 2011.
[10] C. Friedman, T. C. Rindflesch, and M. Corn, "Natural language processing: state of the art and prospects for significant progress, a workshop sponsored by the National Library of Medicine," Journal of Biomedical Informatics, vol. 46, no. 5, pp. 765–773, 2013.
[11] Y. Goldberg, "Neural network methods for natural language processing," Synthesis Lectures on Human Language Technologies, vol. 10, no. 1, pp. 1–309, 2017.
[12] L. H. Lee, C. H. Wan, R. Rajkumar, and D. Isa, "An enhanced support vector machine classification framework by using Euclidean distance function for text document categorization," Applied Intelligence, vol. 37, no. 1, pp. 80–99, 2012.
[13] S. Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747, 2016.
[14] D. D. Lewis, "Naive (Bayes) at forty: The independence assumption in information retrieval," in European Conference on Machine Learning. Springer, 1998, pp. 4–15.
[15] A. McCallum, K.
Nigam et al., "A comparison of event models for naive Bayes text classification," in AAAI-98 Workshop on Learning for Text Categorization, vol. 752, no. 1. Citeseer, 1998, pp. 41–48.
[16] S. Xu, "Bayesian naïve Bayes classifiers to text classification," Journal of Information Science, vol. 44, no. 1, pp. 48–59, 2018.
[17] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
[18] J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler, "Efficient object localization using convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 648–656.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
[20] W.-t. Yih, X. He, and C. Meek, "Semantic parsing for single-relation question answering," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Baltimore, Maryland: Association for Computational Linguistics, Jun. 2014, pp. 643–648.
[21] N. Kalchbrenner, E. Grefenstette, and P. Blunsom, "A convolutional neural network for modelling sentences," arXiv preprint arXiv:1404.2188, 2014.
[22] B. Shu, F. Ren, and Y. Bao, "Investigating LSTM with k-max
pooling for text classification," in 2018 11th International Conference on Intelligent Computation Technology and Automation (ICICTA). IEEE, 2018, pp. 31–34.
[23] Y. Zhang and B. Wallace, "A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification," arXiv preprint arXiv:1510.03820, 2015.
[24] V. Bijalwan, V. Kumar, P. Kumari, and J. Pascual, "KNN based machine learning approach for text and document mining," International Journal of Database Theory and Application, vol. 7, no. 1, pp. 61–70, 2014.
[25] Y. H. Li and A. K. Jain, "Classification of text documents," The Computer Journal, vol. 41, no. 8, pp. 537–546, 1998.
[26] X. Zhang, J. Zhao, and Y. LeCun, "Character-level convolutional networks for text classification," in Advances in Neural Information Processing Systems, 2015, pp. 649–657.
[27] J. Y. Lee and F. Dernoncourt, "Sequential short-text classification with recurrent and convolutional neural networks," in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, pp. 515–520.
[28] P. Bolaj and S. Govilkar, "Text classification for Marathi documents using supervised learning methods," International Journal of Computer Applications, vol. 155, no. 8, pp. 6–10, 2016.
[29] R. M. Rakholia and J. R. Saini, "Classification of Gujarati documents using naïve Bayes classifier," Indian Journal of Science and Technology, vol. 5, pp. 1–9, 2017.
[30] N. Krail and V.
Gupta, "Domain based classification of Punjabi text documents using ontology and hybrid based approach," in Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing, 2012, pp. 109–122.
[31] H. M. Noaman, S. Elmougy, A. Ghoneim, and T. Hamza, "Naive Bayes classifier based Arabic document categorization," in 2010 The 7th International Conference on Informatics and Systems (INFOS), March 2010, pp. 1–5.
[32] A. H. Mohammad, O. Al-Momani, and T. Alwada'n, "Arabic text categorization using k-nearest neighbour, decision trees (C4.5) and Rocchio classifier: a comparative study," International Journal of Current Engineering and Technology, vol. 6, no. 2, pp. 477–482, 2016.
[33] M. Gumilang and A. Purwarianti, "Experiments on character and word level features for text classification using deep neural network," in 2018 Third International Conference on Informatics and Computing (ICIC), Oct 2018, pp. 1–6.
[34] Y. Jin, C. Luo, W. Guo, J. Xie, D. Wu, and R. Wang, "Text classification based on conditional reflection," IEEE Access, vol. 7, pp. 76712–76719, 2019.
[35] W. Zhang, T. Yoshida, and X. Tang, "A comparative study of TF*IDF, LSI and multi-words for text classification," Expert Systems with Applications, vol. 38, no. 3, pp. 2758–2765, 2011.
[36] A. Dhar, N. S. Dash, and K. Roy, "Application of TF-IDF feature for categorizing documents of online Bangla web text corpus," in Intelligent Engineering Informatics. Springer, 2018, pp. 51–59.
[37] F. Kabir, S. Siddique, M. R. A. Kotwal, and M. N. Huda, "Bangla text document categorization using stochastic gradient descent (SGD) classifier," in 2015 International Conference on Cognitive Computing and Information Processing (CCIP). IEEE, 2015, pp. 1–4.
[38] S. Al Mostakim, F. Ehsan, S. M. Hasan, S. Islam, and S. Shatabda, "Bangla content
categorization using text based supervised learning methods," in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP). IEEE, 2018, pp. 1–6.
[39] A. Dhar, N. S. Dash, and K. Roy, "Classification of Bangla text documents based on inverse class frequency," in 2018 3rd International Conference On Internet of Things: Smart Innovation and Usages (IoT-SIU). IEEE, 2018, pp. 1–6.
[40] A. K. Mandal and R. Sen, "Supervised learning methods for Bangla web document categorization," arXiv preprint arXiv:1410.2045, 2014.
[41] A. Dhar, H. Mukherjee, N. S. Dash, and K. Roy, "Performance of classifiers in Bangla text categorization," in 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET). IEEE, 2018, pp. 168–173.
[42] A. Dhar, N. Dash, and K. Roy, "Classification of text documents through distance measurement: an experiment with multi-domain Bangla text documents," in 2017 3rd International Conference on Advances in Computing, Communication & Automation (ICACCA) (Fall). IEEE, 2017, pp. 1–6.
[43] A. Dhar, N. S. Dash, and K. Roy, "An innovative method of feature extraction for text classification using part classifier," in International Conference on Information, Communication and Computing Technology. Springer, 2018, pp. 131–138.
[44] Q. I. Mahmud, N. I. Chowdhury, and M. Masum, "Reducing feature space and analyzing effects of using non linear kernels in SVM for Bangla news categorization," in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP). IEEE, 2018, pp. 1–6.
[45] M. S. Islam, F. E. M. Jubayer, and S. I. Ahmed, "A support vector machine mixed with TF-IDF algorithm to categorize Bengali document," in 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE). IEEE, 2017, pp. 191–196.
[46] A. Dhar, N. S. Dash, and K.
Roy, "A fuzzy logic-based Bangla text classification for web text documents."
[47] T. Dash Roy, S. Khatun, R. Begum, and A. M. Saadat Chowdhury, "Vector space model based topic retrieval from Bengali documents," in 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET), Oct 2018, pp. 60–63.
[48] Q. Li, L. He, and X. Lin, "Dimension reduction based on categorical fuzzy correlation degree for document categorization," in 2013 IEEE International Conference on Granular Computing (GrC), Dec 2013, pp. 186–190.
[49] Y. Saad and K. Shaker, "Support vector machine and back propagation neural network approach for text classification."
[50] V. Tam, A. Santoso, and R. Setiono, "A comparative study of centroid-based, neighborhood-based and statistical approaches for effective document categorization," in Object Recognition Supported by User Interaction for Service Robots, vol. 4, Aug 2002, pp. 235–238.
[51] Z. Zhen, H. Wang, L. Han, and Z. Shi, "Categorical document frequency based feature selection for text categorization," in 2011 International Conference of Information Technology, Computer Engineering and Management Sciences, vol. 2, Sep. 2011, pp. 65–68.
[52] Z. Wang, X. Sun, and Q. Zhang, "Document categorization algorithm based on kernel NPE," in 2009 Chinese Control and Decision Conference, June 2009, pp. 2958–2961.
[53] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts, "Recursive deep models for semantic compositionality over a sentiment treebank," in Proceedings of the
2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 1631–1642.
[54] B. Pang and L. Lee, "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts," in Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2004, p. 271.
[55] X. Li and D. Roth, "Learning question classifiers," in Proceedings of the 19th International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, 2002, pp. 1–7.
[56] T. Wilson, P. Hoffmann, S. Somasundaran, J. Kessler, J. Wiebe, Y. Choi, C. Cardie, E. Riloff, and S. Patwardhan, "OpinionFinder: A system for subjectivity analysis," in Proceedings of HLT/EMNLP 2005 Interactive Demonstrations, 2005, pp. 34–35.
[57] S. Kim, L. F. D'Haro, R. E. Banchs, J. D. Williams, M. Henderson, and K. Yoshino, "The fifth dialog state tracking challenge," in 2016 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2016, pp. 511–517.
[58] A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke et al., "The ICSI meeting corpus," in 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), vol. 1. IEEE, 2003, pp. I–I.
[59] D. Jurafsky, R. Bates, N. Coccaro, R. Martin, M. Meteer, K. Ries, E. Shriberg, A. Stolcke, P. Taylor, and C. Van Ess-Dykema, "Automatic detection of discourse structure for speech recognition and understanding," in 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings. IEEE, 1997, pp. 88–95.
[60] M. N. Hasan, S. Bhowmik, and M. M. Rahaman, "Multi-label sentence classification using Bengali word embedding model," in 2017 3rd International Conference on Electrical Information and Communication Technology (EICT), Dec 2017, pp. 1–6.
[61] M. Islam, F. E. M. Jubayer, S. I.
Ahmed et al., "A comparative study on different types of approaches to Bengali document categorization," arXiv preprint arXiv:1701.08694, 2017.
[62] M. R. Hossain and M. M. Hoque, "Automatic Bengali document categorization based on word embedding and statistical learning approaches," in 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2). IEEE, 2018, pp. 1–6.
[63] Z. Islam, M. R. Rahman, and A. Mehler, "Readability classification of Bangla texts," in International Conference on Intelligent Text Processing and Computational Linguistics. Springer, 2014, pp. 507–518.
[64] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in Bangla microblog posts," in 2014 International Conference on Informatics, Electronics & Vision (ICIEV). IEEE, 2014, pp. 1–6.
[65] A. R. Pal, D. Saha, and N. S. Dash, "Automatic classification of Bengali sentences based on sense definitions present in Bengali WordNet," arXiv preprint arXiv:1508.01349, 2015.
[66] M. Lan, C.-L. Tan, H.-B. Low, and S.-Y. Sung, "A comprehensive comparative study on term weighting schemes for text categorization with support vector machines," in Special Interest Tracks and Posters of the 14th International Conference on World Wide Web. ACM, 2005, pp. 1032–1033.
[67] B. Trstenjak, S. Mikac, and D. Donko, "KNN with TF-IDF based framework for text categorization," Procedia Engineering, vol. 69, pp. 1356–1364, 2014.
[68] A. Ahmad and M. R. Amin, "Bengali word embeddings and its application in solving document classification problem," in 2016 19th International Conference on Computer and Information Technology (ICCIT). IEEE, 2016, pp. 425–430.
[69]
List of Figures
List of Tables
1 Introduction
  1.1 Introduction
  1.2 Problem Statement
  1.3 Motivation
  1.4 Objectives of the Thesis
  1.5 Brief Methodology
  1.6 Thesis Contributions
  1.7 Organization of the Thesis
2 Background and Literature Review
  2.1 Preliminaries
    2.1.1 Feature Selection Methods
      2.1.1.1 Term Frequency - Inverse Document Frequency (TF-IDF)
      2.1.1.2 Word Embedding
    2.1.2 Algorithms
      2.1.2.1 Support Vector Machine
      2.1.2.2 Stochastic Gradient Descent
      2.1.2.3 Logistic Regression
      2.1.2.4 Naïve Bayes
      2.1.2.5 K-Nearest Neighbor
      2.1.2.6 Decision Tree
      2.1.2.7 Neural Network
    2.1.3 Evaluation Metrics
  2.2 Literature Review
    2.2.1 Language
    2.2.2 Data Source
    2.2.3 Feature Selection Methods
    2.2.4 Algorithms
    2.2.5 Performance Evaluation
3 Proposed Method
  3.1 Block Diagram
    3.1.1 Data Preprocessing
  3.2 Feature Extraction
  3.3 Model Fitting
    3.3.1 Classification Algorithms
    3.3.2 Neural Networks
  3.4 Model Evaluation
  3.5 Summary
4 Experimental Analysis
  4.1 Data Collection and Preprocessing
  4.2 Experimental Setup
    4.2.1 Feature Selection
    4.2.2 Algorithms
    4.2.3 Result
5 Conclusion and Future Works
  5.1 Conclusion
  5.2 Limitations
  5.3 Future Work
Bibliography
A study of readability of texts in Bangla through machine learning approaches
Manjira Sinha & Anupam Basu
© Springer Science+Business Media New York 2014

Abstract In this work, we have investigated text readability in the Bangla language. Text readability is an indicator of the suitability of a given document with respect to a target reader group; it therefore has a huge impact on educational content preparation. Advances in the field of natural language processing have enabled the automatic identification of the reading difficulty of texts and contributed to the design and development of suitable educational materials. Although Bangla is one of the major languages in India and the official language of Bangladesh, research on text readability in Bangla is still in its nascent stage. In this paper, we present computational models to determine the readability of Bangla text documents based on syntactic properties. Since Bangla is a digitally resource-poor language, we had to develop a novel dataset suitable for automatic identification of text properties. Our initial experiments have shown that existing English readability metrics are inapplicable to Bangla. Accordingly, we have proceeded towards new models for analyzing text readability in Bangla, considering language-specific syntactic features of Bangla text. We have identified the major structural contributors responsible for text comprehensibility and subsequently developed readability models for Bangla texts. We have used different machine-learning methods such as regression, support vector machines (SVM) and support vector regression (SVR) to achieve our aim. The performance of the individual models has been compared against one another. We have conducted a detailed user survey for data preparation, identification of important structural parameters of texts and validation of our proposed models.
The work possesses further implications in the field of educational research and in matching texts to readers.

Keywords Bangla text comprehensibility · Text readability · Resource creation · Readability models · Regression · Support vector machines · Support vector regression · User study

Educ Inf Technol, DOI 10.1007/s10639-014-9368-y

M. Sinha (*) : A. Basu
Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India
e-mail: manjira87@gmail.com

1 Introduction

Reading is a complex cognitive action involving different steps like recognizing and understanding individual words from a text, and decoding the grammatical structure of sentences to obtain the semantic information conveyed by them. Text readability or text comprehensibility generally refers to how well a reader is able to comprehend the content of a text through reading. According to Edgar Dale and Jean Chall (1948), readability is "… the sum total (including all the interactions) of all those elements within a given piece of printed material that affect the success a group of readers have with it. The success is the extent to which they understand it, read it at an optimal speed, and find it interesting." We will use the terms readability and comprehensibility interchangeably in this paper. For any given text, there is no absolute measure of difficulty based purely on the text parameters. The way a text will be interpreted depends on the background and context of the reader. Therefore, the readability of a text has to be studied in relation to the characteristics of its reader as well. Subsequently, we can assume that the cognitive load associated with
the understanding of a text depends broadly on five factors. The first four are text-related parameters; they are often intertwined, and the fifth relates to the reader:

1. Lexical choice: the complexity of the different words or phrases used in the text.
2. Syntactic complexity: the structural features of a text; it depends on the nature of sentences, their construction and inherent difficulties.
3. Semantic complexity: the difficulty of grasping meaning from the words or sentences used in the text.
4. Discourse-level complexity: it depends on text properties like cohesion, coherence and the rhetorical structure of the text.
5. Background of the reader or the target audience: a complex derivative of one's educational and socio-cultural background.

Apart from the reader and the text, the communicating language also affects readability. The structure and pattern of a language reflect how its users perceive and understand their surrounding world. Every language has some unique properties depending on its demography, which in turn control the determining factors for readability in that language. Several cross-linguistic experiments indicate that language comprehension and processing are quite language dependent (Taft 2004). Therefore, the findings from experiments in one language cannot be generalized to all languages, making it important to conduct similar experiments in other languages. Consequently, different languages have developed different readability formulae (Rabin et al. 1988).

1.1 Importance of text readability

Readability of a text is a significant factor in the design of contents intended to match the reading competence of the target populations. Easy-to-read texts improve comprehension, retention, reading speed and reading persistence.
The impact of readabilityresearch is felt whenever we need to have effective textual communication with peoplein different fields of activities, such as education, health-care, business or governmentEduc Inf Technolpolicies. Expenditure overhead of most of the public welfare and awareness systemincreases largely due to lack of understanding of the information and instructionmanuals. It has been found in studies that the average reading level of adult populationin USA is 8th grade (Cotugna et al. 2005), which is why many public documents fail tomeet their good intentions as they require higher comprehending abilities.1.1.1 Importance of text readability in educational purposesResearch in text readability began from an educational perspective. It has been wellestablished that the efficiency of educational contents for both children and adultsincreases if the comprehension difficulty and design of the text matches the targetstudent group (DuBay 2007). Early researches such as Flesch Reading Ease Index,Dale-Chall readability formulae, Gunning Fog index etc. (refer to section 2) werededicated to improving the school and college level reading materials. They focusedon the reading difficulty of the educational materials and the actual reading ability ofthe target students groups. Subsequently, the scope of these researches extended beyondthe conventional education sector to the areas like training guide for air-force andinstruction manual for health education (DuBay 2004). Formulae like ATOS-TASA(Learning 2001) and Read-X (Miltsakaki and Troutt 2007) were developed by profes-sionals in order to make school textbooks readable for students. Models such asproposition-inference by Kintsch and Van Dijk (1978) attempted to level text formatching the target reader population along with identifying the difficult areas.Methods like Latent Semantic Analysis (LSA) have</s>
been found to be effective for determining suitable educational material for college goers. More recent machine learning approaches by Heilman et al. (2008) and Petersen and Ostendorf (2009) have presented more in-depth analyses of educational contents, in both the formal and informal sectors. The studies mentioned above are only a few examples from the numerous works on the readability of educational materials. This brief overview suggests the importance of text readability at all levels of education. Therefore, we can conclude that readability has a two-fold importance in social aspects: first, in developing efficient contents for the successful dissemination of education and literacy, and second, in designing materials for the successful conveyance of information to the target reader.

1.2 The context of Bangla

Literacy is the key to socio-economic progress. In India, the adult literacy rate is well below the world average.[1] A low literacy rate implies lower reading levels, which in turn impedes the empowerment of the common people. Moreover, India is a country with a large number of languages; according to census 2011 there are 1635 recognized mother tongues spanning various language families, and approximately 23 official languages that are used by different states.[2] Many of these languages have regional dialects. Although English is one of the official languages, a large section of the Indian population primarily use their mother tongue.

[1] http://www.censusindia.gov.in/2011-prov-results/indiaatglance.html
[2] http://en.wikipedia.org/wiki/Languages_with_official_status_in_India#Eighth_Schedule_to_the_Constitution
Therefore, to reach a large number of people, it is imperative to communicate in the native languages. Despite the fact that readability measures are language dependent, research on Indian language text readability is still in its infancy. In this paper, we have focused on identifying the readability of Bangla texts. Bangla (also known as Bengali) is an eastern Indo-Aryan language and has its own script. Its 220 million native and about 250 million total speakers have made it the seventh most spoken language in the world. Bangla is the second most spoken (after Hindi) and one of the official languages of India, with about 84 million native users;[3] it is also the national language of Bangladesh. The need for focusing on one of the major native languages of India lies in the fact that people can interpret better when the documents are in their own language or mother tongue, i.e. their L1 language (Oakland and Lane 2004). However, even native-language instruction has to be comprehensible to the target reader; many welfare programs fail, as they require people to have a higher reading level than they presently possess. Therefore, texts have to be designed, customized and presented in a manner that suits the cognitive capacity of the target population. To meet this goal, at the very beginning, we have to identify how the different textual features affect text difficulty in Bangla and how they can be modelled effectively.

1.2.1 Issue of usage variations and diglossia in Bangla

Bangla as used in India and Bangladesh possesses some phonetic and accent differences (refer to footnote 3) and alternate lexical terms to denote some concepts. For example, to denote water, Bangla speakers in West Bengal, India mostly use jala (জল), while pAni (পানি) is used in Bangladesh; these variations are primarily due to religious influence on language rather than regional influence.[4] Bangla has many dialectical variations in India as
well as in Bangladesh (Agnihotri 2008). The standard form of Bangla used at present in both India and Bangladesh is based on the West-Central dialect of the Nadia district (Agnihotri 2008). Bangla also exhibits diglossia, i.e. the situation where two variants (or dialects) of the same language are used by a single language community (Ferguson 1959). In standard modern Bangla, the two diglossic variations are sAdhubhAshA (সাধুভাষা), the highly codified version intended for formal documents, and the more colloquial calitabhAshA (চলিতভাষা) form (Chakraborti 2003). The differences between the two forms are that the former uses more Sanskritized vocabulary and longer verb inflections, while the latter has comparatively simpler grammatical forms. However, in present-day usage of Bangla, apart from some legal and formal government documents, the calita form is used everywhere for both spoken and written communication. In our study, we have focused on the calita (চলিত) form of the Bangla language, as it is the only form used for real-life purposes. Apart from some documents selected from the classic literature corpora (refer to section 4), all other test documents are in the calita version. However, all the native Bangla users who participated in our experiment are accustomed to the sAdhu (সাধু) version, as it is taught in school-level language courses.

[3] http://en.wikipedia.org/wiki/Bengali_language
[4] We have used the convention: Bangla term (iTrans transliteration)

1.2.2 Differences between Bangla and English

In section 4.1, we have demonstrated that some of the widely used English readability formulae are not applicable for estimating text difficulty in Bangla. We concluded that this phenomenon is a result of the language structure differences between Bangla and English.
Here, we have enumerated some of those points.

- Bangla and English belong to two very different language branches, namely Indo-Aryan and West Germanic.
- Bangla is morphologically richer than English. It has a rich inflectional and derivational morphology inherited from Sanskrit, Persian and Arabic. This results in an abundance of compounding, and Bangla shows mild agglutination.
- Bangla orthography belongs to the abugida[5] class while English belongs to the alphabetic class. Therefore, the formation and identification of the visual form of a Bangla word happen in a different way than in English.
- Almost all the readability metrics in English treat polysyllabic words (more than two syllables) as hard words, but in Bangla polysyllabic words are common in everyday use. For example, the verb kariAchilAma (করেছিলাম) in Bengali corresponds to the phrase "I had done" in English and contains 5 syllables.
- Bangla is a head-final language that allows agreement with the rightmost conjunct when the verb follows the conjoined phrase.
- Bangla is also a wh-in-situ language, and the use of a copula is not necessary for sentence construction in Bangla. Both of these properties affect the skill required for sentence processing in Bangla.
- Bangla is a flexible word order language, which allows multiple grammatically correct surface forms.

Based on the above discussion, the objectives of our work are to: a) create an annotated readability resource in Bangla, b) develop computational models for text readability in Bangla, and c) evaluate the models. Our final aim is to design an input-output black-box system (see Fig. 1) which, given an input text in Bangla, will return some syntactic statistics for
the text as well as its readability value as estimated by our readability formulae, along with the result of a binary classification (refer to section 5 for details of model building). At first, we have demonstrated that well-known readability measures such as the Flesch Reading Ease Index and the SMOG index cannot be applied to determine text readability in Bangla. Then we have developed models in Bangla to predict the overall reading difficulty of a text as perceived by a native user. Our target user group (see section 4.2) consists of undergraduate or graduate level students belonging to medium to low economic backgrounds. We have used machine-learning methods such as regression, support vector machines and support vector regression to achieve our aim. The outcomes of the models are discussed in the context of text comprehensibility and are compared against each other. Our study is based on the structural or syntactic features of a text, like average sentence length and average word length in terms of visual units, and we have customized the definitions of these features to accommodate the specificities of Bangla (refer to the "Data Preparation" section).

[5] http://en.wikipedia.org/wiki/Abugida

The organization of the rest of the paper is as follows: section 2 presents a brief literature survey of text readability; section 3 describes text preparation and feature selection; section 4 illustrates the inapplicability of English readability metrics for Bangla and the subsequent user study that has been undertaken. Section 5 details the computational procedures and the inferences we have drawn; this section is followed by the final section on conclusions and future works.

1.2.3 Related works

The objective of readability studies was to develop practical and easy-to-implement methods to grade texts according to the reading abilities of adults (Buswell 1937). The quantitative analysis of text readability started with L.A. Sherman in 1880 (Sherman 1893).
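The paper's own regression models are detailed later (section 5) and are not reproduced here. As a generic illustration of the regression-based modelling mentioned above, the sketch below fits an ordinary-least-squares model of user-perceived difficulty on two structural features; the feature values and function names are ours, invented for illustration:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y."""
    n, d = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(d)]
         for i in range(d)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(d)]
    # Gaussian elimination with partial pivoting
    for col in range(d):
        p = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    w = [0.0] * d
    for r in range(d - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, d))) / A[r][r]
    return w

# Hypothetical rows: [1 (intercept), avg sentence length, avg word length]
X = [[1, 8.0, 3.1], [1, 12.5, 3.8], [1, 15.0, 4.2], [1, 21.0, 5.0], [1, 25.5, 5.6]]
y = [2.1, 4.0, 5.2, 7.4, 8.9]  # invented user difficulty ratings on a 1-10 scale
w = fit_ols(X, y)
predicted = w[0] + w[1] * 18.0 + w[2] * 4.5  # difficulty of an unseen text
```

The same feature matrix would feed an SVM or SVR model; only the fitting step differs.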
Until date, English has over 200 readabilitymetrics. Studying readability is gaining much popularity (DuBay 2004).Attempts have also been made in other languages such as Spanish, French,German, Dutch, Swedish, Russian, Hebrew, Chinese, Vietnamese and Korean(Rabin et al. 1988). The existing quantitative approaches towards predictingreadability of a text can be broadly classified into three categories (Benjamin2012) –these categories are not distinct, many a times they overlap andcommunicate with each other:1.3 Classical methodsThis type of approaches incorporate the syntactic features of a text like sen-tence length, paragraph length etc. The examples are Flesch Reading EaseScore (Flesch 1948), FOG index (Gunning 1968), Fry graph (Fry 1968),SMOG (McLaughlin 1969) etc. The chronologically newer formulas like newDale-Chall index (Chall 1995), lexile framework (Stenner 1996), ATOS-TASA(Learning 2001), Read-X (Miltsakaki and Troutt 2007) consider the readers’background and text semantics by incorporating information like word famil-iarity, word frequency or graded vocabulary list etc. We have considered thefollowing four models to examine their applicability in Bangla; the reason isFig. 1 Outline of a readability calculating systemEduc Inf Technoltheir high correlation with the established comprehension tests in English(DuBay 2007; McLaughlin 1969):11 Flesch Reading Ease=206.835−(1.015×averagesentence length)−(84.6×average number of syllablesper word)32 Gunning FOG grade=0.4 (averagesentence length+percentage of HardWords)23 Flesch-Kincaid Grade-Level=(0.39× average sentencelength)+(11.8×average number of syllables perword )—15.5944 SMOG grading=3+square root ofpolysyllable count per 30 sentencesThe formulae from this class fail short in some major situations. They</s>
do not take into account the background of the reader and only measure the surface-level features of a text. They do not consider the semantic features of the text, such as whether the actual contents are making sense or not. Despite their shortcomings, these simple metrics are still useful for many purposes: they are easy to calculate and provide a rough estimate of the reading difficulty of a given text.

1.4 Cognitively motivated methods

This class of methods uses high-level text parameters like cohesion and organization, and cognitive aspects of the reader. The proposition and inference model (Kintsch and Van Dijk 1978), prototype theory (Rosch 1978), latent semantic analysis (Landauer et al. 1998) and semantic networks (Foltz et al. 1998) are examples of this category. This type of approach introduced text levelling or text revising methods (Kemper 1983; Britton and Gülgöz 1991). One distinguished instance of this class is Coh-Metrix (Graesser et al. 2004; Graesser et al. 2011), which scores a text based on 200 different features spanning a broad range, including sophisticated features like text cohesion. Coh-Metrix has been used to measure text difficulty for both L1 and L2 (Crossley et al. 2007). Recently, Coh-Metrix has been used to assess writing quality and to distinguish the relative cohesion of texts (McNamara et al. 2010). The DeLite software (vor der Brück et al. 2008) uses a dedicated syntactic-semantic parser to analyze a text in German. It predicts the hardness of a text based on textual parameter values from five different levels: morphological, lexical, syntactic, semantic and discourse. It also incorporates machine-learning algorithms to normalize the parameter values and determine indicator weights. This in turn improves its performance; DeLite's predictions have been found to be more correlated with user judgments than those of traditional formulas (Benjamin 2012).
DeLite serves as a bridge between cognitively motivated methods and statistical language modelling techniques. This group of models moves beyond the surface features of a text and tries to measure objectively the different cognitive indicators associated with the text and the reader. However, such studies are still in their infancy. One of the major drawbacks of these techniques is that they can be too complex to be implemented for practical purposes; they require automation to an extent. Moreover, it has been observed that, in many situations, some traditional indicators perform as well as the newer and more difficult versions (Crossley et al. 2007).

1.5 Methods involving statistical language modelling

This class of approaches brings the power of machine learning methods to the field of readability. They expand the traditional simple indices into a probabilistic analysis. They are particularly useful in determining online readability based on user queries (Liu et al. 2004), and they have been used to predict the readability of web texts (Collins-Thompson and Callan 2005; Collins-Thompson and Callan 2004; Si and Callan 2003). Sophisticated machine learning methods like support vector machines (SVM) have been used to identify grammatical patterns within a text and to classify texts based on them, for both web texts (Heilman et al. 2008) and traditional texts (Schwarm and Ostendorf 2005; Petersen and Ostendorf 2009). Schwarm and Ostendorf (2005) have also used support vector machines for error analysis of text difficulty with respect to the intended grade level. Although these methods sound promising, the shortcoming is that they cannot act as standalone
measures, as they need an amount of training data for classifiers appropriate to a particular user group. Moreover, they also need extensive user studies for training and validation.

1.6 Work done in Bangla

Compared to the numerous readability measures in English (Benjamin 2012), few initiatives have been taken in Bangla. Das and Roychoudhury (2006) studied a miniature model with respect to one-parametric and two-parametric fits. They considered two structural features of a text: average sentence length and number of syllables per 100 words. Seven paragraphs from seven different texts were used. They found the two-parametric fit to be the better performer. Islam et al. (2012) performed readability classification on 24 textbooks in Bangla from the National Curriculum and Textbook Board, Bangladesh.[6] The books range from class two to class eight, and the reading difficulty of a text was assumed based on the class level only. They developed a readability classifier based on 25 information-theoretic features and five lexical features, and found that the combination of all the features yields an accuracy score of 75%. However, their work does not describe the classification models or target classes; moreover, as the domain of the texts is restricted to school textbooks only, the performance of the features on general Bangla documents is yet to be evaluated.

[6] http://www.nctb.gov.bd/

1.6.1 Data preparation

Identifying the structural parameters of a text: We have already discussed that Bangla has many characteristics distinct from English. Bangla has a set of 53 characters including 13 vowels and 40 consonants, and has around 198 distinct consonant clusters with individual graphemic forms. We have considered the following standard structural or syntactic parameters of a text but customized them to accommodate the specificities of Bangla:

1.
Average Sentence Length (ASL): Sentence length accounts for the number of words, separated by spaces or any symbol, in a sentence. Sentences are separated by a purnaviram or a dividing punctuation mark (a question mark or an exclamation symbol). A purnaviram is equivalent to a full stop/period in English, marking the end of the matter. Average Sentence Length is computed by dividing total sentence length by the total number of sentences.

2. Average Word Length (AWL) in terms of visual units: Along with dedicated graphemes for consonants and vowels, Bangla script has some additional graphemes corresponding to the vowel modifiers (diacritics) and consonant conjuncts (jukta-akshars). A Bangla orthographic word is a combination of the following kinds of graphemes:[7]

a. Independent forms of vowel graphemes.
b. Consonant graphemes with or without a vowel diacritic attached to them.
c. Consonant conjuncts with or without a vowel diacritic attached to them.
d. Other modifier symbols indicating nasalization of vowels, and suppression of the inherent vowel.

We consider each kind as a separate visual unit of a word, equivalent to a letter in an English word. The length of a word corresponds to the total number of visual units in that word. Average Word Length is equal to total word length divided by the number of words. An example is given below:

(dibAniShi) = দি (di) + বা (bA) + নি (ni) + শি (Shi); length = 4. All the units here represent the second type (b) of graphemes specified above.

3. Average number of Syllables per Word (ASW): A syllable is a unit of organization for
a sequence of speech sounds. For the syllable count of a Bangla word, we use a Bangla grapheme-to-phoneme converter tool (G2P).[8] The average number of syllables per word is equal to the total syllable count divided by the number of words.

4. Number of PolySyllabic Words (PSW): Polysyllabic words are the words whose syllable count exceeds two.

5. Number of PolySyllabic Words per 30 sentences (PSW30): computed by taking the number of polysyllabic words in the total text, dividing it by the total number of sentences, and then multiplying by 30.

6. Number of Jukta-akshars (JUK) or consonant clusters: A jukta-akshar or consonant conjunct is the circumstance when consonants (vyanjan) occur together in clusters. When a consonant with a halant (hasanta) is followed by another consonant, we consider it one jukta-akshar. More than one consonant with halants, followed by a full consonant, is also considered one jukta-akshar. A consonant occurring at the end of a word, i.e. not followed by any other consonant, is not considered a jukta-akshar. The jukta-akshar count is the total number of jukta-akshars present in the text. Jukta-akshars are an important feature for Bangla because each of the clusters has an orthographic and (in some cases) phonemic representation separate from its constituent consonants. The measure is normalized per 50 sentences so that it can be compared across different texts.

[7] http://en.wikipedia.org/wiki/Bengali_alphabet#Characteristics_of_the_orthographic_word
[8] Downloaded from www.cel.iitkgp.ernet.in

An example of a jukta-akshar is ক্ষ (ksha) = ক (ka) + ্ (halant) + ষ (sha). Therefore, শিক্ষা (shikshA) = শ (sha) + ি (i) + ক (ka) + ্ (halant) + ষ (sha) + া (A) has a jukta-akshar count equal to one.

The last one, i.e. the number of jukta-akshars (JUK), is an important structural parameter for Indian language text.
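The halant-based counting rule in item 6 can be sketched as a short script. This is our illustrative implementation, not the authors' code; in particular, the Unicode consonant range used here is our assumption:

```python
import re

HALANT = "\u09cd"  # Bangla hasanta / virama sign
CONSONANT = "[\u0995-\u09b9\u09dc-\u09df]"  # Bangla consonant letters

# A run of one or more (consonant + halant) pairs followed by a full
# consonant counts as ONE jukta-akshar; a word-final halant with no
# following consonant never matches, as the rule above requires.
JUKTA = re.compile("(?:{c}{h})+{c}".format(c=CONSONANT, h=HALANT))

def count_jukta_akshars(text):
    """Total number of jukta-akshars (consonant conjuncts) in `text`."""
    return len(JUKTA.findall(text))

count_jukta_akshars("শিক্ষা")  # the single cluster ক + halant + ষ counts once
```

Normalizing this count per 50 sentences, as the paper does, then only requires dividing by the sentence count and multiplying by 50.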
Although English has consonant conjuncts, like "pt", in some exceptional cases, the role played by jukta-akshars in Bangla is more widespread and significant. The relation between jukta-akshars and text readability is being examined here for the first time.

1.6.2 Text selection

We could not find any publicly available Bangla dataset annotated in terms of reading difficulty. As a result, we have developed a digital resource pool of Bangla text documents in Unicode encoding that can be used for various NLP tasks such as feature extraction, document analysis etc. The current size of the resource is about 200 documents of length 2000 words, spanning broad categories such as news and literature. For our study, we have randomly selected seventy-five (75) texts. The details are provided in Table 1 below.

1.7 Applying existing English readability measures on Bangla texts

At first, the Bangla texts are analyzed with the established readability indicators for English. The scores obtained by the English models are presented in Table 2 below (for the sake of convenience, only 11 texts are presented). The table depicts the results from four famous and most trusted (Chall 1958; Klare 1963) English readability models, namely: Flesch Reading Ease score (ER 1), Flesch-Kincaid Grade (ER 2), Gunning Fog index (ER 3) and SMOG index (ER 4). Although these readability models have been applied to several languages with satisfactory results (Bamberger and Rabin 1984), in our case out-of-bound results are found. For instance, the reading score of Flesch Reading Ease should lie in the range of 0-100, with a highest possible upper bound
of 120 (Flesch 1948), whereas for the Bangla texts its value is more than 150 for all the texts.

Table 1 Details of texts with category, quantity and size

Source of texts                  Number of texts   Words (approximated in thousands (k))
Literary corpora_classical              9                 19
Literary corpora_contemporary          10                 20
News corpora_general news              10                 15
News corpora_interview                  9                 15
Blog corpora_personal                   9                 19
Blog corpora_official                   9                 20
Article corpora_scholar                10                 22
Article corpora_general                 9                 21

Results of the Flesch-Kincaid Grade Level are not even positive and are smaller than −4, whereas the lowest possible grade in theory is −3.40, occurring in few real situations[9] (Kincaid et al. 1975). The grade levels evaluated by the Gunning Fog Index and the SMOG Index are within their prescribed ranges, but validating them against the actual reading standards of the experiment texts reveals that they are far from the expected grades[10] (Gunning 1968; McLaughlin 1969). For example, a Gunning Fog Index score of 18 requires an educational level equal to post-graduation or more, but as per our analysis almost all of the experimental texts yield an index of more than 18, including the texts from standard 8 school textbooks. From the above results, we can infer that these models are not suitable for measuring the readability of Bangla texts. The disagreement in the values can be attributed to the significant differences in the language structure of English and Bangla, as discussed in our earlier sections. Therefore, the above observations motivated us to develop a new model of text readability specifically for the Bangla language. Accordingly, we have performed user studies to collect readability judgments on a large number of Bangla texts from different groups of users. This collected dataset is used to develop computational models of the reading difficulty of Bangla texts.
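For reference, the four indices reported as ER 1-ER 4 can be computed directly from a text's aggregate statistics. The sketch below (function names are ours) implements the formulae listed in section 1.3:

```python
import math

def flesch_reading_ease(asl, asw):
    """ER 1. asl: average sentence length; asw: average syllables per word."""
    return 206.835 - 1.015 * asl - 84.6 * asw

def flesch_kincaid_grade(asl, asw):
    """ER 2."""
    return 0.39 * asl + 11.8 * asw - 15.59

def gunning_fog(asl, pct_hard_words):
    """ER 3. pct_hard_words: percentage of hard (polysyllabic) words."""
    return 0.4 * (asl + pct_hard_words)

def smog(polysyllables_per_30_sentences):
    """ER 4."""
    return 3 + math.sqrt(polysyllables_per_30_sentences)

flesch_reading_ease(15, 1.5)  # ≈ 64.71, inside the 0-100 band for English prose
```

For the Bangla texts of Table 2, the same ER 1 formula yields scores above 150, outside the 0-100 band it was designed for, which is the out-of-bound behaviour discussed above.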
The user survey is described in the following section.

1.8 User study

1.8.1 Participants

The choice of target reader group is a significant part of a study on text readability, due to the highly subjective nature of text difficulty. Fifty native speakers of Bangla participated in the user study. To reflect the reading difficulty experienced by the average Bangla native population, we selected the participants based on two criteria: Socio-Economic Classification (SEC) and proficiency in the Bangla language. The socio-economic conditions and the educational backgrounds of the participants are provided in Fig. 2a and b below, in the form of pie charts. The socio-economic classification was obtained according to the guidelines provided by The Market Research Society of India (MRSI) in their SEC classification manual.11 MRSI has defined 12 socio-economic strata, A1 to E3, in decreasing order. As can be inferred from the chart, the participants range from classes B1 to C2, which represent the average or medium socio-economic classes. Moreover, a correspondence with the mean Household Potential Index (HPI) shows a distribution from 18.7 to 4.4, which also represents the medium range of HPI. To capture language skill, each native speaker was asked to rate his/her proficiency in Bangla on a 1–5 scale (1: very poor and 5: very strong). As can be seen from Fig. 2c, the majority of the selected participants have medium to poor proficiency.

9 http://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readability_tests
10 http://en.wikipedia.org/wiki/Gunning_fog_index
11 http://imrbint.com/research/The-New-SEC-system-3rdMay2011.pdf

In the backdrop of a country like India, it is not exceptional that a person pursuing graduation or higher education is from a medium to low
economic background and is not so proficient in the native language. The choice of participants representing the average population partially met our long-term goal of modeling the reading difficulty in Bangla of the backward socio-economic classes, an objective we will pursue in our subsequent readability studies.

Table 2 English readability models applied to Bangla

Text No   ER 1   ER 2   ER 3   ER 4
1         170    −4     24     19
2         172    −5     23     18
3         173    −5     21     17
4         183    −9     13     11
5         173    −5     21     17
6         178    −7     16     13
7         174    −6     20     15
8         177    −7     20     15
9         161    −1     30     23
10        168    −4     25     19

Fig. 2 Participants' details: a (top-left) education and age, b (top-right) social and economic background, c (bottom) confidence in proficiency in Bangla

1.8.2 Procedure

Each participant was presented with the 50 texts mentioned above. They were instructed to read each text carefully. Upon completion, they were asked the question: "how easy was it for you to understand/comprehend the text?", and they answered on a scale of 10, where '1' stands for very easy to understand/comprehend and '10' for extremely difficult. The averages of all users' scores against the 75 documents are presented in the form of a histogram (Fig. 3). It can be seen that the documents were chosen over a broad spectrum to incorporate different levels of reading difficulty. The user ratings were validated with 2σ validation around the mean with a 99 % confidence interval.

As a simple way to check the degree of variation of the different linguistic features against the evaluation done by the users, Spearman's rank correlation (Zar 1998) has been computed between them. Figure 4 presents the correlation between the features and the user ratings. From Fig. 4, it is visible that the factor that correlates most with the users' perception of the hardness of a text is the number of jukta-akshars present per 50 sentences in the text.
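The feature-rating agreement shown in Fig. 4 can be reproduced with Spearman's rank correlation, i.e. the Pearson correlation of the two rank vectors. A minimal pure-Python sketch (the tie-averaging scheme shown is the standard one; the input vectors in the test are invented placeholders, not the study's data):

```python
def ranks(values):
    # Assign 1-based ranks; tied values share the mean of their rank positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation computed on the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Applied to each text parameter against the averaged user ratings, this yields the per-feature correlations of the kind plotted in Fig. 4.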
Next come the number of polysyllabic words, followed by average word length in terms of visual units, and then the average number of syllables per word. Interestingly, the average sentence length, although found to be an influential factor in text readability (Crossley et al. 2007), comes only fourth in terms of correlation. After analyzing the correlation of the textual parameters with user perception, we next move on to computational model building. One point must be noted here: although attempts are being made to develop readability measures that are language independent, those measures comprise two parameters, average sentence length and average word length (Grzybek 2010). Therefore, we need to observe whether these two parameters, in their conventional meanings, are at all significant for Bangla text readability.

Fig. 3 Distribution of user annotations

1.9 Computational models for text readability prediction

Differentiating or grading texts according to their reading level can be viewed as a classification problem as well as a regression problem. We have analyzed both approaches in the following subsections. The classification approach differs from the regression approach in certain ways, as the two have different objectives. In the following sections, we elaborate the statistical and machine learning techniques as well as the implications of the results obtained by their application to our problem.
We feel that explanations of the background objectives of the techniques are necessary, as these objectives will guide the interpretation of the results from the corresponding methods.

Regression analysis (Montgomery et al. 2007) is an estimation problem, as it predicts the absolute value of the dependent variable based on the independent terms; it attempts to minimize the sum of squared errors (SSE).12 The goodness of a regression can be judged by the coefficient of determination (R2).13 Most of the traditional readability indices mentioned in section 2 have been developed using regression. On the other hand, the Support Vector Machine or SVM (Cortes and Vapnik 1995; Manning et al. 2008) is a large-margin classifier that attempts to partition the test data into two distinct groups with as much distance as possible, based on an earlier training set. SVM can be beneficial compared to regression for the following reasons: SVM (especially the soft-margin classifier) has a strong theoretical basis in regularization, which prevents the classifier from overfitting the data, and it also performs well when there are many attributes and few cases with which to train the model. As discussed above in related works, the SVM method is often found to be more effective than regression in predicting the difficulty level of a text (Petersen and Ostendorf 2009).

Fig. 4 Correlation of text parameters with user scores

12 http://en.wikipedia.org/wiki/Mean_squared_error
13 http://en.wikipedia.org/wiki/Coefficient_of_determination

When applied to text difficulty analysis, the regression technique assigns an absolute hardness score to the target document, whereas SVM classification determines whether the target document belongs to the hard or the easy class.
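The regression side of this comparison reduces to ordinary least squares. A minimal one-parametric sketch, together with the SSE, R2 and RMSE quantities used to judge the fits later in this section (the data points are invented placeholders, not the study's):

```python
def linear_fit(xs, ys):
    # Ordinary least squares for y = a + b*x (a one-parametric linear fit).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def goodness(xs, ys, a, b):
    # SSE and coefficient of determination R^2 for the fitted line.
    my = sum(ys) / len(ys)
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum((y - my) ** 2 for y in ys)
    return sse, 1.0 - sse / sst

def rmse(pred, actual):
    # Root mean square error, the validation measure used on held-out texts.
    return (sum((p - t) ** 2 for p, t in zip(pred, actual)) / len(pred)) ** 0.5
```

Two- and three-parametric linear fits generalize this to a small normal-equations system; the same SSE, R2 and RMSE definitions apply unchanged.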
Therefore, which of the two techniques to use depends on the intended application.

1.9.1 Model development by regression

Training For the training of the regression models, we have used 50 out of the 75 texts. We have examined the possible one-parametric (linear and hyperbolic), two-parametric (linear) and three-parametric (linear) fits. Table 3 documents the short-listed models (including the Flesch and SMOG equivalences, model 1 and model 2) for which the fitting is optimal (low SSE and high R2) within each category. Although we have calculated the one-parametric hyperbolic fits on each text parameter, their R2 turn out to be negative, suggesting that this kind of model is not fit for our cases.

Validation We have applied the six short-listed readability models (Table 4) to the remaining 25 of the 75 texts for validation purposes. The root mean square error (RMSE) has been taken as the measure of the accuracy of the models. The RMSE of the predictions made by our readability models, compared to the actual scores given by the users, is summarized below in Table 4. Figure 5 presents the comparison of the performances of the six different models. At the training phase, model 3, model 4, model 5 and model 6 have comparable results, but during validation model 3 and model 4 clearly have significantly better results than the other two models (refer to Table 4). In addition, it is to be noticed that these models have the number of jukta-akshars per 50 sentences (JUK), average word length in terms of visual units (AWL) and number of polysyllabic words (PSW) as parameters with positive coefficients, and the number of polysyllabic words per 30 sentences (PSW30) with a negative coefficient. Moreover, in model 3, PSW has a very small coefficient, implying
that it contributes comparatively little to text difficulty. The equivalences of the Flesch Reading Ease (model 1) and the SMOG index (model 2), which rely on ASL, ASW and sqrt(PSW30) respectively, are not at the top, though these two indices have a significant impact for English. Therefore, it is established that the efficient readability models for Bangla do not have ASL and ASW as their two parameters. Consequently, Bangla text readability cannot be considered within the research towards the development of language independent readability models, which rely on the parameters ASL and ASW (Grzybek 2010); instead, we need novel readability models for Bangla. Accordingly, we propose these two models (model 3 and model 4) as our readability metrics for Bangla, namely: ReadabilityinBangla1 (RB1) and ReadabilityinBangla2 (RB2). Fig. 6 presents a visualization of the predictions obtained by the two metrics along with the user feedback for the 25 validation texts; Figs. 7 and 8 appear in the following subsection.

Table 3 Tentative readability metrics in Bangla

Model     Expression                                 R2     SSE
Model 1   −10.4 + 0.11*ASL + 5.22*ASW               0.58   1.77
Model 2   0.44*sqrt(PSW30) − 1.79                   0.53   N/A
Model 3   −5.23 + 1.43*AWL + 0.01*PSW               0.80   0.82
Model 4   1.15 + 0.02*JUK − 0.01*PSW30              0.78   0.9
Model 5   5.37 + 0.01*PSW − 2.29*ASW + 0.01*JUK     0.83   0.83
Model 6   5.71 + 0.18*ASL − 1.49*ASW + 0.01*PSW     0.83   0.84

1.9.2 Classification using support vector machines (SVM)

In this section, we apply the binary SVM classifier to classify Bangla text documents into two classes, namely hard and easy texts. Since the median value of the user ratings is 4.9, we have labeled the texts having a user rating less than 4.9 as easy (class −1) and the rest as hard (class 1). Therefore, the feature space x (x ∈ R^n) of our SVM consists of a 75×6 matrix containing the 6 features (mentioned in the Data Preparation section) of each of the 75 texts, and the binary mapping of the user evaluations corresponding to the texts represents the label space.
Given a training set of instance-class pairs (x_i, y_i), i = 1…l, where x_i ∈ R^n and y ∈ {1, −1}^l, the general formulation of an SVM is (Manning et al. 2008):

minimize w^T w + C Σ_i ξ_i, where w = weight vector, C = regularization term   (1)

subject to y_i (w^T Φ(x_i) + b) ≥ 1 − ξ_i, ξ_i (slack variable) ≥ 0   (2)

Table 4 Summary of validation results

Model   1      2      3      4      5      6
RMSE    1.32   1.19   0.54   0.67   1.08   1.27

Table 5 Results of SVM classification on Bangla text using different kernels

                                          Linear   Polynomial   Radial basis   Sigmoid
Fraction of texts correctly classified    80 %     75 %         56.25 %        56.25 %
Multiple correlation (R)                  .87      .85          .67            .67
(C=10; d=2; r=0; γ=1/6; ξ_i=0.01)
Fraction of texts correctly classified    75 %     73 %         56.25 %        56.25 %
Multiple correlation (R)                  .81      .79          .65            .65
(C=10; d=2; r=0; γ=1/6; ξ_i=0.001)
Fraction of texts correctly classified    70 %     70 %         55 %           55 %
Multiple correlation (R)                  .79      .72          .63            .64
(C=100; d=2; r=0; γ=1/6; ξ_i=0.01)

Fig. 5 Visualization of performance of different regression models

Fig. 6 Comparison of model performances with user ratings

We have divided the dataset into 4 parts, each having 4 texts, and have performed a 4-fold cross-validation. We have used four types of kernel function on the data, using the LIBSVM (Chang and Lin 2011) software:

1. linear: K(x_i, x_j) = x_i^T x_j
2. polynomial: K(x_i, x_j) = (γ x_i^T x_j + r)^d, d = degree, γ = 1/(number of features)
3. radial basis function: K(x_i, x_j) = exp(−γ ||x_i − x_j||^2)
4. sigmoid: K(x_i, x_j) = tanh(γ x_i^T x_j + r), r = coefficient

Fig. 7 Accuracy of different kernels with respect to the SVM parameters

Fig. 8 Comparing the coefficient of multiple correlation R and the SVM parameters for different kernels

The table above presents the results of the SVM classifications with different sets of SVM parameters, as defined in Eqs. (1) and (2), along with the different types of kernels. To evaluate the quality of the classifications, the multiple correlation (R) and the percentage of texts accurately classified are used. R denotes the extent to which the predictions are close to the actual classes, and its square (R2) indicates the percentage of dependent-variable variation that can be explained by the model. As can be seen from the results above, about three-fourths of the documents were classified correctly using a linear or polynomial kernel of degree 2, when the regularization term C = 10 and the value of the slack variable ξ_i ≤ 0.01. Therefore, we can conclude that hard- and easy-to-comprehend texts are linearly classifiable in the 6-dimensional document feature space mentioned in section 3. Compared to the regression approach, in the case of SVM it was not necessary to choose a subset of the text features in order to obtain good results.

1.10 Support vector regression (SVR)

Support vector regression (SVR) extends the SVM technique to estimation problems such as regression (Drucker et al. 1997; Basak et al. 2007; Smola and Schölkopf 2004). We have examined the performance of SVR in text readability determination. Linear and polynomial (degree 2) kernels have been used, with the epsilon loss function value set at 0.1. We have used two different combinations of features: first, all six features were considered, and then only the three features (AWL, PSW, and JUK) which were found to be the most influential in the linear least squares regression results (refer to the section "model development by regression") were taken. The results are presented in Table 6.
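The four kernels listed above, used in both the SVM and SVR experiments, are simple functions of the feature vectors. A minimal pure-Python sketch (γ = 1/6 corresponds to the six-feature space; the example vectors in the test are invented placeholders, not the study's features):

```python
import math

def linear_k(xi, xj):
    # K(x_i, x_j) = x_i^T x_j
    return sum(a * b for a, b in zip(xi, xj))

def poly_k(xi, xj, gamma, r, d):
    # K(x_i, x_j) = (gamma * x_i^T x_j + r)^d
    return (gamma * linear_k(xi, xj) + r) ** d

def rbf_k(xi, xj, gamma):
    # K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(xi, xj)))

def sigmoid_k(xi, xj, gamma, r):
    # K(x_i, x_j) = tanh(gamma * x_i^T x_j + r)
    return math.tanh(gamma * linear_k(xi, xj) + r)
```

LIBSVM evaluates exactly these kernel forms internally; its parameters C, d, r and γ correspond to the regularization term, degree, coefficient and γ used in Table 5.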
As can be deduced from the analysis of the R2 and RMSE values, SVR performs more poorly than least squares regression when applied to text difficulty analysis (refer to Tables 4 and 5). Fig. 9 presents a comparative chart of the RMSE values obtained by least squares regression and by SVR.

Table 6 Results of SVR

                 Linear kernel        Polynomial kernel
                 RMSE     R2          RMSE     R2
All features     1.6      0.02        2.2      0.63
Three features   1.1      0.28        23.3     0.47

As the regression and classification approaches address the problem of text readability in different ways, it is difficult to compare their outputs in absolute terms. For the sake of convenience, however, we have presented a comparison of the goodness of fit (R2) for linear regression, SVM binary classification, SVM regression with six features, and SVM regression with three features (refer to Fig. 10). The two series corresponding to linear regression represent the two readability formulae developed, and the series corresponding to SVM and SVR denote the performances of the linear and polynomial kernels.

Fig. 9 Comparison of RMSE for linear and SVM regression

Fig. 10 Comparison of goodness of fit for the different proposed readability models

As can be seen from the charts, both linear regression and the binary SVM classifier have performed efficiently in achieving their desired objectives and have comparable R2 values. Whether to use regression or classification in determining the hardness of a text depends on the intended application. Moreover, from Figs. 9 and 10 it can also be inferred that, although the SVR models with a polynomial kernel have comparable R2, they incur larger errors than the linear regressions.

1.10.1 Readability prediction system

In the introduction, we declared that our aim is to incorporate the models and findings from our study into an input-output system that can be easily used for measuring the difficulty of a given Bangla document. To achieve this goal, we have developed a system (refer to Fig. 11 below) which, given a Bangla text document, outputs the values of the text attributes as well as the predictions of the two Bangla regression models and the result of the binary SVM classification. SVR has not been incorporated due to its poor performance, as described above. The scores from the English readability formulae have also been incorporated for the sake of transparency.

Fig. 11 Snapshot of the first version of the readability prediction system

At present, our system does not contain any provision to incorporate specific user backgrounds. We are currently expanding the study to different user groups, and subsequently the system will be able to predict the relative difficulty of a text depending on the user characteristics.

1.11 Conclusion and perspective

In this paper, we have presented different techniques to predict the comprehension difficulty of Bangla texts. We have used syntactic and lexical text features, the majority of which have not been used earlier to model text readability in Bangla. Through regression, we have established two readability models for Bangla based on average word length in terms of visual units, the number of polysyllabic words and the number of jukta-akshars.
Similarly, for SVM, we have shown that the linear and polynomial kernel functions deliver the best results in predicting text difficulty; SVM predicts the class label (easy or hard) of a text based on all of the features. We have also combined the two approaches in the support vector regression (SVR) method, but SVR has been found to perform less accurately than the traditional regression technique. In the course of the paper, we have shown that although the classical English readability formulas correlate with the user evaluation, they are not useful as text difficulty predictors for Bangla. An earlier study of Bangla readability (Das and Roychoudhury 2006) compared one- and two-parametric fits through regression for a miniature model, but it did not consider parameters like AWL and JUK, which we have found to be among the major players. Moreover, according to our regression analysis, any one-parametric hyperbolic fit other than the SMOG equivalence is not appropriate, as they generate negative values of R2. To the best of our knowledge, this is the first work in Bangla that attempts to develop text readability measures using regression and machine learning methods, and compares among them. In future, we plan to extend the binary SVM to a multi-class classifier (e.g., easy, moderate, hard) and augment the feature space with more text properties such as part-of-speech statistics. We are also working on devising readability
predictors targeted towards different age groups, such as school students, and different social groups, such as first-generation learners from economically backward strata. We would also like to study the effects of other textual features, like semantics and coherence, on the readability of Bangla texts.

References

Agnihotri, R. K. (2008). Orality and literacy. In Language in South Asia, p. 271.
Bamberger, R., & Rabin, A. T. (1984). New approaches to readability: Austrian research. The Reading Teacher, 37(6), 512–519.
Basak, D., Pal, S., & Patranabis, D. C. (2007). Support vector regression. Neural Information Processing-Letters and Reviews, 11(10), 203–224.
Benjamin, R. (2012). Reconstructing readability: Recent developments and recommendations in the analysis of text difficulty. Educational Psychology Review, 24, 1–26.
Britton, B., & Gülgöz, S. (1991). Using Kintsch's computational model to improve instructional text: Effects of repairing inference calls on recall and cognitive structures. Journal of Educational Psychology, 83(3), 329.
Buswell, G. (1937). How adults read. University of Chicago.
Chakraborti, P. (2003). Diglossia in Bengali. PhD thesis, University of New Mexico.
Chall, J. (1958). Readability: An appraisal of research and application. Number 34. Ohio State University.
Chall, J. (1995). Readability revisited: The new Dale-Chall readability formula, volume 118. Cambridge: Brookline Books.
Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3), 27.
Collins-Thompson, K., & Callan, J. (2004). A language modeling approach to predicting reading difficulty. In Proceedings of HLT/NAACL, volume 4.
Collins-Thompson, K., & Callan, J. (2005). Predicting reading difficulty with statistical language models. Journal of the American Society for Information Science and Technology, 56(13), 1448–1462.
Cortes, C., & Vapnik, V. (1995). Support-vector networks.
Machine Learning, 20(3), 273–297.Cotugna, N., Vickery, C., & Carpenter-Haefele, K. (2005). Evaluation of literacy level of patient educationpages in health-related journals. Journal of Community Health, 30(3), 213–219.Crossley, S., Dufty, D., McCarthy, P., & McNamara, D. (2007). Toward a new readability: A mixed modelapproach. In Proceedings of the 29th annual conference of the Cognitive Science Society, pp. 197–202.Dale, E., & Chall, J. (1948). A formula for predicting readability. Educational research bulletin, pp. 11–28.Das, S., & Roychoudhury, R. (2006). Readability modelling and comparison of one and two parametric fit: acase study in bangla*. Journal of Quantitative Linguistics, 13(01), 17–34.Drucker, H., Burges, C. J., Kaufman, L., Smola, A., & Vapnik, V. (1997). Support vector regression machines.Advances in Neural Information Processing Systems, 9, 155–161.DuBay, W. (2004). The principles of readability. Impact Information, 1–76.DuBay, W. (2007). Smart Language: Readers, Readability, and the Grading of Text. ERIC.Ferguson, C. A. (1959). Diglossia. Word-Journal of the International Linguistic Association, 15(2), 325–340.Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32(3), 221.Foltz, P., Kintsch, W., & Landauer, T. (1998). The measurement of textual coherence with latent semanticanalysis. Discourse Processes, 25(2–3), 285–307.Fry, E. (1968). A readability formula that saves time. Journal of Reading, 11(7), 513–578.Graesser, A., McNamara, D., & Kulikowich, J. (2011). Coh-metrix providing multilevel analyses of textcharacteristics. Educational Researcher, 40(5), 223–234.Graesser, A., McNamara, D., Louwerse, M., & Cai, Z. (2004). Coh-metrix: Analysis of text on cohesion andlanguage. Behavior Research Methods, 36(2), 193–202.Gunning, R. (1968). The technique of clear writing. NewYork: McGraw-Hill.Heilman, M., Collins-Thompson, K., and Eskenazi,</s>