Given the features F1, ..., Fn assumed for a sentence s ∈ S, Bayes' rule computes the probability that s belongs to Sd, as in (3.2), (3.3), where Sd is the set of sentences related to the document element d and S is the set of all sentences in the document D. We used (3.2), (3.3) to compare the sentences correctly detected by the model against the set of actually desired sentences. From the 12 documents, the content-based features were collected manually, and for all documents the total numbers of "decision", "agenda", and query sentences were counted by hand. The model then extracted the desired information using the three features, with 10-fold cross-validation. Precision, recall, and F1 are measured to evaluate the result, defined as in (3.6), (3.7), (3.8) and [12], where Ta = set of data to be detected, Td = set of detected data, True Positive = Ta ∩ Td, Total Predicted Positive = Td, and Actual Positive = Ta.

Table 5.1 shows the precision, recall, and F1 for eight documents chosen at random from the set of 29, where a perfect structure with high precision and high recall means all results were categorized appropriately. All recall values are 1, which means every relevant "decision" was extracted; recall alone, however, cannot tell how many unrelated "decisions" were also extracted. The precision scores tell us that most of the extracted "decisions" are relevant.

Table 5.1. Precision, Recall and F1 for "Decision" Detection for 8 Random Documents
(Method 1: Query-based; Method 2: Query- and Content-based; Method 3: Query-, Content- and Context-based)

Doc   Method 1               Method 2               Method 3
      Pr     Rc    F1        Pr     Rc    F1        Pr     Rc    F1
1     .879   1     .935      .659   1     .795      .897   1     .945
2     .952   1     .976      .955   1     .977      .857   1     .923
3     .727   1     .842      .786   1     .880      1      1     1
4     .583   1     .737      .917   1     .957      .861   1     .925
5     .806   1     .892      .766   1     .867      1      1     1
6     .893   1     .943      .825   1     .903      .821   1     .902
7     .893   1     .943      .718   1     .836      .964   1     .982
8     .769   1     .870      .929   1     .963      .808   1     .894
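The set-based definitions of (3.6)-(3.8) can be computed directly. The following sketch uses hypothetical variable names (`actual` for Ta, `detected` for Td); it is an illustration of the standard formulas, not the thesis's implementation:

```python
def precision_recall_f1(actual, detected):
    """Compute Pr, Rc, F1 from the actual set Ta and the detected set Td,
    following (3.6)-(3.8): TP = Ta ∩ Td, predicted positives = Td,
    actual positives = Ta."""
    tp = len(actual & detected)                       # true positives
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 8 truly relevant "decision" sentences, 10 detected, all 8 among them.
ta = set(range(8))        # sentence ids that should be detected
td = set(range(10))       # sentence ids the model detected
pr, rc, f1 = precision_recall_f1(ta, td)
print(round(pr, 3), rc, round(f1, 3))  # 0.8 1.0 0.889
```

This reproduces the pattern seen in Table 5.1: recall of 1 whenever every relevant sentence is detected, with precision penalizing the extra detections.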
Fig. 5.1 plots the precision over the eight documents for the three methods of mining the "decision": the Query-based method (Feature 1), the Query-based with Content-based method (Features 1 and 2), and the Query-based with Content-based and Context-based method (Features 1, 2 and 3). The differences between them show the methods moving gradually closer to the perfect set of "decisions". Among the eight documents, two reach precision, recall, and F1 of 1 with the third method (Features 1, 2 and 3), i.e. 100% accuracy, and one more document is close to 1. Table 5.2 shows the precision, recall, and F1 of the total number of detected "decisions" for all 29 documents using Features 1, 2 and 3: the precision is 87% with a 92% F1 measure against the set of exact "decisions". The accuracy [12] is obtained for "decision" as:

Accuracy = All detected Decisions / Total Decisions    (5.1)

Here, the achieved accuracy is 86% for all 29 documents.

Fig. 5.1: Precisions of the Three Methods for 8 Random Documents (Feature 1; Features 1 and 2; Features 1, 2 and 3)

Table 5.2. Total Set of Data with Precision, Recall and F1

Total number of Agenda               474
Total number of detected Agenda      475
Total number of Decision             698
Total number of detected Decision    607    (Pr .870, Rc 1, F1 .920)
Total number of documents            29

The numbers of sentences extracted as "decision" using the Query-based with Content-based method (Features 1 and 2) and the Query-based with Content-based and Context-based method (Features 1, 2 and 3), together with the total number of "decisions" in the 29 documents, are shown separately in Fig. 5.2, which shows how close we come to the perfect set of "decisions" for the 29 sets of data.

Fig. 5.2: Total Decisions Detected by the Methods - A) Content-Based Method, B) Context-Based Method Merged with the Content-Based Method, and C) Total Decisions Counted Manually for 29 Documents

Moreover, the query "Mechanical", in Bangla "মেকানিকযাল", was searched in the 29 documents, and only one document contains the query, in 16 sentences, as shown in Fig. 5.3. In Fig. 5.3 it can be seen that all sentences except the extracted sentences containing the query word have a BM25 score of zero, and the sentence weight behaves similarly to BM25 after the top 16 sentences. We plotted the top 20 sentences, but only the 16 sentences that contain the query have a BM25 score above zero.

Fig. 5.3: BM25 Score and Sentence Weight (query word: "মেকানিকযাল")
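The behaviour seen in Fig. 5.3, where sentences without the query term score exactly zero, follows from the Okapi BM25 formula [42]. The sketch below scores tokenized sentences against a single query term; the parameter values (k1 = 1.5, b = 0.75) and the tokenized-sentence input format are illustrative assumptions, not taken from the thesis:

```python
import math

def bm25_scores(sentences, query, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized sentence for one query term.
    Sentences that never contain the term get a score of zero, as in Fig. 5.3."""
    n_docs = len(sentences)
    avgdl = sum(len(s) for s in sentences) / n_docs   # average sentence length
    df = sum(1 for s in sentences if query in s)      # sentence frequency of term
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
    scores = []
    for s in sentences:
        f = s.count(query)                            # term frequency in sentence
        scores.append(idf * f * (k1 + 1)
                      / (f + k1 * (1 - b + b * len(s) / avgdl)))
    return scores

# Toy example: only the first and third sentences contain the query word.
sents = [["মেকানিকযাল", "বিভাগ"],
         ["সভা", "শুরু"],
         ["মেকানিকযাল", "সিদ্ধান্ত", "গৃহীত"]]
print([round(x, 3) for x in bm25_scores(sents, "মেকানিকযাল")])
```

Because the term frequency f multiplies the whole numerator, any sentence without the query term scores 0, which is exactly why only 16 of the top 20 sentences in Fig. 5.3 have a positive score.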
Moreover, we verified manually that exactly 16 sentences in that single file, out of the set of 29 documents, actually carry the above keyword. Fig. 5.4 shows an example of "decision" detection in part of a single document.

Fig. 5.4: An Example of "Decision" Detection in a Single Document

5.2.2 Finding User Query from the Extracted Decision Pool Analysis

For user query analysis, we experimented with 29 resolutions of the Academic Council meetings of KUET: we extracted the data and then collected the decision list using the method. The analysis is divided into three parts: keyword detection, query detection, and categorization. We also measured the cosine similarity of the sentences and ranked them with the TextRank algorithm [38], [52] to get the most informative sentences; for the decision sentences, a similarity matrix is built to find the most relevant sentences. Fig. 5.5 shows the top-ranked TextRank sentences, from equations (3.9), (3.10), using cosine similarity, where the red words are the keywords extracted by tf-idf and the RAKE algorithm. Most of these words come from our 20-keyword knowledge base, and TextRank also verifies the common high-frequency words from both tf-idf and RAKE. From TextRank we ranked all the lines, and the values were then passed to the Gaussian model of equations (3.11), (3.12), (3.13), and (3.14) for classification. Three clusters were made to represent the graph, shown in Fig. 5.6.

Fig. 5.5: Top-ranked Sentences by TextRank

Here the blue and green groups overlap with each other: many lines hold the same words as keywords and hence share a cluster, and the cosine similarities of the words are also very close to each other, since this is a specific domain and most topics are expressed with similar words. K-means clustering of the decision lines, however, produced two clusters in an unsupervised manner, shown in Fig. 5.7; it likewise concludes that these sentences are very similar, as seen in Fig. 5.6.

Fig. 5.6: Gaussian Curve for Three Clusters
Fig. 5.7: K-Means with Two Clusters

Table 5.3 shows the precision, recall, and F1 for decision sentences from the set of 29 documents for the single query "মেকানিকযাল", where a perfect structure with high precision and high recall means all results were categorized appropriately. Here the recall is 1, the precision is 92% with a 96% F1 measure, and the accuracy follows (5.1). The model then extracted the desired information for different keywords using the three features, with 10-fold cross-validation; precision, recall, and F1 are measured to evaluate the result, defined as in (3.6), (3.7), (3.8) and [11].

Table 5.3.
Total Set of Data for One Keyword ("মেকানিকযাল") with Precision, Recall and F1

Total number of 'মেকানিকযাল'-related lines among the decision lines                          93
Total number of detected 'মেকানিকযাল'-only lines among the decision lines (single query)     02
Total number of detected 'মেকানিকযাল'-related lines among the decision lines                 87
Pr .925    Rc 1    F1 .961

However, when the query "Mechanical" ("মেকানিকযাল") is explored in the 29 documents, the query itself is found in only a single document, in 16 lines. Using the three features described here, we found all the words related to "মেকানিকযাল" and obtained 87 corresponding lines from the whole decision pool.

5.3 Discussion

In this chapter we present the experimental outcomes and analyses of the proposed schemes. As there is no existing method for Bangla knowledge extraction from official documents, we compared our schemes using precision and recall and against our own finding system, and we also performed categorization with a classification model. In each case we found relevant results and hence validated the system, showing that the scheme outperforms exact-word-extraction schemes. An accuracy of 86% is achieved on the sample dataset for detecting decisions in the PDF documents.

CHAPTER VI
Conclusions

6.1 Summary

Nowadays, the maintenance and analysis of the large volume of Bangla files being produced has become a key concern in different aspects of data computing such as Big Data, and that volume keeps growing day by day as the required size of data files expands gradually. On the other hand, tools for effective knowledge extraction and for finding the required data within this large volume are few, since the Bangla language is extremely complex and comparatively little work has been done on it. Furthermore, Bangla is the sixth most widely spoken language in the world, so it is very important to handle large volumes of Bangla documents efficiently and in a meaningful way. Extracting the correct information from a huge set of Bangla files is very significant for decision making: vast numbers of legal Bangla files are produced by professionals, the judiciary, and academicians, and it is very difficult to find previous or specific information in them. In this research we have described a new model, a domain-specific information extraction system for Bangla official documents, in which the user finds his or her desired information using a set of knowledge-based methods and retrieves the decisions and the topics of discussion taken in meetings written in Bangla text. In this work, we have presented the semantic and other features with natural language processing. The results, reported with precision and recall, show that the proposed algorithm achieves high performance; the accuracy on this sample dataset for "decision" extraction is 86%. The knowledge is also classified with keywords from the documents. The major disadvantages of the system, however, are that the knowledge base is for a specific domain with a small dataset and that it does not extract information from tabular data.
Moreover, the keyword knowledge base is small, and the system does not extract information using vocabulary and morphological investigation of words.

6.2 Recommendations for Future Works

Since the proposed model is applied to the official Bangla PDF AC files from KUET for knowledge extraction, text, HTML, or DOC files could use the scheme as well. More specifically, the scheme can be applied to Big Data; one important future direction of the work is that the scheme can easily be implemented for discovering various patterns with semantic analysis; and it would be very efficient to apply it to synopsis generation and summarization for a specific domain.

REFERENCES

1. Weiss, G.M. and Davison, B.D., Data Mining. Handbook of Technology Management, H. Bidgoli, 2010.
2. PDF in 2016: Broader, deeper, richer, https://www.pdfa.org/pdf-in-2016-broader-deeper-richer, Accessed on August 10, 2018.
3. Staar, P.W., Dolfi, M., Auer, C. and Bekas, C., Corpus Conversion Service: A machine learning platform to ingest documents at scale. arXiv preprint arXiv:1806.02284, 2018.
4. Fayyad, U., Piatetsky-Shapiro, G. and Smyth, P., From data mining to knowledge discovery in databases. AI Magazine, 17(3), p.37, 1996.
5. Baker, J.B., Sexton, A.P., Sorge, V. and Suzuki, M., Comparing approaches to mathematical document analysis from PDF. In Document Analysis and Recognition (ICDAR), 2011 International Conference on (pp. 463-467). IEEE, 2011.
6. Zanibbi, R. and Blostein, D., Recognition and retrieval of mathematical expressions. International Journal on Document Analysis and Recognition (IJDAR), 15(4), pp.331-357, 2012.
7. Mandal, S., Chowdhury, S.P., Das, A.K. and Chanda, B., Automated detection and segmentation of table of contents page from document images. In Document Analysis and Recognition, 2003. Proceedings. Seventh International Conference on (pp. 398-402). IEEE, 2003.
8. Wu, Z., Das, S., Li, Z., Mitra, P. and Giles, C.L., Searching online book documents and analyzing book citations. In Proceedings of the 2013 ACM Symposium on Document Engineering (pp. 81-90). ACM, 2013.
9. Chiu, P., Chen, F. and Denoue, L., Picture detection in document page images. In Proceedings of the 10th ACM Symposium on Document Engineering (pp. 211-214). ACM, 2010.
10. Kataria, S., Browuer, W., Mitra, P. and Giles, C.L., Automatic extraction of data points and text blocks from 2-dimensional plots in digital documents. In AAAI (Vol. 8, pp. 1169-1174), 2008.
11. Bhatia, S. and Mitra, P., Summarizing figures, tables, and algorithms in scientific publications to augment search results. ACM Transactions on Information Systems (TOIS), 30(1), p.3, 2012.
12. Tuarob, S., Bhatia, S., Mitra, P. and Giles, C.L., AlgorithmSeer: A system for extracting and searching for algorithms in scholarly big data. IEEE Transactions on Big Data, 2(1), pp.3-17, 2016.
13. Das, A. and Bandyopadhyay, S., Theme detection an exploration of opinion subjectivity. In Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on (pp. 1-6). IEEE, 2009.
14. Islam, M.S., Research on Bangla language processing in Bangladesh: progress and challenges. In 8th International Language & Development Conference (pp. 23-25), 2009.
15. Molla, M.K.I. and Talukder, K.H., Bangla number extraction and recognition from document image. 5th ICCIT 2002, pp.200-206, 2002.
16. Sarkar, K., Bengali text summarization by sentence extraction. arXiv preprint arXiv:1201.2240, 2012.
17. Weka, https://www.cs.waikato.ac.nz/ml/weka/, Accessed on August 10, 2018.
18. Suzuki, M., Tamari, F., Fukuda, R., Uchida, S. and Kanahori, T., INFTY: an integrated OCR system for mathematical documents. In Proceedings of the 2003 ACM Symposium on Document Engineering (pp. 95-104). ACM, 2003.
19. Baker, J.B., Sexton, A.P. and Sorge, V., A linear grammar approach to mathematical formula recognition from PDF. In International Conference on Intelligent Computer Mathematics (pp. 201-216). Springer, 2009.
20. Baker, J.B., Sexton, A.P. and Sorge, V., Faithful mathematical formula recognition from PDF documents. In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems (pp. 485-492). ACM, 2010.
21. Anderson, R.H., Syntax-directed recognition of hand-printed two-dimensional mathematics. In Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium (pp. 436-459). ACM, 1967.
22. Mandal, S., Chowdhury, S.P., Das, A.K. and Chanda, B., Automated detection and segmentation of table of contents page from document images. In Document Analysis and Recognition, 2003. Proceedings. Seventh International Conference on (pp. 398-402). IEEE, 2003.
23. PDFBox, http://pdfbox.apache.org/, Accessed on August 10, 2018.
24. https://www.snowtide.com/, Accessed on August 10, 2018.
25. http://www.xpdfreader.com/about.html, Accessed on August 10, 2018.
26. https://www.pdflib.com/products/tet/, Accessed on August 10, 2018.
27. Bhatia, S., Mitra, P. and Giles, C.L., Finding algorithms in scientific articles. In Proceedings of the 19th International Conference on World Wide Web (pp. 1061-1062). ACM, 2010.
28. Das, A. and Bandyopadhyay, S., Phrase-level polarity identification for Bangla. Int. J. Comput. Linguist. Appl. (IJCLA), 1(1-2), pp.169-182, 2010.
29. Das, A. and Bandyopadhyay, S., SentiWordNet for Bangla. Knowledge Sharing Event-4: Task, 2, pp.1-8, 2010.
30. Bhattacharya, U., Parui, S.K. and Mondal, S., Devanagari and Bangla text extraction from natural scene images. In Document Analysis and Recognition, 2009. ICDAR'09. 10th International Conference on (pp. 171-175). IEEE, 2009.
31. Hassan, A., Amin, M.R., Mohammed, N. and Azad, A.K.A., Sentiment analysis on Bangla and Romanized Bangla text (BRBT) using deep recurrent models. arXiv preprint arXiv:1610.00369, 2016.
32. Ramanathan, A. and Rao, D.D., A lightweight stemmer for Hindi. In Proceedings of EACL, 2003.
33. Rose, S., Engel, D., Cramer, N. and Cowley, W., Automatic keyword extraction from individual documents. Text Mining: Applications and Theory, pp.1-20, 2010.
34. Engel, D.W., Whitney, P.D., Calapristi, A.J. and Brockman, F.J., Mining for emerging technologies within text streams and documents (No. PNNL-SA-64618). Pacific Northwest National Lab. (PNNL), 2009.
35. Whitney, P., Engel, D. and Cramer, N., Mining for surprise events within text streams. In Proceedings of the 2009 SIAM International Conference on Data Mining (pp. 617-627). Society for Industrial and Applied Mathematics, 2009.
36. Mihalcea, R. and Tarau, P., TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004.
37. https://github.com/vgrabovets/multi_rake, Accessed on Nov 10, 2018.
38. Pay, T., Lucci, S. and Cox, J.L., An ensemble of automatic keyphrase extractors: TextRank, RAKE and TAKE.
39. Lynn, H.M., Lee, E., Choi, C. and Kim, P., SwiftRank: an unsupervised statistical approach of keyword and salient sentence extraction for individual documents. Procedia Computer Science, 113, pp.472-477, 2017.
40. Cleveland, H., Information as a resource. Futurist, 16(6), pp.34-39, 1982.
41. Porter, M.F., An algorithm for suffix stripping. Program, 14(3), pp.130-137, 1980.
42. Robertson, S.E., Walker, S., Jones, S., Hancock-Beaulieu, M.M. and Gatford, M., Okapi at TREC-3. NIST Special Publication Sp, 109, p.109, 1995.
43. Schütze, H., Manning, C.D. and Raghavan, P., Introduction to Information Retrieval (Vol. 39). Cambridge University Press, 2008.
44. Kupiec, J., Pedersen, J. and Chen, F., A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 68-73). ACM, 1995.
45. Teufel, S., Sentence extraction as a classification task. Intelligent Scalable Text Summarization, 1997.
46. https://medium.com/@starang/precision-and-recall-a-brief-intro-38589a21a09, Accessed on August 10, 2018.
47. https://en.wikipedia.org/wiki/Confusion_matrix, Accessed on August 10, 2018.
48. https://towardsdatascience.com/accuracy-precision-recall-or-f1-331fb37c5cb9, Accessed on August 10, 2018.
49. https://blog.exsilio.com/all/accuracy-precision-recall-f1-score-interpretation-of-performance-measures/, Accessed on August 10, 2018.
50. Skabar, A. and Abdalgader, K., Clustering sentence-level text using a novel fuzzy relational clustering algorithm. IEEE Transactions on Knowledge and Data Engineering, 25(1), pp.62-75, 2013.
51. Mihalcea, R., Graph-based ranking algorithms for sentence extraction, applied to text summarization. In Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions (p. 20). Association for Computational Linguistics, 2004.
52. Brin, S. and Page, L., The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7), pp.107-117, 1998.
53. Erkan, G. and Radev, D.R., LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22, pp.457-479, 2004.
54. Gope, M. and Hashem, M.M.A., Knowledge extraction from Bangla documents: A case study. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP) (pp. 1-6). IEEE, 2018.
55. Hassan, T., Object-level document analysis of PDF files. In Proceedings of the 9th ACM Symposium on Document Engineering (pp. 47-55). ACM, 2009.
56. Shams, R., Hashem, M.M.A., Hossain, A., Akter, S.R. and Gope, M., Corpus-based web document summarization using statistical and linguistic approach. In Computer and Communication Engineering (ICCCE), 2010 International Conference on (pp. 1-6). IEEE, 2010.
57. http://indradhanush.unigoa.ac.in/public/webcontent/webcontent.php?id=37, Accessed on Nov 10, 2018.
Symptom-Based Disease Detection System In Bengali Using Convolution Neural Network
Conference Paper, June 2019. DOI: 10.1109/ICSCC.2019.8843664
2019 7th International Conference on Smart Computing & Communications (ICSCC). 978-1-7281-1557-3/19/$31.00 ©2019 IEEE

Abstract— Natural language processing (NLP) and automatic disease detection have become popular in the recent era, and several research works present disease detection systems in various languages. We present a sentence-level disease detection system for clinical text in the Bengali language, which contains a numerous set of diacritic characters.
The clinical dataset consists of Bengali text, generally user-interpreted symptoms for the most common diseases. Our approach also presents an NLP methodology for Bengali language processing and for disease classification using several types of neural networks with hyper-parameter tuning and word vectorization. The aim of the research is initial detection of disease from the user's voice-to-text data, in our case Bengali; a speech recognition system developed for the Bengali language is used to feed the disease detection model, and the output is finalized with the model-detected disease.

Keywords—Neural network, Language model, Disease detection, Text classification.

I. INTRODUCTION

Medical diagnosis and detection of disease were first recorded in ancient Egypt (2630-2611 BC) [1]. From that origin to the modern age, numerous additions have changed disease detection systems, and humans' expression and sense of disease diagnosis have improved across generations. Whenever we visit a doctor with a disease-related issue, we initially describe our problems (the initial symptoms, as interpreted by us, the patient) in our natural language. Doctors and health professionals spend time detecting the critical issues (the symptoms of the disease) and conclude by providing medication or a solution to minimize the disease. In a developing country like Bangladesh, where the literacy rate is considerably low [2], people tend to visit non-professional medical specialists. For example, a patient with heart-related issues should visit a cardiologist, but may instead visit an allergist/immunologist or another specialist. So, if we provide a system by which people can learn what their disease is from their own interpretation of the symptoms, both time and money can be saved. Hence our idea: using machine learning to build a disease detection model that detects disease from symptoms described in Bengali.
Several disease detection research efforts also exist in other languages such as English and Spanish, as these languages are highly developed in terms of speech recognition and natural language processing. Bengali is one of the most popular languages (ranked 7th worldwide), as one-sixth of the world's population speaks Bengali [3][4]. So the primary focus is to work on the Bengali language to develop human-computer interaction. This research shows the core and basics of applying NLP to Bengali by classifying disease from plain text using the concept of neural networks, which are widely used for classification problems and NLP and are achieving excellent results on tasks such as sentence modeling. The first model is an artificial neural network (ANN), in our case a basic neural network that classifies disease from symptom data using bag-of-words (BOW) word embedding; the colossal matrix at the input layer makes it a comparatively slower model. The other model is based on a convolutional neural network (CNN), following the basics of sentence classification as implemented for English [5]. The CNN model consists of one convolution layer on top of word embeddings (Word2Vec: continuous bag-of-words (CBOW), skip-gram, and fastText) trained on the Bengali disease-symptom dataset. The validity of the dataset is ensured by crowdsourcing on our Bengali data. The accuracy of disease detection is shown for two cases: one from plain user text and the other from voice data converted to text. Test data given as text input has better overall accuracy than test data from the Bengali speech recognition system proposed by Jahirul, Masiath & Rakibul [6]. The contributions of the research work are as follows:
- The outcome of the research work is made open source, so that doctors and health professionals can contribute to the system by enriching it with information, serving the core purpose of helping people.
- It can be implemented in other systems.
- A revised concept for Bengali stemming and also sentence level classification for Bengali. The paper consisting of sections as follow – Section II demonstrates some related work on NLP in Bengali language and some disease detection system. Section III containing information about the data used for the system. Description of the data processing and Bengali sentence architecture is also described in Section III, to gain a proper understanding of the language. Later, ANN and CNN both models are represented in Section IV. Experimental set up for word embedding was described in section V. Section VI contains the procedure for testing and evaluating the proposed model. Finally, Section VII includes the research summary. II. RELATED WORK There is a numerous amount of research on sentence level classification of tasks in several languages. But in case of conducting classification on health-related data is contemplated as a particular case. Because of neumerous diacritic character, the structure of Bengali being more complicated than another well-developed language like English, each specific task behind the classification method is clarified. Because in the case of research on NLP, the Bengali language</s>
|
<s>is still under development. Also, for classification on Enam Biswas Department of Computer Science and Engineering East West University, Dhaka, Bangladesh Email: mrnorman.enam@gmail.com Amit Kumar Das Department of Computer Science and Engineering East West University, Dhaka, Bangladesh Email: amit.csedu@gmail.com 2019 7th International Conference on Smart Computing & Communications (ICSCC)978-1-7281-1557-3/19/$31.00 ©2019 IEEErecord events of patients and long stablishing diseases (example – diabetes) Support Vector Machines (SVM) and Latent Dirichlet Allocation (LDA) serves a great purpose [7][8]. For text classification research, making a group of similar words representing semantically by the projection of words in vector space [9]. To learn Word2vec representation, Mikolov et al. introduce a great approach [10], which was also used on medical text research in the English language [11]. Well known classification features like BOW, n-grams and their TF-IDF has been used with ConvNets by Zhang & LeCun in English [12]. To gain a fixed size representation of Bengali sentence, some of the embeddings is implemented. Our work is the first such kind of research which is performed on Bengali and also in general human interpreted symptom of the disease. III. DATASET & DATA PROCESSING The dataset that has been used to classify disease from general symptom was scrapped from medical websites and data stores. Our main aim of the research is to allow user input their symptom in natural language that they express while describing to a doctor about their problems. Such example is shown in Table I. We do not expect the user to know the medical term of their disease symptom and most of the cases the medical condition don’t have any meaning in Bengali as we tend to memorize them in English and thus globally. TABLE I. SYMPTOM DEMONSTRATION A symptom of a max Acute Sinusitis patient English Bengali There is a massive amount of pain behind my eyes. 
The pain also seems to be on my forehead. Also, I am suffering from fever, cold for several days. The mucus from the nose is yellow and sometimes deep red. There is a drainage of mucus from throat. চ োখের [chokher] চেছখে [pechone] প্র ণ্ড [prochondo] ব্যথো [betha]। কেোখেও [kopaleo] ব্যথো [betha] মখে [mone] হয় [hoy]। আর [ar] হখে [hocche] আমম [ami] অখেক [onek] মিে [din] ধখর [dhore] জ্বর [jor] আর [ar] ঠোণ্ডোখে [thandate] ভুগমছ [vugchi]। েোখকর [naker] সমিি [shordi] হেুি [holud] এব্ং [ebong] চকোে [kono] সময় [somoy] গোড় [garo] েোে [lal]। গেো [gola] মিখয়ও [diyeo] কফ [kof] চব্র [ber] হয় [hoy]। A. Dataset The kind of information we looked for are sentences with symptoms in the form of conversational language. For such kind of text, websites like MedicineNet1, Mayo Clinic2, WebMD3, CDC4, and Healthline5 consists of various data. First, to collect data, we have chosen a set of most common diseases and their symptoms. Few data example of symptom for one disease is demonstrated in Table II. In our dataset overall, 59 diseases are included consisting of symptoms over 1500. B. Test Data To validate our model and gain confirmation of detection of disease,</s>
we did not use the traditional approach of splitting off a portion of the training data. We collected the test data in voice and text form from patients of Dr. Md. Shafiqur Rahman6. Before the collection of data, patients were notified about the data collection and the reason behind it. Personal information such as name, age, etc. was eliminated, and proper participant consent was obtained for using the data. The data collected from patients demonstrating their disease symptoms to the doctor is in both text and audio-clip format.

C. Data Processing
Initially, the data was collected in the English language. Later it was converted to Bengali using the Google Translate API. However, the API output for many entries was not correct. We noticed that the Google Translate API tends to perform well when translating one word at a time, but its accuracy went down when given a whole sentence to translate into Bengali: most of the time the translated Bengali sentence does not make sense, even when each individual word of the sentence is translated correctly. So all the translated data was later revised and processed manually. A demonstration of such an example is provided in Table III.

TABLE II. EXAMPLE OF AN INCORRECTLY TRANSLATED SYMPTOM

This kind of translation almost changed the context of the sentence. During revision, corrected spellings of words as well as several meanings and synonyms were also added to the dataset.

D. Stemming
The processing of the data, including the test data, was looked at carefully before training. To allow our machine to learn effectively, we cleaned the data down to core words; otherwise, word prefixes and suffixes might degrade performance. This is where stemming comes in. Several stemming algorithms have been developed for numerous languages, but hardly any Bengali stemmer can be found that is open sourced or fully developed.
Words in the Bengali language generally show two types of inflection: verbal and nominal. In some cases, pronominal and adjectival inflection is observed, but these hardly occur when describing thoughts [13].

a) Verb and Noun Inflection: In the Bengali language, verb inflection only occurs as a suffix, i.e., verb formation = verb-root + verb-suffix (Table IV). To detect the root of a verb, we have to understand the types of verbs. Bengali verbs are of two types: finite and non-finite. In the case of finite verbs, inflection occurs because of changes in tense, variation of person, and relation or honor (intimate, familiar, formal) [13]. Verb roots can be categorized into three groups: roots with only one complex Bengali character, roots with two complex Bengali characters, and roots with three complex Bengali characters (consonants + vowel marks; Table III).

English text | Translated text (wrong) | Revised text
Tiny red dots on your skin from broken blood vessels. | ভোঙো [vanga] রক্তব্োহী [roktobahi] জোহোজ [jahaj] চথখক [theke] আেেোর [apnar] ত্বখকর [toker] কু্ষদ্র [khudro] েোে [laal] মব্ন্দ ু[bindu]। | চছড়ো [chera] রক্তেোেী [roktonali] চথখক [theke] আেেোর [apnar] োমড়োয় [chambray] চছোট [choto] েোে [laal] িোগ [dag]।

1MedicineNet is a medical website that provides detailed information about diseases, conditions, medications and general health. https://www.medicinenet.com/
2Mayo Clinic is a nonprofit academic medical center. https://www.mayoclinic.org/
3WebMD provides valuable health information, tools for managing your health, and support to those who seek information. https://www.webmd.com/
4CDC is one of the major operating components of the Department of Health and Human Services. https://www.cdc.gov/
5Healthline Media, Inc. is a privately owned provider of health information headquartered in San Francisco. https://www.healthline.com/
6Dr. Md. Shafiqur Rahman, Medical Officer at Bangladesh University of Engineering & Technology (BUET), Dhaka.
7Dr. Altaf Hossain, M.B.B.S. & a medicine doctor currently living in Dhaka, Bangladesh.

TABLE III. TYPES OF VOWEL MARK IN BENGALI
Vowel Marks: [ô] [i] [u] [ṛ/ri] [a] [ī/ee] [ū/oo] ম ু ো ী
Complex Vowel Marks: এ [e] ও [o] ঐ [oi] ঔ [ou] চ চ ো ৈ চ

TABLE IV. CATEGORY OF VERB ROOTS
Category 1: হ [ha], েো [kha], মি [di], শু [shu], etc.
Category 2: কর [kor], কহ [koh], উঠ [uth], মফরো [fira], চি ড়ো [doura], etc.
Category 3: টকো [cotka], মব্গড়ো [bigra], চছোব্েো [chobla], etc.

The table shows examples of verb-roots in each category. In Bengali, noun inflections occur due to nominative, objective, genitive and locative cases, which may vary from singular to plural [13]. Noun inflection (Table V) only happens at the end of the core noun.

TABLE V. LIST OF NOUN SUFFIXES
Singular: রো [ra], টো [ta], টট [ti], েোেো [khana], েোমে [khani], etc.
Plural: এরো [era], গুমে [guli], গুখেো [gulo], চির [der], etc.
b) Stemming procedure for Verb & Noun Inflection: In the case of verbal stemming, the steps of stemming must be followed serially. Two types of inflection are generally seen: independent inflection (suffixes of minimum length one, e.g., ই [i], ছ [ch], ক [k/ka], etc., or of length two, e.g., েো [la], চেো [lo], টো [ta], etc.) and combined inflection (a combination of two or more independent inflections). From a comparative study and a few fixes to the rules of [13] and related articles on word formation in Bengali, we have proposed a basic algorithm (providing some fixes to [13]). Figure 1 demonstrates some steps of stemming by the proposed set of rules. In the case of verb stemming, the set of rules should be followed strictly. Observation shows that noun inflection generally happens independently [13]. As noun inflections occur in limited numbers, the set of rules is also simple, and the inflections are easy to eliminate. Figure 2 contains some examples and the process flow for Bengali noun stemming. In the case of noun stemming, however, it is not mandatory to follow all the rules one after another. This serves the purpose of our symptom-based disease dataset.
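The suffix-stripping procedure above can be sketched in a few lines of Python. This is a minimal sketch, not the paper's full rule set from [13]: the suffix lists here are illustrative Latin transliterations of a few of the noun and verb suffixes named in Tables IV and V, and the real system operates on Bengali script with serially applied verb rules. Following the observation that noun inflection happens independently, the sketch tries noun suffixes first and only falls back to verb suffixes if no noun suffix matched.

```python
# Illustrative transliterated suffix lists (assumptions, not the full rules of [13]).
NOUN_SUFFIXES = ["gulo", "guli", "era", "der", "khana", "khani", "ra", "ta", "ti"]
VERB_SUFFIXES = ["chi", "lo", "la", "ta", "i", "ch"]

def strip_suffixes(word, suffixes):
    """Remove the longest matching suffix, if any, leaving a non-empty root."""
    for suf in sorted(suffixes, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf):
            return word[: -len(suf)]
    return word

def stem(word):
    # Noun inflection occurs independently [13], so if a noun suffix
    # was stripped we stop; otherwise the verb rules are applied.
    root = strip_suffixes(word, NOUN_SUFFIXES)
    if root != word:
        return root
    return strip_suffixes(word, VERB_SUFFIXES)
```

For example, `stem("boigulo")` strips the plural noun suffix and `stem("korchi")` strips a verb suffix, each leaving the transliterated root.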
Also, during the data cleaning procedure, unnecessary characters were eliminated.

E. Data Validation
To ensure data quality (correctness and usefulness), the concept of crowdsourcing was used for dataset validation. Participation in this task was voluntary, and participants were well notified about the purpose of the crowdsourcing. The participants were doctors (2), graduate students (3), undergraduate students (6) and college students (3). The data, with symptoms in English and their Bengali translations, was given to all participants, who were asked to rate it. For data correctness and quality, the average rating from the group of doctors was 9.25 out of 10. To check the quality of the translation, the data was given to the groups of students; the average translation rating was 8.86 out of 10. Participants were also asked to fix any problems they observed during the validation process.

IV. MODEL DESCRIPTION
To conduct a comparative study with the processed core data (stemmed data), we experimented on the dataset for classification with and without word vectorization, comparing a naïve approach against well-known classification approaches that have performed quite well in languages like English. Here we discuss our ANN- and CNN-based models, their experimental setup, and their optimization.

Fig. 1. Rules of Verb Inflection Elimination.
Fig. 2. Rules of Noun Inflection Elimination.

A. The Architecture of the ANN System
The ANN-based model is a simple model based on a neural network. For this system, we experimented with classification of Bengali in several approaches: the basic model and the word-vectored model (BOW).
Structure of the system:
1) Processing: Using the unique words in the corpus, each training sentence is reduced to a binary array. This concept is similar to the basics of bag-of-words.
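The binary bag-of-words encoding used in the processing step can be sketched as follows. The toy corpus below is an invented example, not the paper's dataset; each sentence becomes a 0/1 array over the corpus vocabulary.

```python
# Sketch of reducing a sentence to a binary array over the unique
# words of the corpus, as described for the ANN processing step.

def build_vocabulary(sentences):
    """Collect the unique words of the corpus in a stable (sorted) order."""
    return sorted({word for s in sentences for word in s.split()})

def to_binary_vector(sentence, vocab):
    """1 if the vocabulary word appears in the sentence, else 0."""
    words = set(sentence.split())
    return [1 if w in words else 0 for w in vocab]

corpus = ["pain behind eyes", "fever and cold", "pain in forehead"]
vocab = build_vocabulary(corpus)
vector = to_binary_vector("fever and pain", vocab)
```

Note that the vector length equals the vocabulary size, which illustrates the drawback discussed next: the array grows with the amount of data.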
Some problems occurred: the binary array gets significantly bigger with a greater amount of data, and the computation gets slower. Word vectorization reduced this problem.
2) Model: The core of our ANN is a 3-layer neural network: an input layer, one hidden layer, and an output layer. To normalize values, and using the derivative to measure the error rate, we experimented with several activation functions (sigmoid, tanh and ReLU). Tanh performed faster and more accurately in our case.
3) Parameters and Training: After some experiments with the learning rate, we set it to 0.001 with a dropout of 0.5, meaning randomly selected nodes drop 50% of their weights at each weight-update cycle. We conducted 50000 epochs of training.

B. The Architecture of the CNN System
This system uses a simple convolutional neural network with a single-channel architecture, based on the study and experiments of [5]. Word vectors that we had already trained on the dataset are used with one layer of convolution. The word vectors are kept static
during the process.
1) Formulation of CNN: For the i-th word in a sentence, let v_i ∈ R^k be the k-dimensional word vector. A sentence of length n is represented as

v_{1:n} = v_1 ⊕ v_2 ⊕ … ⊕ v_n,

where v_{i:j} refers to the concatenation (⊕) of words v_i, v_{i+1}, …, v_j. A filter w ∈ R^{hk} is applied to a window of h words in order to produce a new feature by the convolution operation. From a window of words v_{i:i+h-1}, a feature m_i is generated as

m_i = f(w · v_{i:i+h-1} + b),

where f is a non-linear function and b ∈ R is a bias term. A feature map is produced by applying this filter to each possible window of words. If we denote the feature map by M, where M ∈ R^{n-h+1}, then

M = [m_1, m_2, …, m_{n-h+1}].

From each feature map, the maximum value corresponding to the particular filter is taken by applying the max-over-time pooling operation (Collobert et al., 2011). This process extracts one feature from one filter; the model obtains multiple features by varying the window size across multiple filters. Lastly, the probability for classification comes from a fully connected softmax layer computed over these features.
2) Parameters and Training: For this model we use convolution filter sizes of 3, 4 and 5 [5], a dropout rate of 0.25 and a batch size of 50. The trained model was saved first, and accuracy measurement was conducted separately for each specific input; results are provided in Section V.

V. WORD EMBEDDING SYSTEM AND FINDINGS
For dimensionality reduction and contextual similarity, word embedding is used to represent words as corresponding vectors of real numbers. The BOW word-vector concept is used with the ANN model; because of its performance issues and drawbacks, other techniques are used with the CNN model. It is popular to initialize word vectors with an unsupervised neural language model (word2vec, fastText, etc.). We implemented the word2vec model with both the CBOW and skip-gram variants for Bengali.
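The convolution and max-over-time pooling operations formulated for the CNN above can be sketched in pure Python. This is a minimal sketch under toy assumptions: tanh stands in for the non-linearity f, and the word vectors and filter weights are made-up numbers rather than learned embeddings.

```python
import math

def conv_feature_map(sentence_vectors, filt, bias, h):
    """m_i = f(w . v_{i:i+h-1} + b) for every window of h words."""
    feats = []
    for i in range(len(sentence_vectors) - h + 1):
        # Concatenate the h word vectors of the window into one flat vector.
        window = [x for v in sentence_vectors[i:i + h] for x in v]
        feats.append(math.tanh(sum(w * x for w, x in zip(filt, window)) + bias))
    return feats

def max_over_time(feature_map):
    """Max-over-time pooling: keep one feature per filter."""
    return max(feature_map)

# Toy sentence of n=4 words with k=2 dimensions; one filter with h=2 (hk=4 weights).
sent = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.5], [-0.2, 0.4]]
fmap = conv_feature_map(sent, [0.5, -0.5, 0.25, 1.0], 0.0, h=2)
feature = max_over_time(fmap)
```

The feature map has n - h + 1 entries, matching M ∈ R^{n-h+1} above, and pooling reduces it to a single value per filter.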
Also, fastText has been implemented for comparison. Our dataset contains 6680 (stemmed) words in total and 1441 unique stemmed words. For our small dataset, we tried to keep the dimensionality of the embedding vectors low, 20 for both systems, with a context window of 5 words. The results we obtained are somewhat unrelated in some cases. Also, for our small dataset, the cosine similarity between neighboring words seemed high. A few terms appear very frequently in the dataset, for example "pain", "skin", "nose" and "fever" (Bengali form: "ব্যথো" [betha], " োমড়ো" [chamra], "েোক" [nak] and "জ্বর" [jor]). The most similar words related to the key term "pain", by cosine similarity, are shown in Table VI.

TABLE VI. MOST SIMILAR WORDS
Model | Context Words (Top 4, left to right), Bengali (stemmed)
word2vec CBOW | কর [kor] মধয [moddho] য [ja] সঙ্গ [shongo]
word2vec skip-gram | কর [kor] মধয [moddho] য
[ja] রক্তেোে [roktopat]
fastText | রক্তেোে [roktopat] স্বোভোমব্ক [shavabik] রক্ত [rokto] অস্বোভোব্ [oshavab]

For the term "pain", the detected context words seem almost similar, except য [ja], detected with the word2vec models. Overall, fastText performed better, with an average score of 0.988 (word2vec CBOW 0.708 and word2vec skip-gram 0.748) in this particular case. The scores of these test cases are demonstrated in Figure 3. Overall, word2vec with skip-gram scores better across cases and is used in the main CNN model.

Fig. 3. Word embedding scores.

VI. ANALYSIS
We used 20 test cases to analyze our systems. For analyzing voice data, we used a Bengali speech recognition system [6]. The Bengali speech recognition system is in an alpha state, with a WER (word error rate) of 0.37, and it tends to perform better without noise. For our test setup, all 20 cases in both voice and text form are used. We analyzed the ANN system with (ANN-stemmed) and without (ANN-non-stemmed) stemming of the dataset to show the importance of word stemming in Bengali and how performance degrades with inflected data (demonstrated in Table VII). While testing our model, accuracy was examined by statistical analysis of disease detection according to the doctors6,7, compared to our system, along with the efficiency of the system.

TABLE VII. SYSTEM ACCURACY
System | Text Accuracy | Speech Accuracy | Disease Detection (out of 20)
ANN-non-stemmed | 68.32% | 57.45% | 14
ANN-stemmed | 74.56% | 62.36% | 16
CNN | 81.88% | 63.77% | 17

Our observation: the CNN system worked best overall, with an accuracy of 81.88%. By implementing word stemming, the ANN-stemmed system outperformed the non-stemmed system by 6.24%. This difference might not seem like much because of the amount of data. The performance of the voice-connected system seems rather low, as the text output from the speech system is incorrect in some cases. With the amount of data used in our system, an SVM (support vector machine) might perform better.
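The cosine-similarity ranking behind Table VI can be sketched as follows. This is a minimal sketch: the 3-dimensional vectors below are made-up stand-ins for the learned 20-dimensional embeddings, and the English keys stand in for the stemmed Bengali terms.

```python
import math

def cosine(u, v):
    """cos(theta) = (u . v) / (|u| |v|)"""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(key, embeddings, topn=4):
    """Rank every other word by cosine similarity to the key word."""
    ranked = sorted(
        ((w, cosine(embeddings[key], vec))
         for w, vec in embeddings.items() if w != key),
        key=lambda pair: pair[1], reverse=True)
    return ranked[:topn]

# Toy embedding table (invented values, not trained vectors).
embeddings = {
    "pain":  [0.9, 0.1, 0.0],
    "fever": [0.8, 0.2, 0.1],
    "nose":  [0.1, 0.9, 0.3],
    "skin":  [0.0, 0.2, 0.9],
}
neighbors = most_similar("pain", embeddings)
```

With a small dataset, many of these cosine scores come out high, which mirrors the observation above about neighboring words.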
However, we chose to conclude with neural network models, considering that the amount of data will become much higher. Also, the trained vector models were only applied to the CNN model, as it theoretically performs better than the ANN [12].

VII. CONCLUSION
In this paper, we introduce an extensive study of Bengali text-level classification and present a novel approach for symptom-based disease classification. The performance of the neural network systems on top of word embedding models, even with a small amount of data, demonstrates the efficiency of the system. The multilayer convolutional network also creates more features while training, so semantic representations are learned for text comparison. We could gain a much better result with a greater amount of conversational data; with more data, the word2vec and fastText models, and thus the whole system, could also perform better. The basics can be used to gather knowledge for similar kinds of tasks in other languages. In the future, our target is to apply our technique at a larger scale to a fine-grained set of medical classifications. As in the computer vision literature, our convolutional networks will perform
better with a larger dataset. We aim to check our hypothesis with more patient-doctor conversational data.

REFERENCES
[1] Medical diagnosis, https://en.wikipedia.org/wiki/Medical_diagnosis, accessed on March 9, 2019.
[2] M. R. Ullah, M. A. R. Bhuiyan and A. K. Das, "IHEMHA: Interactive healthcare system design with emotion computing and medical history analysis," 2017 6th International Conference on Informatics, Electronics and Vision, 2017.
[3] R. A. Tuhin, B. K. Paul, F. Nawrine, M. Akter and A. K. Das, "An Automated System of Sentiment Analysis from Bangla Text using Supervised Learning Techniques," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019.
[4] A. K. Das, T. Adhikary, M. A. Razzaque, M. Alrubaian, M. M. Hassan, Z. Uddin, and B. Song, "Big media healthcare data processing in cloud: a collaborative resource management perspective," Cluster Computing, June 2017.
[5] Y. Kim, "Convolutional Neural Networks for Sentence Classification," 2014 Conference on EMNLP, October 2014.
[6] J. Islam, M. Mubassira, M. R. Islam and A. K. Das, "A Speech Recognition System for Bengali Language using Recurrent Neural Network," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019.
[7] B. J. Marafino, J. M. Davies, N. S. Bardach, M. L. Dean and R. A. Dudley, "N-gram support vector machines for scalable procedure and diagnosis classification, with applications to clinical free text data from the intensive care unit," JAMIA, 2014.
[8] L. Wang, F. Chu and W. Xie, "Accurate Cancer Classification Using Expressions of Very Few Genes," IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2007.
[9] G. S. Luis and G. H. J. Manuel, "CNN text classification model using Word2Vec," June 2017.
[10] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado and J. Dean, "Distributed Representations of Words and Phrases and their Compositionality," NIPS, 2013.
[11] M. Hughes, I. Li, S. Kotoulas and T. Suzumura, "Medical Text Classification using Convolutional Neural Networks," Stud Health Technol Inform, 2017.
[12] A. Conneau, H. Schwenk, L. Barrault and Y. Lecun, "Very Deep Convolutional Networks for Text Classification," EACL, 2017.
[13] M. R. Mahmud, M. Afrin, M. A. Razzaque, E. Miller and J. Iwashige, "A rule based Bengali stemmer," 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), New Delhi, 2014.
Detecting Abusive Comments in Discussion Threads Using Naïve Bayes

Md. Abdul Awal, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203, Bangladesh. awal.kuet@yahoo.com
Md. Shamimur Rahman, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203, Bangladesh. shamimur052@gmail.com
Jakaria Rabbi, Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203, Bangladesh. jakaria.rabbi@yahoo.com

Abstract—Comments are supported by various websites and provide a simple approach to increasing user involvement. Users can generally comment on different types of media such as social networks, blogs, forums and news articles. As discussions increasingly move toward online forums, the issue of insulting and abusive comments is becoming prevalent. In addition, a lot of comments are available due to these social media. Hence, it is not feasible for a human moderator to check each comment one by one and flag it as abusive or not abusive. For this reason, an automated classifier that is quick and efficient is necessary to detect such comments. To fulfill this purpose, in this paper a Naïve Bayes classifier is designed to detect abusive comments expressed in Bangla. Using a training corpus collected from "Youtube.com", the Naïve Bayes classifier is employed to categorize comments as abusive or not abusive. Finally, the performance is evaluated by using 10-fold cross-validation on unprocessed data.

Keywords—Abusive comments, Naïve Bayes, machine learning, text classification, 10-fold cross-validation.

I. INTRODUCTION
Human interaction with social networks, blogs, forums and online news portals has increased drastically in the past couple of years.
Social networks, blogs, forums and online news portals unite users to form strong associations, generally based on communication via messages, chats and comments. Comments enable a casual and interactive way of providing a personal point of view. Generally, commenters are free to express their sentiments, share their responses, and offer their knowledge. Readers obtain additional facts beyond the article from comments, and usually they also react to comments by replying. Users generally use "thumbs up" or "thumbs down" signs to respond briefly to a comment [15]. In addition, detailed responses are also feasible, prompting "comment threads". Consequently, comments give a feeling of group interest with a low barrier to entry. Comments can appear as any composed content, whether in English, Bangla or something else. Most of the time, the commenting framework is an essential part of building a community on a website. This framework normally allows anonymous posting, which gives users the chance to abuse the commenters or posters on the framework. So, just like any other community feature, comments are vulnerable to abuse. That is why identification and blocking of abusive comments are indispensable for the transparency of comments. The consequences of abusive comments are multifarious [15]:
• Since readers need to filter through comment spam to get to good comments, they can lose their enthusiasm for a website.
• Commenters are normally discouraged from commenting in an environment that may
be full of spam, where their comments are likely to be drowned in an ocean of spam.
• The owners or proprietors of sites may observe less user involvement and gradually poorer-quality traffic.

Abuse or misconduct on a commenting framework varies from spam to comments that are simply inappropriate. Users often find such content highly invective. As a result, websites can receive negative feedback from users and also lose traffic. So moderators have a critical undertaking in securing the fairness of a website [21]. They impose particular rules and regulations about what types of comments are allowed to be posted. Suppose an abusive comment attacks a user using pejorative terms; then it is the responsibility of a moderator to decide whether this comment should be allowed for posting. Generally, a human being plays the role of the moderator and has to read each comment to categorize it as abusive or not. However, manually reviewing and flagging offensive comments is a tiresome and time-killing task and hence not feasible, reliable or usable in a practical sense. To identify and block abusive and offensive contents of a website, some automated software such as "Appen" and "Internet Security Suite" has been used [2]. These software packages simply stop webpages containing scurrilous content from loading into a web browser. This method both interrupts the readability and usability of the website and fails to identify subtly insulting content. The purpose of this research is to detect abusive comments expressed in Bangla. First, a dataset of English comments is collected from "Youtube.com" [4]. Then an annotated Bangla dataset is generated from this collected dataset, and a Naïve Bayes classifier is trained on it. Finally, the 10-fold cross-validation technique is applied to measure the accuracy of the classifier.
The organization of the paper is as follows: previous works are presented in Section II, and the research methodology is described in Section III. Section IV shows the results of the experimental analysis. Finally, the conclusion is presented in Section V.

978-1-5386-8524-2/18/$31.00 ©2018 IEEE. 2018 2nd Int. Conf. on Innovations in Science, Engineering and Technology (ICISET), 27-28 October 2018, Chittagong, Bangladesh.

II. EXISTING WORKS
The task of identifying textual annoyance or abusive comments in text has been framed by researchers as a classification task. Abusive comment classification research with machine learning began with Yin et al.'s paper [5]. The authors proposed a supervised machine learning technique to detect textual harassment, in which texts are represented by word frequency features, sentiment features, and features that take into account the similarity to neighboring posts. One of the first works to address abusive language was [6], which used a supervised classification technique in conjunction with N-grams, manually developed regular expression patterns, and contextual features that take into account the abusiveness of previous sentences. Dinakar et al. [7] collected a dataset of comments from YouTube videos on different topics and applied binary and multiclass classifiers. The experimental results indicate that topic-sensitive binary classifiers outperform generic multiclass classifiers. Dadvar et al. [8] applied
a rule-based expert system, a supervised machine learning model, and a hybrid approach to automatically detect cyberbullying. The authors showed that the expert system performs better than the machine learning and hybrid models. Nahar et al. [9] proposed a semi-supervised learning method that enlarges the training data sample and uses a fuzzy SVM algorithm. The improved training method automatically extracts and augments the training set from unlabeled streaming text, while learning is conducted using a small training set provided as initial input. The experimental results show that the proposed improved technique performs better than the other techniques and is applicable in practical scenarios where a sufficiently labelled dataset is unavailable for training. In [10], the authors developed and applied a new scheme to annotate cyberbullying, which indicates the presence and severity of cyberbullying, a post author's role (harasser, victim or bystander), and a number of fine-grained categories related to textual harassment such as insults and threats. The experimental results showed the feasibility of fine-grained cyberbullying detection. Reynolds et al. [11] collected data from the social networking site "Formspring". They applied machine learning algorithms to this dataset, with Amazon's Mechanical Turk web service used to label the collected data. In identifying true positives, the accuracy of their technique was 78.5%. Altaf Mahmud et al. [12] created a set of semantic rules to distinguish factual and insulting comments by parsing comments, but they did not consider the direct involvement of participants and nonparticipants. Razavi et al. [13] proposed an automated abusive content detection approach that extracts features at various conceptual levels using a bag-of-words model and applies multilevel classification to detect flames in text. Most recently, Nobata et al. collected a corpus of Yahoo! Finance and News comments to detect abusive language [14]. The authors extracted character N-gram, linguistic, syntactic, and distributional semantic features from this dataset to train the proposed model. Kant et al. [15] developed an abusive content detection framework based on frequent subsequence mining. The authors enhanced the PRISM algorithm to obtain a new algorithm named mcPRISM that can mine frequent sequences from abusive contents with the expected level of accuracy. Chavan et al. [16] included pronouns, skip-grams, TF-IDF, and N-grams as extra features to improve the accuracy of their proposed model. A Lexical Syntactic Feature (LSF) based approach was proposed by Chen et al. [2] to detect abusive contents and identify probable offensive users in social media. The authors included a user's writing style, structure and specific textual harassing content as features to predict the user's likelihood of sending out abusive content. For offensive sentence detection, precision and recall were 98.24% and 94.34% respectively, and for offensive user detection, precision and recall were 77.9% and 77.8% respectively. Xiang et al. [22] applied machine learning and topic modeling approaches to identify profanity-related abusive content on Twitter. They achieved a true positive rate of approximately 75%, outperforming keyword-based techniques. In [1], the authors proposed an approach to extract opinions from text expressed both in English and Bangla, using a Naïve Bayes classifier to extract the opinion. Three levels, such as weak,
steady and strong, were used for the task of opinion mining. Most of the research works mentioned above deal only with the detection of abusive comments expressed in English, while the purpose of this research is to detect abusive comments expressed in Bangla.

III. METHODOLOGY
Offensive language detection in social networks, blogs, forums and news articles is a very complicated job. The textual content in these circumstances is informal, unstructured and often incorrectly spelled [2]. While the protective techniques adopted by various websites are inadequate, researchers have studied efficient ways to detect invective content using text mining techniques.

Fig. 1. The flowchart for detecting abusive comments.

To serve this purpose, a Naïve Bayes classifier is used in this research to detect abusive comments expressed in Bangla. The methodology for breaking down information from data involves three major steps: 1) data acquisition and preprocessing, 2) feature extraction, and 3) model selection. A significant difficulty of using a text mining approach to distinguish hostile content lies in the feature selection stage. All the steps of the methodology are depicted in Fig. 1.

A. Data acquisition and preprocessing:
Abusive content classification is still a comparatively recent research topic in NLP, and there are legal and privacy issues with making such data public. That is why few datasets have been curated specifically for this problem. In this study, a comment is considered abusive if either its primary motive is insult or it contains invective or offensive words, phrases or language. As no Bangla dataset for abusive comment detection is available for research, the Bangla dataset is generated from an English dataset in two different ways: 1) direct translation to Bangla and 2) dictionary-based translation to Bangla [1], [22]. The translation is done by "Google Translator". For this purpose, the English dataset is collected from "Youtube.com" [7], [21].
The dataset contains 2665 instances, i.e., English comments. Among them, 1451 were labeled as not abusive, or positive comments; the remaining 1214 comments were marked as abusive, or spam. In order to train the Naïve Bayes classifier, the dataset must be converted into feature vectors. Hence, various natural language processing techniques such as normalization and stemming are applied to remove unwanted strings like URLs, IP addresses, or other special character sequences [7], [16], [20]. After preprocessing, the actual Bangla dataset is generated from this preprocessed English dataset. Table I shows some samples of Bangla comments.

TABLE I. TRANSLATION FROM ENGLISH TO BANGLA
English | Translated Bangla
sorry but you are just a bitch lady... | দঃুিখত িকn আপিন ধু eকিট দু িরtা মিহলা...
your head is full of shit | আপনার মাথায় েগাবর ভরা
you're a tail-less monkey | তুi eকটা েলজকাটা বানর
you are a pig | আপিন eকটা য়ার

B. Feature extraction:
To train the Naïve Bayes classifier, comments must be transformed into feature vectors. To extract features from text, it is necessary to partition the text into chunks, which is called tokenization. The features consist of the tokens collected from the text after tokenization. Then each piece of text is reduced to a vector of tokens, where 1 denotes the presence of that token and 0 denotes
the absence of that token in a document. The preprocessed dataset is then turned into a bag-of-words (BOW) vector that counts the occurrences of a particular word in a particular comment.

C. Model selection:
A classifier can be utilized to moderate a website and operates more quickly than having human moderators or users flag comments. The task related to the dataset applied in this paper is binary classification. For this reason, the Naïve Bayes approach is chosen for the classification of abusive comments. The Naïve Bayes approach is easy to implement and computationally efficient. Naïve Bayes is a subset of Bayesian decision theory. Bayes' theorem enables us to compute the likelihood of an occurrence that leads to a result. Bayes' theorem states that:

P(A|B) = P(B|A) × P(A) / P(B)    (1)

Suppose C is one of i classes and D is a document to classify. The probability of a class C given a document D can be computed by Bayes' theorem as follows:

P(C|D) = P(D|C) × P(C) / P(D)    (2)

The probability for each class in C is calculated, and the class with the highest probability is taken as the classification. The prior probability P(C) is simply the probability of the class C. If N_C is the total number of documents in class C and N is the total number of documents in the dataset, then

P(C) = N_C / N    (3)

The ratio of the number of documents in class C to the aggregate number of documents of all classes denotes the probability that a document is in class C.
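The prior and conditional probabilities above can be sketched as a minimal Naïve Bayes trainer and classifier in pure Python. The toy comments below are invented examples, not the paper's YouTube corpus, and add-one (Laplace) smoothing is an assumption added here so that unseen tokens do not zero out the product of probabilities.

```python
from collections import Counter, defaultdict
import math

def train(docs):
    """docs: list of (token_list, label). Returns priors P(C) and P(token|C)."""
    priors = Counter(label for _, label in docs)
    token_counts = defaultdict(Counter)
    for tokens, label in docs:
        token_counts[label].update(set(tokens))  # binary: presence per document
    vocab = {t for tokens, _ in docs for t in tokens}
    cond = {}
    for label, counts in token_counts.items():
        total = sum(counts.values())
        # Add-one smoothing (an assumption beyond the paper's pseudocode).
        cond[label] = {t: (counts[t] + 1) / (total + len(vocab)) for t in vocab}
    return {l: c / len(docs) for l, c in priors.items()}, cond

def classify(tokens, priors, cond):
    """Sum of log probabilities replaces the product (avoids underflow)."""
    scores = {}
    for label in priors:
        scores[label] = math.log(priors[label]) + sum(
            math.log(cond[label][t]) for t in tokens if t in cond[label])
    return max(scores, key=scores.get)

docs = [(["you", "pig"], "abusive"), (["nice", "video"], "not_abusive"),
        (["stupid", "pig"], "abusive"), (["good", "song"], "not_abusive")]
priors, cond = train(docs)
label = classify(["pig"], priors, cond)
```

Working in log space here anticipates the underflow problem discussed in the experimental analysis.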
If D is expanded into individual features, then the probability P(D|C) can be calculated as follows [26]:

P(D|C) = P(F1, F2, F3, ..., Fn | C)    (4)

The assumption is that all the words are independently likely, which is termed conditional independence, so the probability is calculated as [26]:

P(D|C) = P(F1|C) × P(F2|C) × P(F3|C) × ... × P(Fn|C)    (5)

To calculate the conditional probability for each class, the following pseudo code is used [26]:

    calculate the number of comments in every class
    for each training comment:
        for every class:
            if token_value = 1:
                token_count = token_count + 1
                total_token_count = total_token_count + 1
    for every class:
        for every token:
            conditional_probability = token_count / total_token_count
    return conditional_probability

IV. EXPERIMENTAL ANALYSIS

All the outcomes from the implementation of Naïve Bayes are given in this section. A comment is classified into one of two polarities: abusive or not abusive. In Naïve Bayes, the probabilistic value is used to determine the class label. Assume that the probability of a piece of information with features (x, y) belonging to class 1 is p1(x, y) and to class 2 is p2(x, y). To classify a new piece of information with features (x, y), the following rules are used: If p1(x, y) > p2(x, y), then the class label is 1. If p2(x, y) > p1(x, y), then the class label is
2. From (5) it is noticed that a bunch of probabilities are multiplied together to find the probability that a document belongs to a particular class. Multiplying too many small numbers causes underflow or produces an incorrect result. To solve this problem, the natural logarithms of the probabilities are added together, because it is known that ln(a × b) = ln(a) + ln(b). This minimizes the effect of the underflow and round-off error problems. Fig. 2 [26] plots two functions, f(x) and ln(f(x)). From Fig. 2 it is observed that both functions increase and decrease in the same regions and have their peaks in the same places. This also indicates that the natural logarithm of a function can be utilized in place of the function to find that function's maximum.

The dataset used in this paper contains 2665 instances, or English comments. The 10-fold cross-validation technique is applied to evaluate the performance of the predictive model. The main dataset is divided into a training set to train the model and a test set to evaluate it. The dataset is arbitrarily divided into 10 equal subsets. Of the 10 subsets, a single subset is chosen to test the accuracy of the model, and the other 9 subsets are utilized as training data. The cross-validation process is repeated ten times (the folds), with each of the 10 subsets used exactly once as the validation data. Finally, a single estimate is produced from the average of the 10 results.

Fig. 2. Arbitrary functions f(x) and ln(f(x)) increasing together.

The precision-recall metric is used to measure the accuracy of the Naïve Bayes classifier. Precision (P) is defined as the ratio of the true positives (TP) to the total number of true positives plus false positives (FP). That is:

P = TP / (TP + FP)    (6)

Recall (R) is defined as the ratio of the true positives (TP) to the total number of true positives plus false negatives (FN).
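Before turning to the evaluation metrics, the token-counting pseudo code of Section III and the logarithm trick above can be sketched together in Python. This is a minimal illustration, not the authors' implementation: the helper names, the toy vectors, and the small-constant fallback for tokens unseen in a class (the paper does not specify any smoothing) are all assumptions.

```python
import math
from collections import defaultdict

def train_conditional_log_probs(vectors, labels):
    # Per the pseudo code: for each class, count tokens whose value is 1
    # and divide by that class's total token count; store ln P(token|class).
    token_count = defaultdict(lambda: defaultdict(int))
    total_token_count = defaultdict(int)
    for vec, cls in zip(vectors, labels):
        for i, value in enumerate(vec):
            if value == 1:
                token_count[cls][i] += 1
                total_token_count[cls] += 1
    return {cls: {i: math.log(c / total_token_count[cls])
                  for i, c in counts.items()}
            for cls, counts in token_count.items()}

def log_score(vector, log_prior, log_cond):
    # ln P(C) + sum of ln P(token|C): adding logarithms, per
    # ln(a*b) = ln(a) + ln(b), avoids multiplying many tiny numbers.
    # math.log(1e-9) is an assumed fallback for unseen tokens.
    return log_prior + sum(log_cond.get(i, math.log(1e-9))
                           for i, value in enumerate(vector) if value == 1)

# Toy binary BOW vectors (columns are vocabulary tokens).
vectors = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
labels = ["abusive", "abusive", "not_abusive"]
model = train_conditional_log_probs(vectors, labels)

# Why logarithms: multiplying 1000 tiny probabilities underflows to 0.0,
# while the equivalent log-domain sum stays finite.
product = 1.0
for _ in range(1000):
    product *= 1e-310
log_total = 1000 * math.log(1e-310)
```

A new comment's vector is scored once per class with log_score, and the class with the larger score is chosen, exactly as in the p1/p2 decision rule above.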
That is:

R = TP / (TP + FN)    (7)

Accuracy is measured by the percentage of comments in the test set that the classifier labels correctly. That is:

Accuracy (%) = (TP + TN) / (TP + FP + FN + TN) × 100    (8)

The F1 score is also used to measure the accuracy of a classifier. It takes both the precision P and the recall R of the test into account, combining them as their harmonic mean; a value of 1 indicates the best case and 0 the worst case. The F1 score is calculated as follows:

F1 = 2 × P × R / (P + R)    (9)

The confusion matrix of the experimental analysis is presented in Table II. The experimental results presented in Table III show that the precision and recall values for detecting abusive comments in discussion threads are 0.81 and 0.77, respectively. The table also shows that the overall accuracy and F1 score of the Naïve Bayes classifier are 80.57% and 0.39, respectively.

TABLE II. CONFUSION MATRIX OF
EXPERIMENTAL ANALYSIS.

Predicted \ Actual    Abusive                 Not Abusive
Abusive               103 (True Positives)    24 (False Positives)
Not Abusive           30 (False Negatives)    121 (True Negatives)

TABLE III. RESULTS OF EXPERIMENTAL ANALYSIS.

Precision: P = 103 / (103 + 24) = 0.81
Recall: R = 103 / (103 + 30) = 0.77
Accuracy: (103 + 121) / (103 + 121 + 30 + 24) × 100 = 80.57%
F1 Score: (0.81 × 0.77) / (0.81 + 0.77) = 0.39

TABLE IV. EXCEPTIONAL CASES OF ABUSIVE COMMENTS.

English: Those who are prostitutes are treated disdainfully in our society
Bangla: েবশয্াবৃিt যারা কের তােদরেক আমােদর সমােজ ঘৃণার েচােখ েদেখ
Actual polarity: Not Abusive. Detected polarity: Abusive.

English: When talking, it is a very bad thing to be cursed by someone as illegitimate, bastard, prostitute and a son of a bitch.
Bangla: কথা বলার সময় কাuেক জারজ, হারামজাদা, পিততা eবং ktার বাcা বেল গািল েদয়া খুব খারাপ কাজ।
Actual polarity: Not Abusive. Detected polarity: Abusive.

There are some cases in which the Naïve Bayes classifier fails to classify abusive comments correctly. Table IV shows such comments, which cannot be accurately detected by the Naïve Bayes classifier. Here "েবশয্াবৃিt", "জারজ", "হারামজাদা", "পিততা", "ktারবাcা", and "ঘৃণা" are some extensively used vulgar words in the Bangla language, but in the above comments the overall meaning, that is, the semantics of these words, is not offensive. In order to flag such comments as not abusive, the semantics of these comments must be considered.

V. CONCLUSIONS

As the volume of online user-generated content is rapidly increasing, it is essential to apply accurate and automated techniques to detect abusive content. Hence, the Naïve Bayes classifier is applied in this paper to automatically detect abusive comments in discussion threads. To achieve this goal, the dataset is collected from "Youtube.com". Some preprocessing techniques are applied to this collected dataset to clean unwanted text and prepare the dataset for training the classifier.
As depicted in the experimental analysis section, the technique has achieved excellent accuracy in detecting abusive comments in discussion threads. In the case of abusive comment detection, the technique used in this paper can reduce the editorial effort of a human moderator by an order of magnitude. A future plan is to detect abusive comments by also considering the article a comment references, any comments preceding or replied to, as well as information about the commenter's past behavior or comments.

REFERENCES
[1]. K. M. A. Hasan, M. S. Sabuj, Z. Afrin, "Opinion Mining using Naïve Bayes", In: IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), pp. 511-514, IEEE.
[2]. Y. Chen, Y. Zhou, S. Zhu, H. Xu, "Detecting offensive language in social media to protect adolescent online safety", In: Privacy, Security, Risk and Trust (PASSAT), 2012 International Conference on Social Computing (SocialCom), pp. 71-80, IEEE, 2012.
[3]. C. Nobata, J. R. Tetreault, A. Thomas, Y. Mehdad, Y. Chang, "Abusive language detection in online user content", In: Proceedings of the 25th International Conference on World Wide Web, WWW
2016, Montreal, Canada, April 11-15, 2016, pp. 145-153, 2016.
[4]. M. M. Nabi, M. T. Altaf, S. Ismail, "Detecting Sentiment from Bangla Text using Machine Learning Technique and Feature Analysis", International Journal of Computer Applications 153(11):28-34, November 2016.
[5]. D. Yin, Z. Xue, L. Hong, B. D. Davison, A. Kontostathis, L. Edwards, "Detection of harassment on web 2.0", In: Content Analysis in the WEB 2.0 (CAW2.0) Workshop at WWW, Madrid, Spain, 2009.
[6]. S. O. Sood, J. Antin, E. F. Churchill, "Using crowdsourcing to improve profanity detection", In: AAAI Spring Symposium: Wisdom of the Crowd, 2012.
[7]. K. Dinakar, R. Reichart, H. Lieberman, "Modeling the detection of textual cyberbullying", Workshop on the Social Mobile Web in 5th International AAAI Conference on Weblogs and Social Media, Spain, 2011.
[8]. M. Dadvar, D. Trieschnigg, F. D. Jong, "Experts and machines against bullies: A hybrid approach to detect cyberbullies", In: Canadian Conference on Artificial Intelligence, Springer (2014) 275-281.
[9]. V. Nahar, S. Al-Maskari, X. Li, C. Pang, "Semi-supervised learning for cyberbullying detection in social networks", In: ADC, Springer (2014) 160-171.
[10]. C. V. Hee, E. Lefever, B. Verhoeven, J. Mennes, B. Desmet, G. D. Pauw, W. Daelemans, V. Hoste, "Detection and fine-grained classification of cyberbullying events", In: Recent Advances in NLP Conference (RANLP), (2015) 672-680.
[11]. K. Reynolds, A. Kontostathis, L. Edwards, "Using Machine Learning to Detect Cyberbullying", 10th International Conference on Machine Learning and Applications and Workshops (ICMLA), 2011, vol. 2, pp. 241-244, 18-21 Dec. 2011.
[12]. A. Mahmud, K. Z. Ahmed, M. Khan, "Detecting flames and insults in text", In: Proceedings of the Sixth International Conference on Natural Language Processing (2008).
[13]. A. H. Razavi, D. Inkpen, S. Uritsky, S. Matwin, "Offensive language detection using multi-level classification", In: Proceedings of the 23rd Canadian Conference on Artificial Intelligence, pp. 16-27 (2010).
[14]. G. Xiang, B. Fan, L. Wang, J. Hong, C. Rose, "Detecting offensive tweets via topical feature discovery over a large scale twitter corpus", In: Proc. CIKM, pp. 1980-1984, New York, NY, USA, 2012, ACM.
[15]. R. Kant, S. Sengamedu, K. Kumar, "Comment spam detection by sequence mining", In: WSDM, pp. 183-192, ACM, 2012.
[16]. V. S. Chavan, S. Shylaja, "Machine learning approach for detection of cyberaggressive comments by peers on social media network", In: Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 2354-2358, IEEE.
[17]. A. Das, S. Bandyopadhyay, "SentiWordNet for Bangla", February 23-24, In: Knowledge Sharing Event-4: Task 2: Building Electronic Dictionary, Mysore, 2010.
[18]. K. M. A. Hasan, S. Islam, G. M. Mashrur-E-Elahi, M. N. Izhar, "Sentiment Recognition from Bangla Text", DOI: 10.4018/978-1-4666-3970-6.ch014, pp. 1-10, 2010.
[19]. F. K. Ventirozos, I. Varlamis, G. Tsatsaronis, "Detecting aggressive behavior in discussion threads using text mining", 18th International Conference on Computational Linguistics and Intelligent Text Processing, June 2017.
[20]. S. Chowdhury, W. Chowdhury, "Performing sentiment analysis in Bangla microblog posts", In: Proceedings of International Conference on Informatics, Electronics & Vision, 2014, pp. 1-6.
[21]. A. Maus, "SVM approach to forum and comment moderation", Class Projects for CS (2009).
[22]. Google Translator: https://translate.google.com.
[23]. https://www.frontgatemedia.com/a-list-of-723-bad-words-to-blacklist-and-how-to-use-facebooks-moderation-tool/
[24]. https://github.com/wooorm/profanities/blob/HEAD/support.md
[25]. http://www.cs.cmu.edu/%7Ebiglou/resources/bad-words.txt
[26]. P. Harrington, "Machine Learning in Action", Manning Publications Co.
<FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU 
<FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b
903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
|
BENGALI FAKE NEWS DETECTION USING MACHINE LEARNING

ADITI BALO (ID: 152-15-6064), JAMIUL ISLAM (ID: 152-15-6147), AND ABDULLAH AL BAKI (ID: 152-15-6169)

This report is presented in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering.

Supervised by: Mr. Sheikh Abujar, Lecturer, Department of CSE, Daffodil International University

DAFFODIL INTERNATIONAL UNIVERSITY
DHAKA, BANGLADESH
MAY 2019

©Daffodil International University

ACKNOWLEDGEMENT

First, we express our heartiest thanks and gratitude to almighty GOD, whose divine blessing made it possible to complete this final-year project successfully.

We are truly grateful to, and wish to express our profound indebtedness to, Mr. Sheikh Abujar, Lecturer, Department of CSE, Daffodil International University, Dhaka. His deep knowledge and keen interest in the field of natural language processing helped us carry out this project. His endless patience, scholarly guidance, continual encouragement, constant and energetic supervision, constructive criticism, valuable advice, and willingness to read and correct many inferior drafts at every stage made it possible to complete this project.

We would also like to express our heartiest gratitude to Prof. Dr. Syed Akhter Hossain, Head, Department of CSE, for his kind help in finishing our project, and to the other faculty members and staff of the CSE department of Daffodil International University.

We would like to thank our coursemates at Daffodil International University, who took part in discussions while completing the coursework.

Finally, we must acknowledge with due respect the constant support and patience of our parents.

ABSTRACT

This project is based on natural language processing (NLP) techniques.
The aim of this project is to identify fake news from non-reputed news portals and to deliver genuine news to readers. Because fake news causes great mishaps among people, and in response to public demand, we built a model to detect fake news of all kinds. We collected data from reputed and non-reputed online news portals. The model is built on a bag-of-words representation (converting text into vectorized form), a tf-idf matrix (for feature extraction), and a RandomForestClassifier (for training and testing on the dataset). Using this model we also make predictions on outside data, testing whether a given news item is fake or real. Using word tokenization, we collected the maximum number of keywords into our dataset for both fake news and real news; the model produces its result by comparing against these dataset keywords. Our model achieves 86% accuracy.

TABLE OF CONTENTS

Board of Examiners
Declaration
Acknowledgements
Abstract

CHAPTER 1: INTRODUCTION
1.1 Introduction
1.2 Motivation
1.3 Research Question
1.4 Expected Output
1.5 Report Layout

CHAPTER 2: BACKGROUND
2.1 Introduction
2.2 Related Works
2.3 Research Summary
2.4 Scope of the Problem
2.5 Challenges

CHAPTER 3: RESEARCH METHODOLOGY
3.1 Introduction
3.2 Data Collection Procedure
3.3 Implementation Requirements

CHAPTER 4: EXPERIMENTAL RESULT AND DISCUSSION
4.1 Introduction
4.2 Experimental Result
4.3 Descriptive Analysis
4.4 Summary

CHAPTER 5: CONCLUSION AND FUTURE WORK
5.1 Summary
5.2 Conclusion
5.3 Future Work

REFERENCES
PLAGIARISM REPORT

LIST OF FIGURES

Figure 3.1: Data preparation, train & testing flowchart
Figure 3.2: Data Collection Procedure
Figure 3.3: Sample data
Figure 3.4: Sample text to numeric format
Figure 3.5: Random Forest tree
Figure 3.6: Confusion Matrix
Figure 3.7: Real News NLP Image
Figure 3.8: Fake News NLP Image
Figure 4.1: Real News Keywords Percentiles
Figure 4.2: Fake News Keywords Percentiles

LIST OF TABLES

Table 3.1: Data Category Table
Table 3.2: Textual data process using tf-idf
Table 3.3: Word Cloud Analysis Table
Table 4.1: Word Cloud Analysis Table
Table 5.1: Accuracy of Different Fake News Detection Models

CHAPTER 1
INTRODUCTION

1.1 Introduction

Nowadays the modern world provides many social media platforms and websites as our technology advances, and from these sites and platforms we get, and consume, different news. Most people tend to seek out news from these sites and platforms rather than from traditional news organizations. Traditional organizations provide real news, but from social platforms and media we often get false news. For this reason we can use machine learning techniques and natural language processing (NLP) to detect fake news. Through a machine learning approach, we teach the machine which news is fake and which is real, so that it becomes capable of identifying false news. Natural language processing is one of the best-known fields that allows computers to process and manipulate human language.
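As a concrete illustration of this approach, a minimal sketch of the kind of pipeline the abstract describes — tf-idf features feeding a RandomForestClassifier — might look as follows. The toy English corpus, labels, and parameter values here are invented placeholders, not the project's actual Bangla data or settings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in corpus; the real project uses Bangla news scraped from
# reputed and non-reputed online portals.
texts = [
    "government confirms new budget for schools",
    "miracle cure doctors hate discovered overnight",
    "city council approves road repair plan",
    "shocking secret celebrity scandal you wont believe",
]
labels = ["real", "fake", "real", "fake"]

# Bag-of-words representation weighted by tf-idf.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Random forest trained on the vectorized corpus.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, labels)

# Predict the label of an unseen headline.
prediction = clf.predict(
    vectorizer.transform(["shocking miracle scandal discovered"]))[0]
print(prediction)
```

In practice the real dataset would be split into training and test portions before measuring accuracy, as Chapter 4 of the report does.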
It makes any language easier for a machine to learn and read. Fake news detection for Bangla is more difficult and challenging than for other languages because the structure of Bangla fake news is very confusing. Although some researchers have improved fake news detection using various methods and algorithms, very few have done so for Bangla. There is still a lot of room for improvement.

1.2 Motivation

Fake news is news that creates a stir among people. Sometimes people become very frustrated by fake news. This type of news is like a social crime because it creates great mishaps among the mass of people. Nowadays it has become very difficult to identify whether news is fake or real. For this reason we studied several research models for fake news detection, and afterward we were able to build a model that detects whether any kind of news is fake or real. Using our model, one can find out what percentage of a news item is fake and what percentage is real. Many keywords have been included in this system's dataset to detect rapidly whether news is fake or real. We hope that with this model a reader can resolve doubts about online news used in his or her work or research. This should be a helpful guide for anyone wanting to verify doubtful news. In the future this model will be updated further to detect online news more fully. Finally, we believe online news readers will benefit from using this model.

1.3 Research Question

What is the effect of a supervised system on improving the accuracy of real and fake news detection? Is RandomForestClassifier a suitable supervised system? The purpose of this study is to find out the performance of a supervised system in the area of news detection. Supervised systems perform well in this field, whereas unsupervised systems still have to improve.

• How do we collect fresh, validated data?
• How do we store and arrange data?
• How do we remove unwanted text with regular expressions?
• How do we remove data noise and extra HTML tags?
• How do we remove punctuation and stop words?

1.4 Expected Output

In this system, a news title, category, description, and link are submitted as input. Some built-in functions then remove all types of noise, and the news is tested against our model. The model produces its result by comparing the keywords of fake news and real news; it stores keywords for each by training on the full fake-news and real-news dataset. So when any news item is tested, it is compared with the keywords of the stored dataset. When the model finds more fake-news keywords, it declares "It's a fake news"; conversely, when it finds more real-news keywords, it declares "It's a real news". Thus, by detecting fake-news or real-news keywords, our model gives a final decision on whether the news is real or fake.

1.5 Report Layout

We divide this report into five sections.
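The keyword-comparison decision rule described in Section 1.4 can be sketched in plain Python. The keyword sets below are invented English placeholders, not the project's stored Bangla dataset:

```python
# Sketch of the decision rule from Section 1.4: count how many known
# fake-news vs. real-news keywords appear in a submitted article, then
# report whichever side wins. Keyword sets are illustrative placeholders.
FAKE_KEYWORDS = {"shocking", "miracle", "secret", "unbelievable"}
REAL_KEYWORDS = {"ministry", "official", "report", "confirmed"}

def classify(text: str) -> str:
    tokens = text.lower().split()
    fake_hits = sum(t in FAKE_KEYWORDS for t in tokens)
    real_hits = sum(t in REAL_KEYWORDS for t in tokens)
    return "It's a fake news" if fake_hits > real_hits else "It's a real news"

print(classify("shocking secret miracle cure"))       # → It's a fake news
print(classify("ministry confirmed official report"))  # → It's a real news
```

In the actual system the keyword lists come from training on the labeled dataset rather than being hand-written.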
This is the first section, where we talk about the motivation for our work and the expected outcome. In the second section (Chapter 2) we discuss related works in this field, the scope of the problem, challenges, and so on. In the third section (Chapter 3) we discuss the data collection procedure and implementation. Section four (Chapter 4) presents the experimental results and analysis. Conclusion and future work are discussed in Chapter 5.

CHAPTER 2
BACKGROUND

2.1 Introduction

Using linguistic rules, stochastic rules, or both, many fake news detection models have been developed for different languages. In this part we discuss the research papers we used as references.

2.2 Related Works

Distorted news and "alternative facts" were not a problem in society two years ago, despite long-term deep changes in the news market [1]. Social concern about these kinds of news has been greatly accelerated by the term "fake news", coined by the US president-elect, Donald Trump, conveying its origins in the political arena. For example, among other fake news that emerged during the Trump campaign, one of the most popular items claimed that Pope Francis had endorsed Donald Trump for president of the US. The news piece was advanced by the website "Ending The Fed", managed by a Romanian youngster. The BBC [4] also points to the advancement of particular (often extreme) political causes as one of the main sources of fake news, defining fake news as false information deliberately circulated by those who have scant regard for the truth and act under the motivation of fostering political causes or obtaining revenue from online traffic. In this domain, Facebook has faced increasing criticism over its role in the 2016 US presidential election because it allowed the propagation of fake news disguised as news stories coming from unchecked websites. This spread of false information during the election cycle was so severe that Facebook was labelled a "dust cloud of nonsense" [7]. The fact is that the presidential election year showed how the lines had blurred between fact and speculation, with people profiting off the spread of fake news. There were more than 100 news sites producing pro-Trump content that were traced to Macedonia, according to a BuzzFeed News investigation [8]. Fact-checking approaches, in turn, depend on computerized verification of propositions made in news articles [9] to assess the truthfulness of their claims [11]. Knowledge bases such as DBpedia [2] have been used to query the Web in a structured way. The results of such queries can then be used to test whether other sources also contain information confirming the news claim [15].
Other works have used social network activity [10] around a particular news item to assess its credibility, for example by identifying tweets voicing skepticism about the truthfulness of a claim made in a news article [13]. Although fact-checking approaches are becoming increasingly powerful, a major drawback is that they rest on the premise that the information can be verified using external sources, for example FakeCheck.org and Snopes.com. However, this is not a straightforward task, as external sources may not be available, especially for newly published news items. Thus, the fact-checking approach is mainly useful for detecting deception in texts for which external, verifiable information is available. Also related to the present work is research on the automatic identification of deceptive text, which has explored domains such as forums, consumer review sites, online advertising, online dating, and crowdfunding platforms [12]. While fake news detection is closely related to deception detection [5], there are important differences between the two tasks. First, fake news producers mostly seek political or monetary gain as well as self-promotion, while deceivers have motivations that are more socially driven, such as self-protection, conflict or harm avoidance, impression management, or identity concealment. Second, they differ fundamentally in their objective and in the way they spread: fake news items are typically disseminated at larger scale through the Internet and online social networks, while deception is more specifically targeted at individuals. However, since both tasks deal with deceptive content, we conjecture that there are linguistic aspects that may be shared between them. We therefore focus on the linguistic approach and build upon an emerging body of research on computer-automated verbal deception detection.

2.3 Research Summary

A supervised system with KNN, Decision Tree, Naïve Bayes, Logistic Regression, or SVM performs satisfactorily for detecting fake news. But because of the huge dataset requirement, such a system is laborious to develop, and its performance depends on the size of the training data. Neural networks have also done this job pretty well, but they are still hard to develop. Then there are unsupervised systems: only limited work has been done in this direction for detection purposes, and it performs fairly well. But all of these approaches have a number of limitations, which leaves scope for improvement. So we developed a supervised detection system, powered by a modified version of a previously used algorithm [14]. In this system, each word is inspected for appropriate keywords, and some rules help the system decide on an appropriate keyword.

2.4 Scope of the Problem

The problem is part of an experiment, and there are several areas where problems can occur.

• Manually collecting data: We collected data without any software, and faced problems determining which data were fake and which were real; it was a herculean task for us.
• Arranging data categories: Some data were collected without a category.
In this data set category depending data title and description. Then all dataset arrange category wise. To arrange data into category wise • Detect Data: All data seems look like same for this detect reason detect data is so default which is real and which is fake • Reducing noise data: Reducing data noise use some build in function but some stop words are not reduce and this stop words is so effect for dataset. Because stop word key are common for real news data and fake news data. ©Daffodil International University 5 2.5 Challenge As this is the first time we are going to detecting the Bangla fake news on Bengali language so this may called a great challenge for us to this project. This is only reason is we didn’t find any full research paper in detecting Bangla news. As the grammar of Bengali language is too different to English grammar so to identify the Bangla fake news we have to build a new model that can detect Bangla news easily. This task is not so easy like to detecting English news.</s>
|
<s>Besides, the collection of data for this project was difficult for us because fake and real news carry no other distinguishing identity by which to detect them, so we had to research more to collect all types of data. CHAPTER 3 RESEARCH METHODOLOGY 3.1 Introduction The goal of this study is to find out how well a supervised system performs at fake news detection. To conduct the thesis successfully, the steps below were taken. • To build the dataset, collect data from different reputed and non-reputed news portals. • Arrange the data category-wise, with one class per dataset entry. • Prepare the data for training and testing using the re, string, and BeautifulSoup functions (remove regular-expression noise and extra HTML tags). • Read the data with pd.read_csv() and show sample rows with data.head(). • Use data.shape to show the rows and columns, and data.columns to show all dataset columns. • Use data.isnull().sum() to check for missing data. • Use data['Class'].value_counts() to count the real and fake items in the dataset. • Use data['Category'].value_counts() to show and count all categories. • Use data.dropna(inplace=True) to finish preparing the data: if any field is missing, this function traces it and drops that row. • Then remove all types of noise through noise reduction and normalization. • Encode the target variable to shape and partition the data. • Use bag of words to convert the text documents into corresponding numerical features. CountVectorizer, the most straightforward option, counts the number of times a token appears in the document and uses this value as its weight. Finally, an unsupervised Bangla POS tagger based on suffix analysis is proposed to increase the accuracy level. • Use tf-idf to convert the numeric data into matrix format, then construct the training and testing sets and train the model using RandomForestClassifier. 
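The cleaning step in the preparation list above can be sketched in a few lines of Python. This is a minimal, standard-library sketch: the thesis uses BeautifulSoup for HTML stripping, for which a regular-expression stand-in is shown here.

```python
import re

def clean_text(text):
    """Remove extra HTML tags and bracketed noise, as in the
    preparation steps above (regex stand-in for BeautifulSoup)."""
    text = re.sub(r'<[^>]+>', ' ', text)      # drop HTML tags
    text = re.sub(r'\[[^]]*\]', '', text)     # drop [bracketed] fragments
    text = re.sub(r'\s+', ' ', text).strip()  # normalize whitespace
    return text

print(clean_text('<p>Breaking [ad] news</p>'))  # Breaking news
```

The same function can be applied column-wise to the Description field with pandas' `apply` before vectorization.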
Flow Chart: Figure 3.1 shows, as a flow chart, how all the dataset preparation for training and testing is processed. In our process, we first collect text data containing fake and real news from different reputed and non-reputed news platforms. This data is converted into CSV form, readable by the machine, using Python's pandas library. The text data is then converted into numeric format using the bag-of-words process: we collect the frequency of each word and set a maximum feature count and document-frequency limits, which gives us a numeric format of the text data. We then calculate the TF and IDF values, because bag of words counts frequency only within a specific document, whereas TF and IDF account for frequency across all documents. The values are then converted into a matrix, from which the train and test data are created; we use 80% of the news for training and 20% for testing. Finally we apply the RandomForestClassifier algorithm, a supervised algorithm that classifies fake and real news from the training data, and from it we get our expected result. 3.2</s>
|
<s>Data collection Procedure: Our collected data is divided into five fields (Title, Category, Description, Link, Class), and the Class field takes two values (Real & Fake): the Real class has 256 items and the Fake class 244, for a total dataset of 500, of which 80% is used for training and 20% for testing. The Real-class data were collected from reputed Bangladeshi online news portals such as Daily Prothom Alo, Bangladesh Pratidin, Ittefaq, Daily KalerKantho, Daily NayaDiganta, bdnews24.com, etc. [Figure 3.1: Data preparation, train & testing flowchart: Text Data → Bag of Words (vector format) → TF-IDF matrix features (matrix format) → RandomForestClassifier (train and testing data)] The Fake-class data were collected from non-reputed online news sources such as Dhaka Channel, Khbor24.com, etc. Afterwards, we checked every item by Google scraping to see whether it is fake or real: we search by the news title and, from the results, estimate what percentage of the hits for that item come from fake news portals versus real news portals. We then made a Google form in which we took public opinion on all the news items, to measure how many people consider each item fake and how many consider it real. Based on all of the above, we made two classes of news: a fake news class and a real news class. Figure 3.2 shows the main checks and the flowchart of how we collect and vet the data; since some sites give real data and some give fake data, the flowchart divides news sites into two parts, reputed and non-reputed. While collecting data we also noted the news category, that is, whether a piece was published as a joke or as genuine reporting. Sometimes a news title is very attractive, but on reading the article we find something different; so, to classify news properly, the whole article should be analyzed, and this is called reading beyond. 
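The 500-item, 80/20 partition described above can be sketched in plain Python. This is a simplified stand-in for the scikit-learn train_test_split used in the implementation; the shuffle seed is an arbitrary assumption.

```python
import random

def split_80_20(items, seed=42):
    """Shuffle a labelled dataset and split it 80% train / 20% test."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.8)
    return shuffled[:cut], shuffled[cut:]

# 256 real + 244 fake = 500 labelled items, as in the dataset above.
dataset = [("real", i) for i in range(256)] + [("fake", i) for i in range(244)]
train_set, test_set = split_80_20(dataset)
print(len(train_set), len(test_set))  # 400 100
```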
[Figure 3.2: Data Collection Procedure; flowchart nodes: Consider the Source (Reputed / Non-Reputed), Read Beyond, Check Author, Supporting Source, Check Date, Is It a Joke?, Google Scraping, Human Opinion] • Consider the source: to collect valid data, the source must be considered, so we collect real data from reputed sources and fake data from non-reputed sources. • Read beyond: to collect valid data, the full article must be read or checked; the headline alone is not enough to judge the whole piece. • Check author: to decide whether an item is real or fake, the author must be checked, because some non-reputed authors publish fake news while reputed authors publish real news; this is very important for detecting and collecting valid data. • Supporting source: some reputed sites always publish real news and are very well known; using them, we cross-check the news of non-reputed sites.</s>
|
<s>We first collect a news item from a non-reputed site and then search for it on Google; if the same item has also been published by reputed sites, it may be considered real data, which gives better performance in collecting valid data. • Check date: while collecting news we pay attention to the date on which each item was published. • Is it a joke: while collecting data we check whether an item is a joke or funny piece, because this helps later when arranging the data category-wise. • Google scraping: to collect valid data we scrape Google using the news title: we search Google by the title and inspect the results. If the news is real, some reputed news sites show up; if it is fake, only non-reputed sites, often just a single site, show up, because fake news is actually published on individual, non-reputed sites. • Human opinion: to label fake and real news, we created a Google form showing every item of our dataset with two radio buttons and counted the human votes. 3.3 Implementation Requirement Bangla is a highly inflectional language, where each word may have more than one meaning based on inflection. For handling every word and removing noise we use some library functions, with which we remove regular-expression noise, HTML tags, and punctuation. For this we use the following requirements. Requirements • Python: Python is a programming language; as it is the leading language for machine learning, we use Python to implement our model. • Pandas: pandas is a Python library; we use it (imported as pd) for data manipulation and analysis. • NumPy: NumPy is a Python library. 
We use NumPy because it provides a high-performance multidimensional array and basic tools to compute with and manipulate these arrays. • Itertools: the Python itertools module is a collection of tools for handling iterators; simply put, iterators are data types that can be used in a for loop. • Matplotlib: Matplotlib is a plotting library for the Python programming language and its numerical-mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. • Sklearn: scikit-learn is used for learning and predicting. In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T); an example of an estimator is the class sklearn.svm.SVC, which implements support vector classification. Sample data table: Figure 3.3 shows a dataset sample obtained with data.head(). We arranged a dataset of 500 items; here we show just five of them as sample data. Each entry keeps the news title, news category, news description, news link, and class (the class indicates which news</s>
|
<s>is real and which is fake; we declare two classes, Fake for fake news and Real for real news). Figure 3.3: Sample data Table 3.1: Data Category Table Data Category: Table 3.1 shows all the categories of our dataset. We represent ten types of category and how many times each category is present, such as রাজনীতি, খেলাধুলা, আন্তর্জাতিক, জাতীয়, etc. To show this category table we call data['Category'].value_counts(). Category Name / Category Value: খেলাধুলা 86, রাজনীতি 68, বিনোদন 46, আন্তর্জাতিক 55, জাতীয় 73, শিক্ষাঙ্গন 11, বিজ্ঞান-প্রযুক্তি 29, অর্থনীতি 8, স্বাস্থ্য 5, তথ্যপ্রযুক্তি 2. Reducing noise on the dataset: fake news and real news share some mutual words, brackets, and tags, and because of these items detecting fake news is very difficult. For this reason we use a normalization function and import some Python libraries. First, BeautifulSoup(text.strip(), "lxml") is called on the text script and soup.get_text() returns only the text, removing everything else; the result is stored in soup. The next step, re.sub('\[[^]]*\]', '', text), removes the different brackets and returns only text. Through these steps we finally get noise-free text. After reducing noise the data is fully prepared for training and testing; without reducing noise the machine cannot detect keywords, so noise reduction is necessary for training and testing. Bag of Words The bag-of-words model is one of a series of techniques, from the field of computer science known as Natural Language Processing (NLP), for extracting features from text. The way it does this is by counting the frequency of words in a document. 
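The frequency counting just described can be sketched in plain Python. The document-frequency filtering below mirrors the idea behind the CountVectorizer min_df/max_df settings used in this work; it is a simplified stand-in, not the scikit-learn implementation itself.

```python
from collections import Counter

def bag_of_words(docs, min_df=1, max_df=1.0):
    """Count word frequencies per document, keeping only words whose
    document frequency lies in [min_df, max_df * n_docs] (the same
    idea as CountVectorizer's min_df / max_df thresholds)."""
    tokenized = [doc.split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))  # document frequency
    vocab = sorted(w for w, d in df.items() if min_df <= d <= max_df * len(docs))
    return vocab, [[toks.count(w) for w in vocab] for toks in tokenized]

docs = ["the car is driven on the road",
        "the truck is driven on the highway"]
vocab, counts = bag_of_words(docs)
print(vocab)      # ['car', 'driven', 'highway', 'is', 'on', 'road', 'the', 'truck']
print(counts[0])  # [1, 1, 0, 1, 1, 1, 2, 0]
```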
Bag of words is mainly used to obtain a numeric format: the same word can occur many times in a single script, and counting its frequency gives the numeric format. In Figure 3.4, ডাকসু occurs 3 times in the text script, নুরুল occurs 4 times, তিতি occurs 2 times, and ঢাকা বিশ্ববিদ্যালয় occurs 1 time. So, using the bag-of-words process, we get the frequency of each word, and we can represent the text script in numeric format. In this process we import the pickle library, which serializes Python objects such as the fitted vectorizer. We use CountVectorizer(max_features=1500, min_df=5, max_df=0.7) to define the frequency thresholds. Here df means document frequency: max_df removes words whose document frequency is too high (max_df=0.7 means words occurring in more than 70 percent of the documents are removed), while min_df removes low-frequency words (min_df=5 means words occurring in fewer than 5 documents are removed) [6]. TF-IDF The bag-of-words approach works fine for converting text to numbers. However, it has one drawback: it assigns a score to a word based on its occurrence in a particular</s>
|
<s>document, without taking into account the fact that the word might also have a high frequency of occurrence in other documents. TF-IDF resolves this issue by multiplying the term frequency of a word by its inverse document frequency. TF stands for "Term Frequency" and IDF stands for "Inverse Document Frequency" [6]. Figure 3.4: Simple text in numeric format. The term frequency is calculated as: TF = X / Z ……… (I), where X = number of occurrences of a word in the document and Z = total number of words in the document. The inverse document frequency is calculated as: IDF = log(V / N) ……… (II), where V = total number of documents and N = number of documents containing the word. Table 3.2: Textual data processed using TF-IDF (WORD; TF; IDF): the, 2/7, log(2/2) = 0; car, 1/7, log(2/1) = 0.3; truck, 0, log(2/1) = 0.3; is, 1/7, log(2/2) = 0; driven, 1/7, log(2/2) = 0; on, 1/7, log(2/2) = 0; road, 1/7, log(2/1) = 0.3; highway, 0, log(2/1) = 0.3. In Table 3.2 we compute TF and IDF for the sentence "the car is driven on the road" against a second document, "the truck is driven on the highway". For example, "car" has TF = 1/7, because it occurs 1 time in the sentence and the sentence has 7 words in total; its IDF is log(2/1) ≈ 0.3, because the total number of documents is 2 and "car" appears in 1 of them. Every word is calculated the same way. Using this method we get the TF and IDF values, which are then converted into an array forming our feature set; to apply this method we use tfidfconverter.fit_transform(X).toarray(). Classification Algorithms • Logistic Regression • Decision trees • Support vector machine (SVM) • Naïve Bayes • Random forest • Linear regression • Polynomial regression • SVM for regression All classification and regression algorithms come under supervised learning, so any of the above algorithms can be used. 
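The TF and IDF values of Table 3.2 can be reproduced in a few lines of Python (using log base 10, which matches the log(2/1) ≈ 0.3 entries in the table):

```python
import math

docs = ["the car is driven on the road",
        "the truck is driven on the highway"]
tokenized = [d.split() for d in docs]

def tf(word, doc_tokens):
    # Term frequency: occurrences of the word / total words in the document.
    return doc_tokens.count(word) / len(doc_tokens)

def idf(word, all_docs):
    # Inverse document frequency: log(total docs / docs containing the word).
    containing = sum(1 for toks in all_docs if word in toks)
    return math.log10(len(all_docs) / containing)

print(tf("car", tokenized[0]))          # 1/7 of the 7-word first document
print(round(idf("car", tokenized), 1))  # 0.3
print(round(idf("the", tokenized), 1))  # 0.0
```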
We tried all of the above algorithms, but we obtained the best result with the Random Forest classifier, so we use random forest to train and test the dataset. Random Forest Classifier: Precondition: a training set X := (a1, b1), . . . , (an, bn), features E, and number of trees in the forest T. 1 function RandomForest(X, T) 2 G ← ∅ 3 for i ∈ 1, . . . , T do 4 X(i) ← a bootstrap sample from X 5 rt ← RandomizedTreeLearn(X(i), E) 6 G ← G ∪ {rt} 7 end for 8 return G 9 end function 10 function RandomizedTreeLearn(X, E) 11 at each node: 12 e ← very small subset of E 13 split on the best feature in e 14 return the learned tree 15 end function Figure 3.5 shows a random forest classifier tree, and above we give the random forest pseudocode. Figure 3.5 shows many trees (tree-1 … tree-n): the random forest algorithm builds many trees from the vectorized data, and it is</s>
|
<s>this ensemble of many trees that makes the algorithm strong. Each tree in Figure 3.5 outputs a class (class A, class B, etc.), and the final class is obtained by majority voting over these per-tree classes. In the pseudocode, the training set of pairs (a, b) contains the fake and real news, and the total number of trees T corresponds to our 500 news items. First we assign a variable G holding the empty set. Then a loop runs until T, i.e., it traverses the 500 news items; on each pass, the variable rt stores a randomized tree learned from a bootstrap sample and the news features, rt is added to G, and G is finally returned. Within each tree, a small subset e of the features (such as cues for real or fake news) is chosen at each node and the best feature in it is used to split. Finally, the learned trees return the specific features that indicate which news is fake and which is real. Figure 3.5: Random Forest tree [17] Confusion Matrix: Figure 3.6 shows the confusion matrix of true and false positive/negative values. Here the total of correctly detected fake news is 37 and of correctly detected real news is 49, and the real/fake confusions are 12 and 2. The true positives represent correctly detected real news and the true negatives correctly detected fake news, while the false positives and false negatives represent the fake/real confusions. From this confusion matrix we get the accuracy as the sum of TP and TN divided by the sum of all positives and negatives. Table 3.3: Word Cloud Analysis Table (Step; Real; Fake): word tokenize, 5826, 11025; keyword, 20, 19. Word cloud analysis: Table 3.3 shows how many tokens our machine was able to collect for real and fake news: 5826 tokens for real news and 11025 tokens for fake news. The machine also collected some keywords (the maximum-frequency tokens stored by the machine): 20 keywords for real news and 19 keywords for fake news. 
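The accuracy formula described for Figure 3.6 can be checked directly from the four cell counts (49 correct real, 37 correct fake, and the 12 + 2 confusions; which of the two confusion counts is the false-positive side is an assumption here, and the accuracy is unaffected by it):

```python
# Confusion-matrix cells as reported for Figure 3.6.
tp = 49  # real news predicted real (true positives)
tn = 37  # fake news predicted fake (true negatives)
fp = 12  # fake news predicted real (assumed assignment of the 12/2 split)
fn = 2   # real news predicted fake

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.86
```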
Using these keywords our machine draws two images, one for the real news NLP (shown in Figure 3.7) and one for the fake news NLP (shown in Figure 3.8). NLP images: here we show the keyword images drawn from the dataset output. To draw these images we use the word-cloud analysis library functions: first we collect keywords using the word_tokenize function, then arrange the keywords by priority using the sorted function, then draw the image with the plt.figure, plt.imshow, plt.axis, and plt.tight_layout functions, and finally show it with plt.show(). Figure 3.6: Confusion Matrix Figure 3.7: Real News NLP Image Real-news keywords NLP image: Figure 3.7 shows some keywords our model collected from the real news; the keywords that appear</s>
|
<s>more highlighted are those for which the machine recorded the maximum counts. Figure 3.8: Fake News NLP Image Fake-news keywords NLP image: Figure 3.8 shows some keywords our model collected from the fake news; again, the more highlighted keywords are those with the maximum counts. CHAPTER 4 EXPERIMENTAL RESULT AND DISCUSSION 4.1 Introduction In this part we discuss our experimental results and show the accuracy table of our model. We also show the keyword bar charts and discuss the descriptive analysis and summary of the model. 4.2 Experimental results The efficiency of a system can be measured from its accuracy level. Our proposed algorithm is applied to the testing dataset, which is collected from different popular online newspapers; there are about 100 items in the testing dataset. Accuracy is measured as the ratio of the number of correctly classified items to the total number of items, and our system detects 86 news items out of 100 correctly. Our system's micro-average precision, recall, and F1-score are each 0.86 with support 100; its macro-average precision is 0.88 with recall 0.86 and F1-score 0.86; and its weighted-average precision is 0.87 with recall 0.86 and F1-score 0.86. The final recall and F1-score of 0.86 mean that our system obtains 86% accuracy, which is not a bad figure. Our system detects fake news and real news; the result is shown in Table 4.1. Table 4.1: Experiment Result (Total Dataset 500; Train 80%; Test 20%; Accuracy 86%). Experimental result: Table 4.1 shows the accuracy level of our model. The model collects keywords by training on the whole dataset; the number of real-news tokens is 5826 and of fake-news tokens 11025. Two bar charts are shown below on the basis of these keyword percentages. Keywords chart: for fake news and real news our system collects keywords and counts them, i.e., which keyword occurs most often for fake news and which for real news; the system stores the keyword counts and uses these keywords to detect fake and real news. Figures 4.1 and 4.2 below show the real-news and fake-news keyword bar charts. 
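The macro averages quoted above are consistent with the confusion-matrix counts reported for Figure 3.6, as a quick computation shows (per-class precision and recall, then their unweighted mean; the assignment of the 12 and 2 confusions to the two error directions is assumed):

```python
# Per-class counts from the confusion matrix (Figure 3.6):
real_correct, fake_correct = 49, 37
fake_as_real, real_as_fake = 12, 2  # assumed assignment of the 12/2 split

precision_real = real_correct / (real_correct + fake_as_real)  # 49/61
precision_fake = fake_correct / (fake_correct + real_as_fake)  # 37/39
recall_real = real_correct / (real_correct + real_as_fake)     # 49/51
recall_fake = fake_correct / (fake_correct + fake_as_real)     # 37/49

macro_precision = (precision_real + precision_fake) / 2
macro_recall = (recall_real + recall_fake) / 2
print(round(macro_precision, 2), round(macro_recall, 2))  # 0.88 0.86
```

The micro average, which pools all decisions before averaging, equals the plain accuracy of 0.86 in this single-label setting.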
Figure 4.1: Real News Keywords Percentiles. Real-news keywords chart: Figure 4.1 shows the keywords of real news. The machine collects these keywords as percentiles, and from the percentiles we draw the bar chart. For real news we collected 5826 tokens in total; for example, the keyword ও occurs about 500 times, এ 300 times, কনর 200 times, and এই 150 times, and the bar chart is drawn for 20 keywords. Figure 4.2: Fake News Keywords Percentiles. Fake-news keywords chart: Figure 4.2 shows the keywords of fake news, collected and drawn the same way from the keyword percentiles. For fake news we collected 11025 tokens in total; for example, the keyword এই occurs about 700 times, না 600 times, and তার 500 times, and the bar chart is drawn for 10 keywords. 4.3 Descriptive Analysis By analyzing the result, we identify some constraints, mentioned below. First, we</s>
|
<s>collected some news items without a category; for these we set the category based on the news title and description, so some categories are not set properly. A news item's result depends on its category, so a wrongly set category can change the model's result: if the category of an item is changed and the item is tested again, its result could change. Second, this model mainly works on the news description: it collects keywords from the description and compares them with the model's stored keywords. So when a news item is tested with only a portion of its text it may get one result, while the full text could give another; moreover, many grammatical rules are not applicable to such text. 4.4 Summary After the experiments it is seen that the model declares its result on the basis of the description and the category. To produce a result, the model searches category-wise keywords from the description; to find the keywords, it compares against the stored keywords included in the dataset. It then detects the fake-news and real-news keywords in the input news item and makes a decision based on the similarity or dissimilarity with the stored keywords, finally declaring whether the news is fake or real. CHAPTER 5 CONCLUSION AND FUTURE WORK 5.1 Summary After the experiments it is seen that the model bases its result on description and category: it searches category-wise keywords from the description, compares them with the stored keywords included in the dataset, detects the fake-news and real-news keywords in the input item, and decides from their similarity or dissimilarity whether the news is fake or real. 
Table 5.1: Accuracy of Different Fake News Detection Models (Model; Accuracy %): Rada Mihalcea [11], 76%; Hadeer Ahmed [6], 92%; Kai Shu [1], 80%; Our System, 86%. Comparing accuracy: Table 5.1 shows the accuracy of some published fake news detection models and compares them with our model. All of those models were built for English-language fake news, whereas we are the first team to work on Bangla fake news detection; our 86% accuracy is therefore a great achievement for our team. 5.2 Conclusion We have been able to build our model successfully, and it is now able to identify fake news and real news. To build this model, a dataset of about 500 items was made, of which 256 are real and 244 are fake. This raw data was turned into numeric format using bag of words, and a TF-IDF matrix was used to transfer the numeric data into matrix features. These matrix features were trained using RandomForestClassifier. Of the total</s>
|
<s>data, 80% is for training and the rest for testing. After testing the remaining 20% of the data we obtained 86% accuracy, which is higher than the other research work compared above. As this is the first research on Bengali fake news detection, the accuracy achieved by this model can be regarded as a successful result. Our model has some limitations: it obtains its result by counting data keywords, keeping separate keywords for real and fake data arranged category-wise, so if an outside news item is tested with merely a changed category, our model could give a different result. 5.3 Future Work In future, the aim of this model is to create a database in which all news keywords will be stored category-wise. A news alarm will be included in this model to report clearly on each applied news item. It will also show how many people consider the item fake and how many believe it is real, and it will justify its decision logically by comparing the item against strong online news portals that have already published it on their webpages. REFERENCES [1] Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, Huan Liu, "FakeNewsNet: A Data Repository with News Content, Social Context and Spatiotemporal Information for Studying Fake News on Social Media", The 9th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (submitted 5 Sep 2018 (v1), last revised 27 Mar 2019 (v3)). [2] Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, Huan Liu, "Fake News Detection on Social Media: A Data Mining Perspective", ACM SIGKDD Explorations Newsletter, Volume 19, Issue 1, June 2017. 
[3] Naman Singh, Tushar Sharma, Abha Thakral, Tanupriya Choudhury, "Detection of Fake Profile in Online Social Networks Using Machine Learning", 2018 International Conference on Advances in Computing and Communication Engineering (ICACCE). [4] Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, Rada Mihalcea, "Automatic Detection of Fake News", Computation and Language (cs.CL), submitted 23 Aug 2017. [5] Benjamin Riedel, Isabelle Augenstein, Georgios P. Spithourakis, Sebastian Riedel, "A simple but tough-to-beat baseline for the Fake News Challenge stance detection task" (submitted 11 Jul 2017 (v1), last revised 21 May 2018 (v2)). [6] Hadeer Ahmed, Issa Traore, Sherif Saad, "Detection of Online Fake News Using N-Gram Analysis and Machine Learning Techniques", conference paper, first online 11 October 2017. [7] Minyoung Huh, Andrew Liu, Andrew Owens, Alexei A. Efros, "Fighting Fake News: Image Splice Detection via Learned Self-Consistency", Computer Vision and Pattern Recognition (cs.CV). [8] Benjamin D. Horne, Sibel Adali, "Fake News Packs a Lot in Title, Uses Simpler, Repetitive Content in Text Body, More Similar to Satire than Real News", published at The 2nd International Workshop on News and Public Opinion at ICWSM. [9] Emerson F. Cardoso, Renato M. Silva, Tiago</s>
|
<s>A. Almeida, "Towards automatic filtering of fake reviews", Volume 309, 2 October 2018. [10] W. Ken Redekop, "Fake news, big data, and the opportunities and threats of targeted actions", Health Policy and Technology, 7(2), 113-114, 2018. [11] Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, Rada Mihalcea, "Automatic Detection of Fake News", University of Michigan; Department of Psychology, University of Amsterdam. [12] A. Peters, E. Tartari, N. Lotfinej, P. Parneix, D. Pittet, "Fighting the good fight: the fallout of fake news in infection prevention and why context matters", 2018, published by Elsevier Ltd on behalf of The Healthcare Infection Society. [13] S. Mo Jang, Tieming Geng, Jo-Yun Queenie Li, Ruofan Xia, Chin-Tser Huang, Hwalbin Kim, Jijun Tang, "A computational approach for examining the roots and spreading patterns of fake news: Evolution tree analysis", 2018, Elsevier Ltd. [14] Mauridhi Hery Purnomo, Surya Sumpeno, Esther Irawati Setiawan, Diana Purwitasari, "Keynote Speaker II: Biomedical Engineering Research in the Social Network Analysis Era: Stance Classification for Analysis of Hoax Medical News in Social Media", 2017, published by Elsevier B.V. [15] Monther Aldwairi, Ali Alwahedi, "Detecting Fake News in Social Media Networks", Volume 141, 2018, pages 215-222. [16] Álvaro Figueira, Luciana Oliveira, "The current state of fake news: challenges and opportunities", CENTERIS / ProjMAN / HCist 2017, 8-10 November 2017, Barcelona, Spain, Volume 121, pages 817-825. [17] Shlok Gilda, "Evaluating machine learning algorithms for fake news detection", 2017 IEEE 15th Student Conference on Research and Development (SCOReD). 
</s>
<s>POPULARITY ASSESSMENT OF CRICKET PLAYER BASED ON BANGLA TEXT IN SOCIAL MEDIA. Yeasir Arefin Tusher (ID: 152-15-5944), Md Rubel (ID: 152-15-6037), and Raisa Tabassum (ID: 152-15-6022). This report is presented in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering. Supervised by Md. Tarek Habib, Assistant Professor, Department of CSE, Daffodil International University. Co-supervised by Md. Sadekur Rahman, Assistant Professor, Department of CSE, Daffodil International University. DAFFODIL INTERNATIONAL UNIVERSITY, DHAKA, BANGLADESH. MAY 2019. © Daffodil International University ACKNOWLEDGEMENT First, we express our heartiest thanks and gratefulness to almighty God, whose divine blessing made it possible for us to complete the final-year project successfully. We are grateful to, and wish to express our profound indebtedness to, Md. Tarek Habib, Assistant Professor, Department of CSE, Daffodil International University, Dhaka; his deep knowledge of and keen interest in the field of natural language processing enabled us to carry out this project. His endless patience, scholarly guidance, continual encouragement, constant and energetic supervision, constructive criticism, valuable advice, and reading and correcting of many inferior drafts at every stage have made it possible to complete this project. We would like to express our heartiest gratitude to Dr. Syed Akhter Hossain, Professor and Head, Department of CSE, for his kind help in finishing our project, and also to the other faculty members and the staff of the CSE department of Daffodil International University. We would like to thank our course mates at Daffodil International University, who took part in discussions while completing the course work.
Finally, we must acknowledge with due respect the constant support and patience of our parents. ABSTRACT Social media has become a central field of data mining in the last decade, because the use of internet-based social media has increased very rapidly. In this research we focus on the popular social media platform Twitter, where users can post their opinion, sentiment, or expression within 140 characters on various topics of interest, such as politics, entertainment, sports, and lifestyle. Twitter is particularly popular in fields such as sentiment analysis, emotion detection, and trend detection, and it provides APIs for data miners that are exceptionally helpful for obtaining user data for analysis. In our research, we have collected tweets, written in the Bengali language, about some cricket players of the Bangladesh national cricket team. We construct a model to predict whether a tweet is positive or negative and compute each player's popularity by counting the total number of positive tweets about that player. However, sentiment analysis of Bengali text is a difficult and complicated task due to the lack of language resources. We used various machine learning algorithms and compared them with each other to get the best accuracy. TABLE OF CONTENTS: Board of examiners; Declaration;</s>
<s>Acknowledgements; Abstract. CHAPTER 1: INTRODUCTION (1.1 Introduction; 1.2 Motivation; 1.3 Rationale of the Study; 1.4 Research Questions; 1.5 Expected Output; 1.6 Report Layout). CHAPTER 2: BACKGROUND (2.1 Introduction; 2.2 Related Works; 2.3 Research Summary; 2.4 Scopes of the Problem). CHAPTER 3: RESEARCH METHODOLOGY (3.1 Introduction; 3.2 Research Subject and Instrumentation; 3.3 Data Collection Procedure; 3.4 Statistical Analysis; 3.5 Implementation Requirements). CHAPTER 4: EXPERIMENTAL RESULTS AND DISCUSSION (4.1 Introduction; 4.2 Experimental Results; 4.3 Descriptive Analysis; 4.4 Summary). CHAPTER 5: SUMMARY, CONCLUSION, RECOMMENDATION AND IMPLICATION FOR FUTURE RESEARCH (5.1 Summary of the Study; 5.2 Conclusions; 5.3 Recommendations; 5.4 Implication for Further Study). APPENDIX A. REFERENCES. LIST OF FIGURES: Fig 3.3.1 TwitterPiCollector; Fig 3.4.1; Fig 3.4.2; Fig 3.5.1 Pre-Processing Steps; Fig 3.5.2.1 Word Embedding; Fig 3.5.2.2 Padding; Fig 4.2.1 Confusion matrix for Logistic Regression; Fig 4.2.2 Confusion matrix of MNB; Fig 4.2.3 1D Convolution; Fig 4.2.4 Model Architecture; Fig 4.2.5 Accuracy, Loss, Confusion matrix; Fig 4.3.1 Precision, Recall, F-measure Score; Fig 4.3.2 Popularity Score. LIST OF TABLES: Table 3.5.1.1 Data Cleaning and Pre-Processing; Table 3.5.2.1 Ratio of Training and Testing; Table 3.5.2.2 Vectorize without N-gram; Table 3.5.2.3 Vectorize with Bi-gram; Table 4.2.1 Accuracy for Logistic Regression; Table 4.2.2 Accuracy of MNB; Table 4.2.3 Accuracy; Table 4.3.1 Model Accuracy. CHAPTER 1 INTRODUCTION 1.1 Introduction Gathering information is an important part of our daily life.
Daily we search our news feeds for what is new today and hunt for interesting topics on social media. Social media has been considered the central region for information mining, as it contains user information in the form of comments, reviews, posts, likes, and dislikes; other platforms such as blogs and forums likewise come with heaps of user-generated data. The information on social media incorporates the feelings of the users, for example how positively or negatively a user writes comments or reviews; this positivity and negativity are the critical attributes portraying a user's mood and emotions. Sentiment analysis is an important field of natural language processing in the present world, and its popularity in social media analysis is growing very rapidly. Many works have already been done in this field; for example, the business sector uses sentiment analysis for product reviews, working on pieces of text from social media posts and comments, which reduces the time complexity for an organization. Sentiment analysis in natural language processing is a process by which we can analyze a person's opinion, emotion, and attitude. It is done by a sentiment analyzer tool built on machine learning algorithms. Almost every sentence has some specific words which express whether the sentence is</s>
<s>positive, negative, or neutral; sentiment analysis tools detect the polarity (i.e., positivity or negativity) of a string of text, and thus they provide us with sentiment about an individual. In the present world, Bangla is spoken as the first language by almost 200 million people, of whom 160 million are Bangladeshi. There are approximately 3 billion people using social media worldwide; in Bangladesh, 95.13% of social media users use Facebook and 1.35% use Twitter. Most Bangladeshis readily express and share their thoughts and opinions on microblogging and social networking sites like Facebook and Twitter by writing blogs, posts, and comments that contain a person's point of view in the Bangla language. Though many works have been done with product reviews, and popularity measurement of a specific individual from social media has been done for English, it has not yet been done for Bangla; this is the main purpose of our research. Popularity measurement is the process of finding the popularity of an individual, which demonstrates how popular the individual is in a particular field; the individual could be an athlete, a dancer, a singer, a politician, and so on. Every person has their own opinion about some popular figures, and they like to share their opinions on social media, which thus contains positive and negative opinions about the popular ones. So, in our research, we demonstrate a model that can rank the popularity of some cricket players. 1.2 Motivation Cricket is the most popular game in Bangladesh. There is exhilaration when a tournament is played, and people clap their hands when a player scores high and takes wickets; such players attract people's attention, but the opposite happens when a player cannot do well. Sometimes we can see rankings of players by their popularity in the newspaper. But this popularity changes over time, and our point here is to take advantage of that.
This research is based on finding the popularity of a player at a specific time. People share their immediate thoughts on social media and micro-blogging sites such as Facebook, Twitter, and BlogSpot. Sentiment analysis is a long-established area of natural language processing; many works have been done in this field, but not in Bengali. So, we were motivated to take part in enriching the Bengali language. 1.3 Rationale of the Study Datasets for Bengali text are rare. There are some available data, but they are outdated, and moreover no work has been done to find the popularity of a specific person, especially Bangladeshi cricket players. To find popularity at a specific time we need updated data. Our first goal is to build a tool which will automatically give us updated data and extract the sentiment, positive or negative, from them; our second goal is to find the popularity of some cricket players of the Bangladesh national cricket team. As no work has been done on finding popularity, our</s>
<s>work in this research will be slightly different from, and unique among, others. 1.4 Research Questions Every large problem is solved by solving its smaller parts, and complex things are built by combining the simplest objects. To understand the actual problem of this research we first need to understand its subproblems, so in this section we address them. We have selected the following questions, which will be answered in this paper step by step. 1.4.1 Data collection The gist of all our work is the data. So, what are the data and how do we collect them? This is the most important and challenging task. 1.4.2 Features of data After getting the data, we need to prepare it for later work and find some features for machines to work with. So, how do we prepare the data and what features do we extract from it? 1.4.3 The model There are many ways to go, and machine learning is one of them. How much data do we give the machine to learn from, and how much to test with? Another question is which algorithm gives us the best accuracy. 1.4.4 Compare results To find the most accurate result, we need to apply different algorithms and compare them with one another, which is quite a difficult task. 1.5 Expected Output The main motive of our research is to find the reputation of persons, i.e., Bangladeshi cricket players, and also to produce a ranking by popularity. Our first target is to trace whether a sentence has positive or negative value. The model's efficiency will be tested through the use of various kinds of machine learning algorithms, and we will also test how well the algorithms respond to testing data. Our research has some sub-objectives, such as obtaining the most accurate result. 1.6 Report Layout This paper is organized in such a way as to help the reader easily understand the actual goal and the working procedure.
It is written following the standard project reporting template of Daffodil International University and is structured into five chapters. Chapter 1, this chapter, discusses our research motivation, the rationale of the study, the research questions, and the expected outcome. Chapter 2 includes the background of Bangla language processing, a concise history of sentiment identification, the scope of the problem, and its challenges. Chapter 3 gives detailed information about our research methodology and the techniques we used, including the data collection process and the methods for determining the sentiment of a sentence; it also provides a statistical analysis of our research. Chapter 4 presents the experimental results of the applied algorithms and techniques, as well as a descriptive analysis of our work. Finally, Chapter 5 discusses limitations, conclusions, future work, and a summary of the research. CHAPTER 2 BACKGROUND 2.1 Introduction Social media has become a core field of data mining in the last decade, because the</s>
<s>use of social media has increased very rapidly. In this research we focus on the most popular microblogging platform, Twitter, where users can post their opinion, sentiment, or expression within 140 characters on various topics of interest, such as politics, entertainment, sports, and lifestyle. Twitter is very popular in fields such as sentiment analysis, emotion detection, and trend detection, and it provides APIs that are very useful for data miners gathering user data for analysis. In our research, we have collected tweets, written in the Bengali language, about some cricket players of the Bangladesh national cricket team. We build a model to predict whether a tweet is positive or negative and compute each player's popularity by calculating the total number of positive tweets about that player. The proposed system extracts tweets over a specific time series in order to find the featured players. However, sentiment analysis of Bengali text is a difficult and complicated task due to the lack of language resources. To overcome this complexity and find the popularity of a single player, we first labeled each tweet as positive or neutral; as our main goal is to determine popularity, we merged all other sentiments into the neutral class. Finally, we summarized the output as percentage values and sorted the final result. 2.2 Related Works Previous work on English text mining has focused on emotion detection, sentiment analysis, trend detection, and similar tasks. In 2009, Paul Ferguson and his group worked on paragraph-level analysis to increase the accuracy of document-level sentiment analysis [1]. More recently, two researchers, Pamungkas and Putri (2017), worked on word sense disambiguation for lexicon-based sentiment analysis [2]. Up to now, a great deal of work has also been done on domain-based analysis using word lexicons.
For instance, Cruz Laura and his group (2017) wrote a chapter about applying a lexical library to a fixed domain [3]. Devina Ekawati and Masayu Leylia Khodra (2017) worked on aspect-based review analysis [4]. Sentence-based sentiment analysis was done by Alexandre Trilla and Francesc Alías (2013), who tried to apply it to improve precision in their text-to-speech program [5]. Parinya Sanguansat (2016) tried to apply paragraph-to-vector methods to business data analysis from web-based social media [6]. Huy Nguyen and Minh-Le Nguyen (2017) worked on sentence-level sentiment analysis on different social media such as Twitter, focusing on improving the accuracy of sentence-level analysis [7]; they proposed a new technique for improving their results. Researcher Mike Thelwall (2016) worked on the sentiment strength detection program named SentiStrength, which was developed during the CyberEmotions project [8]. It was created to identify the strength of sentiments expressed in social web texts; in his work, he described how SentiStrength operates using a lexical approach together with its own rules and terms. Researcher Soumi Dutta and her group (2015) worked on sentiment analysis of online content using WordNet [9]. S. M. Mazharul Hoque Chowdhury and his group (2019) analyzed paragraph-level</s>
<s>sentiment with a step-by-step process using a lexicon-based approach. They tried to analyze different types of data using existing methods created by other researchers [10]. In their study, they proposed a strategy using WordNet to distinguish sentiment across different social media. It can therefore be said that a great deal of research has been done, and is still in progress, to increase precision, develop new solutions, and build new tools. Much research has been done on the English language and some other languages, and previous work on Bengali text mining has likewise focused on emotion detection, sentiment analysis, and trend detection. The rest of this section covers our main focus area, which is the Bangla language. Md. Al-Amin and his group proposed a new approach for extracting sentiment classification from 1600 Bengali comments from blogs and articles. They used word embedding and the word2vec approach to obtain word-level sentiment, and the unification of the two proposed approaches gives them 75.5% accuracy [11]. Kamal Sarkar and Mandira Bhowmick tried various combinations of n-gram and SentiWordNet features to find the best combination of features, and their observations show that an SVM classifier trained with unigram and SentiWordNet features performed best [12]. Sanjida Akter and Muhammad Tareq Aziz applied a hybrid machine learning and lexicon-based approach to classify sentiment in Facebook group posts and comments [13]. Animesh Kumar Paul and his team first applied Multinomial Naïve Bayes (MNB) and negation handling to Bangla data; they used Amazon's watches dataset, which contains 68,356 reviews, for both English and Bangla [14]. Shaika Chowdhury and her team constructed a semi-supervised bootstrapping approach to develop a training corpus which does not need manual annotation; their binary classifier achieved very good accuracy using various combinations of features [15]. Md.
Asimuzzaman and his team used Bangla tweets as their training data and a supervised Adaptive Neuro-Fuzzy Inference System to predict sentiment [16]. Kamal Sarkar created CNN (convolutional neural network) and DBN (deep belief network) based models on 1000 and 6225 Bangla words for comparison with other machine learning approaches [17]. Their experiments revealed that the performance of the proposed CNN-based system is better than that of the implemented DBN-based system and of some existing Bengali sentiment polarity detection systems. In trend detection, although a few works have been done focusing on the English language, no work had previously been done on the Bangla language; so, in this part we describe the appearance of trend detection in English in previous work. Dhananjay C. Dandapat and his team analyzed popularity detection of television media in business intelligence using tweet data [18]. They used the TF-IDF weighting scheme to calculate the distance between tweets and clusters and then picked the clusters with the highest scores. 2.3 Research Summary Reviewing most of the works in Section 2.2 above, we can see that a lot of sentiment analysis work on social tweets has focused on English text</s>
<s>mining, but not enough on Bangla text mining, and our main focus is sentiment analysis of Bangla tweets. Most of the existing works deal with tweets, Facebook posts, comments, and the like. One of the studies constructed a corpus from Bengali comments on blogs and articles and applied a word embedding method to this corpus [11]. Another work applied machine learning algorithms to find sentiment in Bengali-language tweets [12]. One study emphasized the grammatical and semantic patterns of sentences for sentiment analysis in Bangla microblogs [16]. Another compared the performance of a proposed CNN-based Bengali sentiment polarity detection model with a DBN-based model [17]. All of this work mines text and compares one method with another on anonymous tweets, which means no work has been done in a specific area such as trend detection in Bangla. Research is an organized method of finding solutions to existing problems, or to problems that no one has dealt with before; it may be used for solving a new problem, or it may be the extension of previous work in a particular field. Our research is on recognizing opinion polarity and sentiment in Bengali text, which belongs to NLP (natural language processing). AI (artificial intelligence) is challenging humans to exceed individual performance. In this research, we studied cricketers to detect how popular a cricketer is over a specific time period. We collected continuously updated data from tweets mentioning each cricketer's name and then labeled them by assigning a polarity to each sentence. Each data item contains some specific words which indicate the degree of popularity. This process was carried out with several proposed algorithms, and finally we picked the algorithm that satisfied us with the highest accuracy.
2.4 Scopes of the Problem Opinion mining from text is fundamentally a content-based classification task that draws on natural language processing as well as machine learning. Opinion mining is an intriguing field of study. Nowadays it is adding value to business: because sentiment analysis bases its results on factors that are so inherently human, it is bound to become one of the major drivers of many business decisions in the future. Improved precision and consistency in text mining techniques can overcome the present problems. Right now, as the next wave of knowledge discovery, text analysis is achieving high business value. In this research, we analyze Bengali text from social media posts to find the related sentiment of each sentence, such as positive or negative. To detect the popularity of each player, we then try to discover the particular opinion of each sentence, i.e., whether an individual blogger's comment about an individual cricketer is positive, negative, or neutral. Every day we hunt for exciting news and try to find out the trendy talk about cricket. Because of this hunting through news feeds on electronic</s>
<s>media, we regularly search for new trends by following some micro-bloggers' pages that consist of opinions. Not only will betting affect the relationship between associations, betting establishments, data providers, and the administration, it is already changing the way fans can interact with games. This also makes it easy for a new fan to understand quickly which player is outstanding in a particular time period, and to choose a player to support during the gaming period as an inspiration. It is also helpful for business sponsors to detect which player is the right one for them to sponsor, and in this way the player can reach the top position in market value. CHAPTER 3 RESEARCH METHODOLOGY 3.1 Introduction This study aims at finding some cricket players' popularity by predicting sentiment in tweets written in Bengali text. This chapter contains a detailed outline of how the data were collected and how sentiment is predicted from them. It describes the instrument used to extract sentiment from the Bengali text of tweets and the procedures followed to carry out this data extraction. In this chapter we also present the methods used to analyze the data. Lastly, the implementation issues and requirements that arose in the process are discussed. 3.2 Research Subject and Instrumentation This problem is in the domain of natural language processing, although the complete problem extends to popularity measurement of cricket players. The broad scope of the problem is what makes it particularly special of its kind. The problem can be tackled in every available language; however, our research work is confined to one language only, Bengali. The reason behind picking this language is the need for progress in natural language processing for it: various works have been done previously, but it needs more attention, as it is one of the most widely spoken languages in the world.
Thus we picked Bengali as the chosen language for this research work. For our research purposes, we gathered 2952 Bengali sentences from Twitter containing different sentiments. To bring out those tweets, we used a Raspberry Pi running a Python script that extracts Twitter data using an API key. Our work is to identify the positivity or negativity of sentences by applying sentiment analysis techniques. Some well-performing algorithms, such as Naïve Bayes, logistic regression, deep neural networks, and convolutional neural networks, are used for sentiment analysis. 3.3 Data Collection Procedure Internet-based social sites like Twitter and Facebook are a noteworthy hub for users to express their sentiments on the web. Sentiment analysis, also called opinion mining, involves building a framework to collect and analyze opinions about a product made in blog entries, comments, or reviews. We analyzed some cricket players' popularity through their fans' tweets. Twitter is very popular for providing APIs to data miners, which are exceptionally valuable for collecting users' tweets for research. We used Twitter's Streaming API to collect the</s>
<s>tweets about 13 different players of the Bangladesh cricket team. Twitter's Streaming API is a push of data as tweets occur in real time; it is used for scraping real-time data. In our research, scraping real-time data is important because we measure the popularity of a player at a specific time. We wrote a Python script that searches for specific players' names on Twitter with the help of a Streaming API key. When people wrote about those player names, the API pushed the tweets to our script, and the data were saved to disk in separate CSV files. We searched for thirteen different players' names in the Bengali language from 1st October 2018 until 31st December 2018. The script ran continuously for three months on a Raspberry Pi running a Debian system; we named this bot "TwitterPiCollector". The saved tweets were stored in 13 different CSV files. Only a few people write tweets in the Bengali language, so we did not get a huge amount of data: we obtained a total of 2952 tweets about the 13 different player names. Figure 3.3.1 shows the architecture of TwitterPiCollector. Fig 3.3.1: TwitterPiCollector In our dataset, we had a large number of positive data, and the rest carried negative, neutral, or other sentiment. As our main goal is the determination of players' popularity, we need only the positive data; therefore, we labeled positive as 1 and considered all the rest neutral, labeled 0. Label annotation was done and verified by a number of people. 3.4 Statistical Analysis We ran a Python script with the Twitter Streaming API on a Raspberry Pi to gather data, querying for the names of 13 different players of the Bangladesh cricket team in the Bengali language, from 1 October 2018 until 31 December 2018. The Python script extracted live tweets in real time whenever anyone wrote any of the players' names on Twitter.
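The saving step of this pipeline can be sketched as follows. This is a hedged illustration, not the thesis's actual script: the real TwitterPiCollector received tweets through Tweepy and Twitter's Streaming API (which needs credentials), so the sketch below simulates only the routing of an incoming tweet's text into per-player CSV files. The player list, written here in standard Unicode Bangla, and the function name are illustrative assumptions.

```python
import csv
from pathlib import Path

# Illustrative subset -- the real bot tracked 13 Bengali player names.
PLAYERS = ["মাশরাফি বিন মর্তুজা", "তামিম ইকবাল", "সাকিব আল হাসান"]

def route_tweet(text: str, out_dir: Path) -> list[str]:
    """Append the tweet text to one CSV file per player it mentions,
    mimicking how TwitterPiCollector stored tweets per player."""
    matched = [p for p in PLAYERS if p in text]
    for player in matched:
        path = out_dir / f"{player}.csv"
        with path.open("a", newline="", encoding="utf-8") as f:
            csv.writer(f).writerow([text])
    return matched
```

In the real collector, a routine like this would be called from the Streaming API callback for every pushed tweet.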
Now we have exactly 2952 data points for our research. The per-player data distribution is given in Figures 3.4.1 and 3.4.2. [Figures 3.4.1 and 3.4.2: data distribution for each player, showing positive versus neutral tweets as counts and as percentages; the chart values are not recoverable from the extracted text.] From those figures, we see that Mashrafee Murtaza (মাশরািফ িবন মতু+জা) is the most featured player on social media: we got 617 tweets about him, of which almost 280, or 45.3%, are positive, while the rest carry other sentiment (e.g., negative or neutral). There were also 439 tweets about Tamim Iqbal (তািমম ইকবাল), of which 291, or 66.29%, are positive. Soumya Sarker (েসৗম3 সরকার), Mahmudullah Riyad (মাহমুদু6াহ িরয়াদ), Imrul Kayes (ইম8ল কােয়স), and Taskin Ahmed (তাসিকন আহেমদ) also have more than 50% positive reviews among the tweets about them.</s>
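The per-player percentages quoted above can be recomputed directly from the tweet counts. The counts come from Table 3.5.2.1; the helper name and the dictionary layout are our own illustration.

```python
# (total tweets, positive tweets) per player, taken from Table 3.5.2.1.
counts = {
    "Mashrafee Murtaza": (617, 280),
    "Tamim Iqbal": (439, 291),
}

def positive_share(total: int, positive: int) -> float:
    """Positive tweets as a percentage of all tweets about a player."""
    return round(100 * positive / total, 2)

for name, (total, pos) in counts.items():
    print(f"{name}: {positive_share(total, pos)}%")
```

This reproduces 66.29% for Tamim Iqbal and about 45.4% for Mashrafee Murtaza (the text rounds the latter down to 45.3%).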
<s>We also see in Figure 3.4.1 that Nasir Hossain (নািসর েহাসাইন) and Sabbir Rahman (সাি;র রহমান) have the lowest peaks in the chart, which means people do not talk about them much on social media. This chart provides us with a summary popularity assessment: as human beings, we can decide from it which player is most popular in this medium, but a computer cannot understand and decide by itself without prior knowledge. So, we first need to train the computer on our model, which is based on sentiment analysis; after that, it can understand and decide how to interact with our datasets. 3.5 Implementation Requirements First of all, the data were gathered through the Twitter API using a Python script and stored in CSV (comma-separated values) files. For this purpose we used a Raspberry Pi (running a Debian system and connected to the internet) and Tweepy (to access the Twitter API). We labeled the data from the CSV files using Microsoft Office Excel 2016. We built several sentiment analysis models on the dataset using Keras (which is capable of running on top of TensorFlow) and the scikit-learn library in Python. We also used keras.preprocessing, NumPy, Pandas, and Matplotlib for pre-processing, visualizing, and analyzing the dataset. We assessed the models by computing diverse quality measures such as accuracy, precision, recall, and F-measure using scikit-learn. All the work was done on a 64-bit Windows 10 machine. To document our approach, we maintained a few steps; Figure 3.5.1 shows the pre-processing steps of the research methodology. Fig 3.5.1: Pre-Processing Steps 3.5.1 Data Cleaning and Pre-Processing Noisy and conflicting words are eliminated, and multiple data sources are merged. Twitter data also contain a huge number of links and stop words. We removed them, but we did not remove emoticons, because they also carry sentiment.
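A minimal sketch of this cleaning stage is shown below. The thesis does not give its exact patterns, so these regular expressions are assumptions that approximate the steps described (drop links, retweet markers, and hashtags; collapse whitespace; keep emoticons); stop-word removal is omitted here.

```python
import re

URL = re.compile(r"https?://\S+")       # links carry no sentiment
RETWEET = re.compile(r"^RT @\w+:\s*")   # retweet prefix
HASHTAG = re.compile(r"#\w+")           # hashtags are dropped
# Rough emoticon range; emoticons are KEPT because they carry sentiment.
EMOTICON = re.compile(r"([\U0001F300-\U0001FAFF\u2600-\u27BF])")

def clean_tweet(text: str) -> str:
    text = RETWEET.sub("", text)
    text = URL.sub("", text)
    text = HASHTAG.sub("", text)
    # Pad emoticons with spaces so they become separate tokens later.
    text = EMOTICON.sub(r" \1 ", text)
    return re.sub(r"\s+", " ", text).strip()
```

The emoticon padding mirrors the observation below that emoticons written without surrounding spaces would otherwise confuse feature selection.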
Some tweets were a mixture of Bengali and English; we converted the English words to Bengali manually. Hashtags and retweet markers were deleted, because they do not carry any sentiment. Extra spaces, newlines, and punctuation marks were also deleted. Table 3.5.1.1: Data Cleaning and Pre-Processing (each row shows a tweet before and after cleaning)
BEFORE: ইম8ল কােয়স 💪" #BANvZIM #Cricket | AFTER: ইম8ল কােয়স 💪
BEFORE: RT @dalim1975: কাল অি= মাশরািফ িছেলা ১৬ েকািট বাংলােদশীর সDদ, আর আজ েথেক মাশরািফ ধুই ৫ % আওয়ামীলীেগর। #MashrafiBinMortuza #Politics #Bdpolitics | AFTER: কাল অি= মাশরািফ িছেলা ১৬ েকািট বাংলােদশীর সDদ, আর আজ েথেক মাশরািফ ধুই ৫ % আওয়ামীলীেগর
BEFORE: সরকােরর সহায়তায় উইেকট েপেলন মাশরািফ। | AFTER: সরকােরর সহায়তায় উইেকট েপেলন মাশরািফ
BEFORE: েটN িOেকট ইিতহােসর Pথম েবালার িহসােব েরকেড+র সামেন দািঁড়েয় সািকব আল হাসান https://t.co/WG5FbjK1dJ | AFTER: েটN িOেকট ইিতহােসর Pথম েবালার িহসােব েরকেড+র সামেন দাঁিড়েয় সািকব আল হাসান
BEFORE: RT @dalim1975: কাল অি= মাশরািফ িছেলা ১৬ েকািট বাংলােদশীর সDদ, আর আজ েথেক মাশরািফ ধুই ৫ % আওয়ামীলীেগর। | AFTER: কাল অি= মাশরািফ িছেলা ১৬ েকািট বাংলােদশীর সDদ, আর আজ েথেক মাশরািফ ধুই ৫ % আওয়ামীলীেগর।
BEFORE: িমরাজ ইউ িবউিট 😍 অসাধারন একজন িফUার, িলটন, তািমম ভাল দুইটা ক3াচ ধরেছ, েমাXািফজ তার িনজY ফেম+ িফের আসেল ৩-৪টা উইেকট তার</s>
পাওয়া িনয়ম হেয় দািড়েয়েছ, ম3াশ ও ভাল েবািলং করেছ, সািকেবর ে\ক]টা অি^র, সবেচেয় আ_য+জনক িমরাজ shimron hetmyer েক বারবার আউট করেছ।
AFTER CLEANING: িমরাজ ইউ িবউিট 😍 অসাধারন একজন িফUার, িলটন, তািমম ভাল দুইটা ক3াচ ধরেছ, েমাXািফজ তার িনজY ফেম+ িফের আসেল ৩-৪টা উইেকট তার পাওয়া িনয়ম হেয় দািড়েয়েছ, ম3াশ ও ভাল েবািলং করেছ, সািকেবর ে\ক]টা অি^র, সবেচেয় আ_য+জনক িমরাজ shimron hetmyer েক বারবার আউট করেছ ।

BEFORE CLEANING: তািমম েসৗম3 িলটেনর উপর চরম েaেপেছন ব3ািটং েকাচ!
AFTER CLEANING: তািমম েসৗম3 িলটেনর উপর চরম েaেপেছন ব3ািটং েকাচ!

Table 3.5.1.1 shows some examples of data cleaning and pre-processing. Sometimes people use emoticons without any surrounding space, which is problematic for feature selection, so we added extra space around those emoticons.

3.5.2 Splitting Training and Test Data

After cleaning and pre-processing, we separated the positive and neutral data for each player from 13 separate files containing labeled data. We then took 20% of the positive and neutral data from each of those files as testing data. Once the train/test split was done, we combined all training data into one CSV file, and did the same for the test sets. This procedure preserves the training/testing ratio of positive and neutral data for each player.
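The per-player split described above (20% of the positive rows and 20% of the neutral rows of each player's file held out for testing, so the class ratio is preserved) can be sketched as follows. The row format, labels, and function name are hypothetical.

```python
import random

def split_player_file(rows, test_frac=0.2, seed=0):
    """rows: list of (sentence, label) for one player, label in {"pos", "neu"}.
    Split each class separately so the 80/20 ratio holds per class."""
    rng = random.Random(seed)
    train, test = [], []
    for label in ("pos", "neu"):
        group = [r for r in rows if r[1] == label]
        rng.shuffle(group)                       # randomize before holding out
        n_test = round(len(group) * test_frac)   # 20% of this class
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# Hypothetical labeled rows for one player: 10 positive, 5 neutral
rows = [("t%d" % i, "pos") for i in range(10)] + [("t%d" % i, "neu") for i in range(5)]
train, test = split_player_file(rows)
print(len(train), len(test))  # → 12 3
```

Doing this file by file and then concatenating the per-player splits reproduces the ratios shown in Table 3.5.2.1.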
Table 3.5.2.1: Ratio of Training and Testing Player Name Total data Positive Neutral Positive Train Neutral Train Positive Test Neutral Test ইম8ল কােয়স 174 96 78 76 62 20 16 িলটন দাস 320 124 196 99 156 25 40 মাহমুদু6াহ িরয়াদ 191 122 69 97 55 25 14 মাশরািফ িবন মতু+জা 617 280 337 224 269 56 68 েমেহিদ হাসান িমরাজ 81 34 47 27 37 7 10 মুশিফকুর রিহম 318 165 153 132 122 33 31 মুXািফজুর রহমান 121 60 61 48 48 12 13 নািসর েহাসাইন 6 2 4 1 3 1 1 সাি;র রহমান 7 4 3 3 2 1 1 সািকব আল হাসান 446 212 234 169 187 43 47 েসৗম3 সরকার 136 96 40 76 32 20 8 তািমম ইকবাল 439 291 148 232 118 59 30 তাসিকন আহেমদ 96 53 43 42 34 11 9 3.5.3 Feature Selection Feature extraction from the pre-preparing dataset is an essential part, to know the capacity of the calculations it helps a great deal. We have used 2952 tweets as our training and testing set. We used quite a few feature selection techniques to find best result. First of all, we used count vectorizer from Scikit-learn which makes spares vector of occurrence count of words. Using this we got 1245 sparse matrix of unique © Daffodil International University words. Then we tested our model with Bi-gram and Tri-gram features. Bi-gram features perform well then trigram and no n-gram. Table 3.5.2.2: Vectorize without N-gram মাশরািফ তুিমই বস ভােলা েখেলেছ ইম8ল কােয়স মাশরািফ তুিমই বস 1 1 1 0 0 0 0 ভােলা েখেলেছ 0 0 0 1 1 0 0 ইম8ল কােয়স ভােলা েখেলেছ 0 0 0 1 1 1 1 Table 3.5.2.3: Vectorize with Bi-gram মাশরািফ তুিম তুিমই বস মাশরািফ তুমই বস ভােলা েখেলেছ ভাল েখেলেছ</s>
মাশরািফ তুিমই বস → 1 1 1 1 1 0 0 0
ভােলা েখেলেছ → 0 0 0 0 0 1 1 1

Another feature extraction technique we applied to our dataset, for the sequence model, is word embedding from keras.preprocessing. Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing in which words or phrases from the vocabulary are mapped to vectors of real numbers. Word embeddings do not comprehend the text as a human would; rather, they map the statistical structure of the language used in the corpus. The given picture is an example of word embedding.

Fig 3.5.2.1: Word Embedding

One issue we have is that the text sequences are, in general, of different lengths. To counter this, we use pad_sequences(), which simply pads the sequences of word indices with zeros. Using word embedding and padding with the sequence model, we obtained our maximum accuracy.

Fig 3.5.2.2: Padding

3.5.4 Algorithm

This study is supervised machine learning based: the unstructured dataset has been labeled manually and verified by different people. Different classification models have been applied to our dataset to predict sentiment from tweets. We did this to take a better look at the final output; it also enabled us to make a comparative study among the predictive models. We gauged the results of the various models based on these criteria: accuracy, precision, and recall.

Chapter 4
Experimental Results and Discussion

4.1 Introduction

This research is entirely experiment-based, which is important for achieving the expected goal. Experimenting with various algorithms helps us choose the model that best fits our dataset. In this chapter the experimental results and overall accuracy are discussed in detail.
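The count-vectorizer tables above can be reproduced with a toy pure-Python version of what Scikit-learn's CountVectorizer with ngram_range=(1, 2) computes: build a vocabulary of uni- and bi-grams, then count occurrences per document. This re-implementation is only illustrative; the thesis used the library itself.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def count_vectorize(docs, ngram_range=(1, 2)):
    """Toy CountVectorizer: sorted uni-/bi-gram vocabulary plus one count
    row per document (dense here, sparse in the real library)."""
    lo, hi = ngram_range
    vocab = sorted({g for d in docs for n in range(lo, hi + 1)
                    for g in ngrams(d.split(), n)})
    rows = []
    for d in docs:
        counts = Counter(g for n in range(lo, hi + 1) for g in ngrams(d.split(), n))
        rows.append([counts.get(g, 0) for g in vocab])
    return vocab, rows

vocab, rows = count_vectorize(["মাশরািফ তুিমই বস", "ভােলা েখেলেছ"])
print(vocab)  # unigram + bigram vocabulary, as in Table 3.5.2.3
print(rows)   # one count row per sentence
```

With ngram_range=(1, 1) the same function reproduces the unigram counts of Table 3.5.2.2.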
4.2 Experimental Results

We ran multiple classification models on our dataset to predict cricket players' popularity in this study. We did this to take a better look at the final output; it also enabled us to make a comparative study among the predictive models. We measured the results of the various models based on these standards: accuracy, confusion matrix, precision, recall, F1-score, and support.

4.2.1 Logistic Regression

The Logistic Regression (LR) model provides shrinkage for performing text categorization and selecting features simultaneously [19]. It uses a Laplace prior to avoid over-fitting and produces sparse predictive models for text data [20]. The estimate of P(c|f) has the parametric form:

P(c|f) = (1 / Z(f)) exp( Σ_i λ_{i,c} F_{i,c}(f, c) )

where Z(f) is a normalization function, λ is a vector of weight parameters for the feature set [21], and F_{i,c} is a binary function that takes a feature and a class label as inputs. It is defined as:

F_{i,c'}(f, c) = 1 if n_i(f) > 0 and c' = c; 0 otherwise

This binary function is triggered when certain features exist, and the opinion is guessed in a certain way. For example, if the bigram "ভােলা েখেলেছ" appears, a feature function might
fire, and the sentiment of the document is guessed to be positive [22]. Figure 4.2.1 and Table 4.2.1 show the confusion matrix, precision, recall, F1-score, and support of our dataset after running Logistic Regression.

Fig 4.2.1: Confusion matrix for Logistic Regression

Table 4.2.1: Accuracy for Logistic Regression
          | Neutral (0) | Positive (1) | Average
Precision | 0.71 | 0.71 | 0.71
Recall    | 0.67 | 0.74 | 0.74
f1-score  | 0.69 | 0.73 | 0.73
Support   | 288 | 313 | 601

4.2.2 Multinomial Naïve Bayes

This algorithm is a straightforward probabilistic classifier with a strong conditional independence assumption, yet it is well suited for characterizing classes even with highly dependent features [23]. Based on Bayes' theorem, the positive or neutral class of each tweet has been calculated using probability. In Bayes' theorem, P(C_i|E) is the probability that text document E is of class C_i, defined as follows [23]:

P(C_i|E) = P(C_i) P(E|C_i) / P(E),   C_i ∈ C

Figure 4.2.2 and Table 4.2.2 show the confusion matrix, precision, recall, F1-score, and support of our dataset after running Multinomial Naïve Bayes.

Fig 4.2.2: Confusion matrix of MNB

Table 4.2.2: Accuracy of MNB
          | Neutral (0) | Positive (1) | Average
Precision | 0.73 | 0.69 | 0.71
Recall    | 0.62 | 0.79 | 0.71
f1-score  | 0.67 | 0.74 | 0.71
Support   | 288 | 313 | 601

4.2.3 Convolutional Neural Networks

A convolutional layer takes a patch of input features of the filter's kernel size and computes the dot product with the filter's weights. When we work with sequential data, such as text, we use one-dimensional convolution: the convolutional layer consists of filters which move across the feature vector and select important features.
The one-dimensional ConvNets is invariant to interpretations, which implies that specific sequences can be perceived at a different position. This can be useful for specific patterns in the text. Fig 4.2.3: 1D Convolution Patch from input features TIME/ORDER OF SEQUENCE Dot product with filter weights OUTPUT FEATURES INPUT FEATURES Kernel size © Daffodil International University Pooling: Pooling is used for reduce the output dimensionality and make fixed size output matrix but keep important features. In this way, we used max pooling on the feature maps. Feature vectors are actually the output. Be that as it may, every convolution produces feature maps of various shapes. We perform max-pooling on them which creates feature vector for individual feature map. Then we made big feature vector by concatenating them. Activation Function: A neuron is active or not is decided by the activation function. We used sigmoid activation function in the feature map. To compute gradient, it needs less computation and its performance is better for binary classifications. The mathematical function of the sigmoid function is given below. 𝒇(𝒛) =𝟏 + 𝒆D𝒛 Where, z is a neurons output. Fully Connected Layer: All neurons of the next level have been</s>
connected to each neuron in the fully connected layer. The max-pooled data is then fed to the fully connected layers to calculate the probability distribution as well as the accuracy and loss. The dropout technique has been applied in this layer to avoid overfitting. A SoftMax classifier has been used to calculate probabilities, and binary cross entropy to calculate the loss.

Optimizer: Optimizers minimize the loss by adjusting the weights during back propagation. In this research we used the Adam optimizer to minimize the gradient. The Adam optimizer is computationally efficient, requires little memory, and is invariant to diagonal rescaling of the gradients [24]; it is also effective for problems with noisy and/or sparse gradients.

4.2.3.1 Model Architecture and Results

To get the best performance it is important to test various hyper-parameters. We tested several hyper-parameter settings and compared the accuracies obtained: we changed the batch size while keeping the filter size constant and the default learning rate. The best accuracy was gained with batch size 5; from this observation we can also see that changing the batch size increases or decreases the accuracy. For the loss function we use binary cross entropy. Binary cross entropy is just a special case of categorical cross entropy: the equation for binary cross entropy loss is the exact equation for categorical cross entropy loss with one output node. For example, binary cross entropy with one output node is the equivalent of categorical cross entropy with two output nodes. A loss function is used to optimize a machine learning algorithm, and performance has been measured by the accuracy metric, which describes our model's performance together with the loss. Figures 4.2.4 and 4.2.5 show the model architecture and the confusion, loss, and accuracy matrices respectively; Table 4.2.3 shows the accuracy of our model.
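The claim that binary cross entropy with one output node equals categorical cross entropy with two output nodes can be checked numerically. The labels and predictions below are toy values, not the thesis's data.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean BCE over a batch of scalar predictions in (0, 1)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip for numerical safety
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

def categorical_cross_entropy_2class(y_true, y_pred):
    """Same labels, encoded one-hot over two output nodes [1-p, p]."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        onehot, probs = [1 - t, t], [1 - p, p]
        total += -sum(o * math.log(q) for o, q in zip(onehot, probs) if o)
    return total / len(y_true)

y, p = [1, 0, 1], [0.9, 0.2, 0.6]
print(round(binary_cross_entropy(y, p), 6))            # → 0.279777
print(round(categorical_cross_entropy_2class(y, p), 6))  # → 0.279777 (identical)
```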
Fig 4.2.4: Model Architecture Embedding_1: Embedding OutpuNone, 100,100 Global_max_poling_1D: GlobalMaxPooling1D Output None,128 Conv_1D: Conv1D OutpuNone, 91,128 Dense_1: Dense OutpuNone,10 Dense_2: Dense OutpuNone,1 745,029 © Daffodil International University Fig 4.2.5: Accuracy, Loss, Confusion matrix Table 4.2.3: Accuracy 4.3 Descriptive analysis In this section a comprehensive study among the classifiers result in our dataset will be shown. We have used four performance measure used for calculating the performance of the algorithms. Confusion matrix for every classifier has been calculated. So, we have all necessary data for measuring performance of the algorithms. Following table 4.3.1 gives us the model accuracy result of all algorithms that we have experimented Neutral (0) Positive (1) Average precision 0.80 0.79 0.79 Recall 0.76 0.82 0.79 f1-score 0.78 0.80 0.79 Support 288 313 601 © Daffodil International University Table 4.3.1: Model accuracy no. Algorithms Result 1. Logistic Regression 70.72% 2. Logistic Regression + bigram 71.38% 3. Multinomial Naïve Bayes 71.54% 4. Multinomial Naïve Bayes + bigram 70.89% 6. Convolutional Neural Network + Word Embedding + Padding Sequence 79.12% As we know that deep learning models perform then other model. Here it is clearly reflected that Convolutional Neural Network performed outstanding performance than other machine learning models. Another performance measuring,</s>
we have used is precision. The precision of a classifier measures the percentage of correct assignments among all the documents assigned to a class [25]; 1 is the best possible value for precision and 0 the worst. The equation below shows the calculation of precision:

Precision = TP / (TP + FP)

Here, TP is True Positives and FP is False Positives. Recall, on the other hand, is the measure that decides completeness [26]. More precisely, it is the percentage of the actual positive samples that are labeled as positive [26]. The best and worst values for recall are 1 and 0. The following equation shows the calculation of recall:

Recall = TP / (TP + FN)

Here, FN is False Negatives. Another measure, the F-measure score, is the harmonic mean of precision and recall [27]. The F score is used to measure a test's accuracy, and it balances the use of precision and recall to do so; it can give a more realistic measure of a test's performance by using both, and is regularly used to report document classification performance. This score is calculated according to:

F1 = 2 · (Precision · Recall) / (Precision + Recall)

Figure 4.3.1 below shows the precision, recall, and F-measure scores that we calculated for Logistic Regression, MNB, and CNN.

Fig 4.3.1: Precision, Recall, F-measure Score

From these performance scores it is clear that the deep learning model has the maximum accuracy on our dataset, so the popularity assessment of the cricket players has been done from the output of this algorithm.

4.3.1 Popularity of Cricket Players

After selecting the best model for sentiment classification, it is time to calculate how popular each player is. This can be done by counting the positive sentiment for each player. The output of the classification model is a NumPy array that contains 0s and 1s: 1 denotes positive sentiment and 0 denotes other sentiment.
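The three formulas can be computed directly from confusion-matrix counts. The counts below are toy values, not the thesis's actual confusion matrices.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy confusion counts (illustrative only)
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.8 0.8 0.8
```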
As popularity assessment needs only the positive sentiment that’s why we calculate total number of 1's for each player and divide it by total 1's in the output array. 𝑷𝒐𝒑𝒖𝒍𝒂𝒓𝒊𝒕𝒚𝑺𝒄𝒐𝒓𝒆 = P𝑷𝒑𝒐𝒔 Here, 𝑷𝒑𝒐𝒔 total positive score of each player. After calculated the popularity score of each player we got the following result shown in fig 4.3.2 0.660.680.70.720.740.760.780.80.82Logistic Regression MNB CNNPrecision, Recall and f-measure scorePrecision Recall F-Measures© Daffodil International University Fig 4.3.2: Popularity Score From this figure it is clear that Mashrafee Bin Murtaza (মাশরািফ িবন মতু+জা) has the maximum popularity score. Tamim Iqbal (তািমম ইকবাল) and Shakib Ul Hasan (সািকব আল হাসান) has second and third peak of the chart respectively. In this chart Nasir Hossain (নািসর েহাসাইন) and Sabbir Rahman (সাি;র রহমান) has lowest popularity score, that means they are less popular in this chart. Here, is most important thing being this popularity chart does not the ultimate ranking. It is just for a specific time period. By changing time, it can be changed due to player’s performance. 4.4 Summery Section 4.2 shows the</s>
experimental results of what we have experimented with. There are 3 algorithms that we applied to our dataset, and a comprehensive study shows that the CNN model gives us promising accuracy; thus we selected it as our final model. In Section 4.3 we described the architecture of our model and then its loss and accuracy matrices. Finally, we ranked some Bangladeshi cricketers with the help of the positive output of the model. The ranking was done for a specific time period; it can change with players' performance.

Chapter 5
Summary, Conclusion, Recommendation and Implication for Future Research

5.1 Summary of the Study

In this research we attempted to determine popularity from Bengali content using sentiment analysis. Our main goal was to study tweets, classify the expressions as either positive or negative using various machine learning algorithms, and then determine popularity from that. The popularity of a single person, such as a player of the Bangladesh National Cricket Team, varies over time with their play; we all know that a player's performance does not remain the same at all times, so to detect popularity for a specific time we need real-time data from that explicit period. We used a Raspberry Pi with our own Python script to gather tweets, which we ran for three months, using the Streaming API (push method) that Twitter provides for public use. With this API we captured real-time tweets of users, i.e. fans of a player. Though Bengali is the native language of Bangladesh, most people do not write Bengali on Twitter; they use English instead, so it was difficult for us to gather a huge amount of data. We got about ten thousand tweets, among which only 2952 were appropriate for our work.
We then labeled each sentence as positive or negative, verified by multiple people. Our chosen machine learning algorithm detects popularity by calculating the percentage value for each player. Finally, our proposed research was successful, and we got 80.50% accuracy.

5.2 Conclusions

This undergraduate research, despite the very brief time, has made the problem perfectly clear, along with what has been done. We have concentrated on making the problem scope clear so that it serves as a platform for simple extensions to this system. The assessment of the abilities of the students will help the authority have a solid overview of the students. The research is also expected to ensure proper guidance and training courses for students who are at a poor skill level. The final outcome of the research is produced by implementing different algorithms, calculations, and statistical techniques. Students who paid attention to their initial phases of programming have shone in almost every other area. Learning core programming helps a great deal in proceeding to other technical areas.
Also, technical knowledge together with interpersonal skills leads to a well-rounded career.

5.3 Recommendations

Perfection is always a work in progress, and our proposed project is only at its beginning stages; consequently, a great deal of work can still be done on it. To improve the effectiveness, dependability, and efficiency of the study, further collection of data is required: the more data there is, the more dependable the outcomes are. Besides, a validation set is also needed to reduce the over-fitting of the models. More advanced models can be applied to the data to investigate further.

5.4 Implication for Further Study

Nowadays, the demand for data mining experts is highly valued, because of the presence of an enormous amount of data in our environment. It is the right time to work with these kinds of complex data, so that new patterns can be introduced to resolve different complex problems. Sentiment determination is one of the basic parts of machine learning. The experimental study we carried out on popularity identification, with an attractive result, leaves a solid impression behind our work. We are still working on the system and will keep working on it for a better and more accurate system.

Appendices

Appendix A: Machine Learning pre-processing with TensorFlow. With data pre-processing in deep learning getting attention, we ventured to give the TensorFlow Transform (tf.Transform) library a try.

References

[1] P. Ferguson, N. O'Hare, M. Davy and A. Bermingham, "Exploring the use of paragraph-level annotations for sentiment analysis of financial blogs," in Workshop on Opinion Mining and Sentiment Analysis, Seville, Spain, 2009.
[2] E. W. Pamungkas and D. G. P.
Putri, "Word Sense Disambiguation for Lexicon-Based Sentiment," in 9th International Conference on Machine Learning and Computing, Singapore, Singapore, 2017.
[3] L. Cruz, J. Ochoa, M. Roche and P. Poncelet, "Dictionary-Based Sentiment Analysis Applied to a Specific Domain," in Information Management and Big Data, vol. 656, Cusco, Peru, Springer International Publishing, 2017, pp. 57-68.
[4] D. Ekawati and M. L. Khodra, "Aspect-based Sentiment Analysis for Indonesian," in International Conference on Advanced Informatics, Concepts, Theory, and Applications, Denpasar, Indonesia, 2017.
[5] A. Trilla and F. Alías, "Sentence-Based Sentiment Analysis for Expressive Text-to-Speech," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 2, pp. 223-233, 2012.
[6] P. Sanguansat, "Paragraph2Vec-based sentiment analysis on social media for business in Thailand," in 8th International Conference on Knowledge and Smart Technology, Chiangmai, Thailand, 2016.
[7] H. Nguyen and M.-L. Nguyen, "A Deep Neural Architecture for Sentence-Level Sentiment Classification in Twitter Social Networking," in International Conference of the Pacific Association for Computational Linguistics, Yangon, Myanmar, 2018.
[8] M. Thelwall, K. Buckley and G. Paltoglou, "Sentiment Strength Detection for the Social Web," Journal of the American Society for Information Science and Technology, vol. 63, no. 1, pp. 163-173, 2012.
[9] S. Dutta, M. Roy, A. K. Das and S. Ghosh, "Sentiment Detection in Online Content: A
WordNet Based Approach," in International Conference on Swarm, Evolutionary, and Memetic Computing, Hyderabad, India, 2015.
[10] S. M. M. H. Chowdhury, S. Abujar, M. Saifuzzaman, P. Ghosh and S. A. Hossain, "Sentiment Prediction Based on Lexical Analysis Using Deep Learning," in Emerging Technologies in Data Mining and Information Security, Springer International Publishing, 2019, pp. 441-449.
[11] M. Al-Amin, M. S. Islam and S. D. Uzzal, "Sentiment analysis of Bengali comments with Word2Vec and sentiment information of words," in International Conference on Electrical, Computer and Communication Engineering, Cox's Bazar, Bangladesh, 2017.
[12] K. Sarkar and M. Bhowmick, "Sentiment polarity detection in bengali tweets using multinomial Naïve Bayes and support vector machines," in IEEE Calcutta Conference, Kolkata, India, 2018.
[13] S. Akter and M. T. Aziz, "Sentiment analysis on facebook group using lexicon based approach," in 3rd International Conference on Electrical Engineering and Information Communication Technology, Dhaka, Bangladesh, 2017.
[14] A. K. Paul and P. C. Shill, "Sentiment mining from Bangla data using mutual information," in 2nd International Conference on Electrical, Computer & Telecommunication Engineering, Rajshahi, Bangladesh, 2016.
[15] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in Bangla microblog posts," in International Conference on Informatics, Electronics & Vision, Dhaka, Bangladesh, 2014.
[16] M. Asimuzzaman, P. D. Nath, F. Hossain, A. Hossain and R. M. Rahman, "Sentiment analysis of bangla microblogs using adaptive neuro fuzzy system," in 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, Guilin, China, 2017.
[17] K. Sarkar, "Sentiment Polarity Detection in Bengali Tweets Using Deep Convolutional Neural Networks," Journal of Intelligent Systems, 2018.
[18] D. C. Dandapat, S. C. Chavan, N. P. Chaudhary and V. D.
Ghare, "Analysis of Tweets for Popularity Detection of Television Media in Business Intelligence," International Journal of Innovative Research and Creative Technology, vol. 1, no. 4, pp. 405-407, 2015.
[19] P. Barnaghi, P. Ghaffari and J. G. Breslin, "Opinion Mining and Sentiment Polarity on Twitter and Correlation between Events and Sentiment," in IEEE Second International Conference on Big Data Computing Service and Applications, Dublin, 2016.
[20] A. Genkin, D. D. Lewis and D. Madigan, "Large-Scale Bayesian Logistic Regression for Text Categorization," Technometrics, vol. 49, no. 3, pp. 291-304, 2007.
[21] H. Daumé, "Notes on CG and LM-BFGS optimization of logistic regression," Information Sciences Institute, 2004.
[22] B. Pang, L. Lee and S. Vaithyanathan, "Thumbs up?: sentiment classification using machine learning techniques," ACL-02 Conference on Empirical Methods in Natural Language Processing, vol. 10, pp. 79-86, 2002.
[23] P. Domingos and M. Pazzani, "On the Optimality of the Simple Bayesian Classifier," Machine Learning, vol. 29, no. 2-3, pp. 103-130, 1997.
[24] D. P. Kingma and J. L. Ba, "Adam: A Method for Stochastic Optimization," in 3rd International Conference for Learning Representations, San Diego, 2015.
[25] A. Sun and E.-P. Lim, "Hierarchical text classification and evaluation," in IEEE International Conference on Data Mining, San Jose, CA, USA, 2001.
[26] J. Han, J. Pei and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2011.
[27] G. Forman, "An
Extensive Empirical Study of Feature Selection Metrics for Text Classification," Journal of Machine Learning Research, vol. 3, pp. 1289-1305, 2003.
Human Abnormality Detection Based on Bengali Text

M. F. Mridha, Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka, Bangladesh, firoz@bubt.edu.bd
Md. Saifur Rahman, Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka, Bangladesh, saifurs@gmail.com
Abu Quwsar Ohi, Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka, Bangladesh, quwsarohi@gmail.com

Abstract— In the field of natural language processing and human-computer interaction, human attitudes and sentiments have attracted researchers. However, in the field of human-computer interaction, human abnormality detection has not been investigated extensively, and most works depend on image-based information. In natural language processing, effective meaning can potentially be conveyed by any word; each word may bring out difficult encounters because of its semantic connection with ideas or categories. In this paper, an efficient and effective human abnormality detection model is introduced that uses only Bengali text. The proposed model can recognize whether a person is in a normal or abnormal state by analyzing their typed Bengali text. To the best of our knowledge, this is the first attempt at developing a text-based human abnormality detection system. We have created our own Bengali dataset (containing 2000 sentences) generated from voluntary conversations. We have performed a comparative analysis using Naïve Bayes and Support Vector Machine as classifiers, and two different feature extraction techniques, count vector and TF-IDF, on our constructed dataset. We achieved a maximum 89% accuracy and 92% F1-score on our constructed dataset in our experiment.

Keywords— Abnormality, Natural Language Processing, Machine Learning, SVM, Naïve Bayes, TF-IDF, Bengali Text.

I.
INTRODUCTION

At present, the internet is a most important part of our daily life, where we express our emotions, judgments, appreciations, etc. We text each other, comment on online products, comment on other people's comments, and provide opinions, and most of this happens in text form. So text is a great source for research that can be used to identify human characteristics. There exists a strong relationship between human behavior, sentiment, and abnormality, and the terms human behavior and sentiment relate to each other. Behavior is a human attitude: the physiological activity or feeling performed towards some evaluation [1]. Human sentiment describes the same characteristics and, in the largest sense, is expressed in written words or speech. On the contrary, human abnormality describes deviation from absolute mental health. Human sentiment is the closest expression to human abnormality: human abnormality mostly corresponds to negative sentiment, although a negative sentiment may or may not be defined as a human abnormality. Table I contains examples that illustrate the relation between human abnormality and sentiment. By mental condition, two states, normal and abnormal, are represented, which is our fundamental finding. (This paper is accepted in IEEE Region 10 Symposium (TENSYMP) 2020.) The human abnormality detection problem is closely related to human sentiment analysis in text. Sentiment analysis is the process of extracting emotions, opinions,
or a process of examining the attitude or behavior of humans. After examination, the analysis provides a review based on normal or abnormal behavior. As a result, the abnormality detection problem can be treated as a binary classification problem.

TABLE I. DIFFERENCE BETWEEN SENTIMENT AND MENTAL CONDITION
Expression | Sentiment | Mental Condition
সে আমায় ভাল াবালেনি | Negative | Normal
আজ সে যনি আমার মৃত্য ু হইলত্া | Negative | Abnormal
আনম এটা পারলবা | Positive | Normal

The Bengali language is one of the most spoken languages in the world. People use the Bengali language to express their opinions, emotions, etc. on online blogs and social media sites; because of this, the Bengali language is a great field for research. A human sentiment can express love, sorrow, anger, depression, happiness, judgments, decisions, suicidal intent, etc. Among all these types of sentiment, some special types can be defined as abnormal sentiment. Extracting or targeting abnormality from textual data is challenging yet not impossible. Human abnormality can become a broad field of study, as it targets the mental balance of human characteristics. Abnormality detection based on text has much potential, which can be utilized to detect abnormal states in online messaging systems: an abnormality detection system can easily filter abnormal messages, which may become invaluable in the reduction of online criminal/abnormal activities. In this paper, we extract information from text by which we classify whether a sentence contains an abnormal or normal attitude through Machine Learning (ML). Along with the implementation of text-based abnormality detection, the contribution of this paper also includes the distinction between abnormality and negative sentiment states. The rest of the paper is organized as follows.
Section II outlines the related works in different languages. Section III presents the methodology of the proposed architecture. We present the result analysis in Section IV. Lastly, future work and the conclusion are presented in Section V.

II. RELATED WORK

In recent times, the classification of emotions in text has become popular with Natural Language Processing (NLP) researchers. Though it is hard to find prior research on exactly human abnormality detection from text, a few related works exist; they were done for the English language and mostly link to attitude analysis. To analyze sentiment polarity in text, rule-based linguistic models have been created [2], [3], [4]. The ML approach is popular for analyzing sentiment from text, images, etc. Go, Bhayani and Huang used emoticons in the training corpus [5], with SVM, Naïve Bayes, and MaxEnt as classifiers. Davidov, Tsur, and Rappoport used emoticons and hashtags to recognize sentiment labels, using the KNN algorithm to train a supervised sentiment classifier [6]. Pak proposed a method consisting of Naïve Bayes classifiers with POS-tag and n-gram features [7]. Alena, Helmut and Mitsuru designed a system to analyze textual attitude [15], based on compositionality principles and rules for semantically distinct verb classes. Balahur et al. did their research on different
<s>languages (Spanish, German, and French) with a machine translation system [8]; SVM was used in the training phase. Kaur et al. proposed an algorithm with unigrams and a simple scoring method for sentiment analysis of Punjabi text [9]. This is the first work on attitude analysis of Bengali text, although sentiment analysis has been studied on various topics and various methods have been explored. Hasan et al. [10] gathered and combined the sentiment orientation of each sentence to recognize sentiment in Bengali text; they matched extracted phrase patterns against predefined phrase patterns. A semi-supervised bootstrapping method was used for sentiment extraction with two polarity labels, positive and negative, by Chowdhury et al. [11]. Nabi et al. presented a TF-IDF-based method to extract sentiment [12]; they ignored mixed sentences and a few noisy data points in their system. Shaika et al. identified sentiment with negative and positive polarity [13], using a semi-supervised bootstrapping approach to develop the training corpus and SVM and Maximum Entropy for classification. Tabassum and Khan used a random forest to classify sentence sentiment into positive and negative [14]. All of these research works extract sentiment rather than detect abnormality in text. In our work, we have classified human behavior as normal or abnormal using our own corpus, and we carried out different feature extraction and classification methods to find the best possible architecture.

III. METHODOLOGY

Our method detects human abnormality from Bengali text, identifying two states: whether a person is normal or abnormal. To do this, we first extract features from the text, then use a classifier to classify sentences as normal or abnormal. The complete methodology is divided into the following steps: (a) Data Collection and Preprocessing, (b) Feature Extraction, and (c) Classification. Fig. 1.
shows the mentioned workflow.

Fig. 1. Work Flow of Human Attitude Analysis

A. Data Collection and Preprocessing

In machine learning, data is the power of any model. Since there was no prior work on this topic in the Bengali language, we had to create the very first human abnormality dataset based on Bengali text. The data was gathered from volunteers who held conversations on social media sites such as Twitter, Facebook, WhatsApp, and Messenger, and was collected in Bengali text form. Because the data comes from social media, it needed preprocessing: the sentences contained many irrelevant tags, spaces, emojis, etc., which were removed by an automated script. The dataset contains sentences with appropriate labels (normal/abnormal), classified by multiple specialists. Each entry is a Bengali sentence with one of two target values, 1 (abnormal) or 0 (normal). Tables II and III illustrate some normal and abnormal sentences from the dataset. The dataset contains 2000 sentences, of which 814 are abnormal and the rest are normal.

TABLE II. NORMAL ATTITUDE</s>
<s>Normal Attitude
মা বাবা সবেঁলে থাো অবস্থায় ত্ালিরলে েখিও স্বলে সিনখনি
আনম যনি আমার অনিে টাইলমর িালে নিয়নমত্ ওর সখাজ নিত্াম
সে এেটা েলুযাগ োয়
আনম পারলবা নেভালব

TABLE III. ABNORMAL ATTITUDE
Abnormal Attitude
আজ সে যনি আমার মৃত্য ু হইলত্া
আনম ওলে সমলর সি ব
মািুষ হলত্ োও মি খাও

B. Feature Extraction

Feature extraction means gathering information from text in NLP. A feature uniquely describes the data by its properties. In ML, feature extraction is also recognized as a dimensionality reduction technique. Many methods exist to extract features from text; we use two techniques, Count Vector and TF-IDF Vector, and evaluate their accuracy in Section IV. Word embedding is the numerical representation of text. Frequency-based and prediction-based word embeddings are the popular categories; Count Vector and TF-IDF Vector belong to the frequency-based family.

1) Count Vector: The Count Vector preserves the number of occurrences of the words in a text document. A dictionary is constructed comprising all unique tokens from all documents; each sentence is then encoded against the dictionary, where the encoded data holds the number of appearances of each word in that sentence.

2) TF-IDF: TF-IDF consists of two terms: TF, which stands for Term Frequency, and IDF, which stands for Inverse Document Frequency. TF-IDF expresses the relative importance of a term to a document and to the whole corpus. The following formulas give the TF-IDF score for a term T:

TF(T) = (number of occurrences of T in a document) / (total number of terms in that document)   (1)
IDF(T) = log( (total number of documents) / (number of documents which contain T) )   (2)
TF-IDF(T) = TF(T) × IDF(T)   (3)

Here, the term T could be a character, word, or n-gram. We use a word as the token to generate TF-IDF in our experiment.

C. Classification

The scope of the paper relates to supervised classification methods.
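Treating each sentence as a document, as the authors do later in the paper, equations (1)-(3) can be sketched in plain Python. The toy tokenized corpus and the function name below are illustrative, not from the paper:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score every term of every document per equations (1)-(3):
    TF is the relative frequency of a term within a document,
    IDF is log(N / document frequency), and TF-IDF is their product."""
    n_docs = len(docs)
    # document frequency: number of documents containing each term
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        counts = Counter(doc)
        total = len(doc)
        scores.append({t: (c / total) * math.log(n_docs / df[t])
                       for t, c in counts.items()})
    return scores

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
scores = tf_idf(docs)
assert scores[0]["the"] == 0.0            # appears in every document
assert scores[0]["sat"] > scores[0]["cat"]  # rarer terms score higher
```

Note how a term occurring in every document gets an IDF of log(1) = 0, which is exactly why TF-IDF downweights uninformative words that a raw Count Vector would keep.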
A supervised classifier is a model trained on a training dataset with features and correct target classes. The features of the dataset are extracted through the techniques elaborated in Section B, Count Vector and TF-IDF. This section describes the methods used to build the classifiers that perform abnormality detection.

1) Naïve Bayes: The Naïve Bayes classifier is based on Bayes' theorem with the naïve assumption that the presence of a feature in a class is independent of the presence of any other feature. Assume T = {t1, t2, …, tn} is the feature vector; for a given class c,

P(c | t1, t2, …, tn) = P(c) P(t1, t2, …, tn | c) / P(t1, t2, …, tn)   (4)

Using the naïve assumption, we can write (4) as

P(c | t1, t2, …, tn) = P(c) ∏_{i=1}^{n} P(ti | c) / P(t1, t2, …, tn)   (5)

P(c | t1, t2, …, tn) ∝ P(c) ∏_{i=1}^{n} P(ti | c)   (6)   [since P(t1, t2, …, tn) is constant]

ĉ = argmax_c ( P(c) ∏_{i=1}^{n} P(ti | c) )   (7)

Equation (7) is used to find the expected class with maximum probability. Several varieties of Naïve Bayes classifiers exist, among which the most used are Gaussian, Multinomial, and Bernoulli Naïve Bayes. We select Multinomial Naïve Bayes for our classification because</s>
<s>it is most suitable for processing text-based data: we use Count Vectors and TF-IDF as features, which are discrete values, and Multinomial Naïve Bayes works on multinomially distributed data. Smoothing is used in Multinomial Naïve Bayes via the following equation:

θ̂_ci = (N_ci + α) / (N_c + αn)   (8)

In (8), N_ci = Σ_{t∈TS} t_i is the number of times feature i appears in samples of class c in the training set TS, and N_c = Σ_i N_ci is the total count of all features for class c. α is the smoothing prior; α = 1 gives Laplace smoothing, and α < 1 is called Lidstone smoothing.

2) Support Vector Machine: SVM is a well-known machine learning algorithm for classifying data; it is easier to use than a neural network and achieves good accuracy as well. The principle of SVM is to separate data into different classes by finding the optimal hyperplane, the one with maximum margin and minimum error. The following steps describe the mathematics behind SVM. Assume a training set TS:

TS = {(x1, y1), (x2, y2), …, (xn, yn)}   (9)
TS = {(xi, yi); 1 ≤ i ≤ n}   (10)

In (10), xi ∈ R^d is the input vector for the i-th training sample, yi ∈ {1, -1} is the class label for the i-th training sample, and n is the number of training samples. The SVM uses the following decision function:

y = sign( Σ_{i=1}^{n} αi yi K(x, xi) + β )   (11)

In (11), K is the kernel function (explained below), and α = {α1, α2, …, αn} and β are the parameters. To train the SVM, we find the α that minimizes the objective function:

min_α  (1/2) Σ_i Σ_j αi αj yi yj K(xi, xj) − Σ_i αi   (12)

Equation (12) is subject to the constraints Σ_{i=1}^{n} αi yi = 0 and 0 ≤ αi ≤ C. So training an SVM requires solving a quadratic programming optimization problem with n parameters. A kernel function represents the relation between two data points in the feature space; it helps to minimize the complexity of finding the mapping function.
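As an illustration (not the authors' code), the smoothed estimate in equation (8) combined with the decision rule in equation (7) can be sketched as follows; the tiny labeled corpus and the helper names are hypothetical:

```python
import math
from collections import Counter

def train_mnb(X, y, alpha=1.0):
    """Multinomial Naive Bayes with Laplace smoothing (alpha = 1):
    theta_ci = (N_ci + alpha) / (N_c + alpha * n), per equation (8)."""
    vocab = sorted({t for doc in X for t in doc})
    classes = sorted(set(y))
    prior = {c: y.count(c) / len(y) for c in classes}
    counts = {c: Counter() for c in classes}          # N_ci per class
    for doc, c in zip(X, y):
        counts[c].update(doc)
    n = len(vocab)
    theta = {c: {t: (counts[c][t] + alpha) /
                    (sum(counts[c].values()) + alpha * n)
                 for t in vocab} for c in classes}
    return prior, theta, vocab

def predict(doc, prior, theta, vocab):
    # argmax over classes of log P(c) + sum of log P(t_i | c), eq. (7)
    def log_post(c):
        return math.log(prior[c]) + sum(
            math.log(theta[c][t]) for t in doc if t in vocab)
    return max(prior, key=log_post)

X = [["sad", "alone"], ["happy", "fine"], ["sad", "cry"], ["fine", "good"]]
y = [1, 0, 1, 0]   # 1 = abnormal, 0 = normal
prior, theta, vocab = train_mnb(X, y)
assert predict(["sad"], prior, theta, vocab) == 1
assert predict(["happy", "good"], prior, theta, vocab) == 0
```

Working in log space mirrors equation (7): the product of many small probabilities would underflow, while the argmax is unchanged by taking logarithms.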
The kernel function has the following common categories:

Linear: K(x, xi) = xᵀxi
Polynomial: K(x, xi) = (γ xᵀxi + r)², γ > 0
Radial Basis Function (RBF): K(x, xi) = exp(−γ ||x − xi||²), γ > 0
Sigmoid: K(x, xi) = tanh(γ xᵀxi + r)

We use grid search to find which kernel suits our dataset and achieves the higher accuracy and F1-score.

IV. RESULT ANALYSIS

The paper implements two feature extraction techniques, Count Vector and TF-IDF, which give us features in numerical format that we can feed to our classifiers, Multinomial Naïve Bayes and a binary SVM, separately. The evaluations are carried out using Python; NumPy [16, 17] and Scikit-Learn [18] are used to perform the computations and implement the machine learning models, respectively. To conduct the evaluation, the dataset was randomly split into training and testing sets at a 70%-30% ratio. We treated each sentence as a document. We evaluate our classifiers with both the Count Vector and TF-IDF feature extraction techniques, using the default value for the max-features parameter of each, and 2000 sentences to extract features. On the Naïve Bayes method, we use α = 1, i.e., Laplace smoothing. We set fit-prior = True on</s>
<s>our Multinomial Naïve Bayes method to learn the class prior probabilities, which yielded more accuracy. We used grid search to find the optimal parameters for our support vector machine classifier on our dataset: the RBF kernel with C = 100 and γ = 0.01 gives the maximum accuracy with Count Vector features, while with TF-IDF features the SVM achieves its maximum accuracy at C = 1 and γ = 1. Table IV describes the analysis of the different classifiers with the different feature extraction techniques.

TABLE IV. COMPARATIVE ANALYSIS WITH DIFFERENT CLASSIFIERS ON A DATASET
Classifier | Feature Extraction Technique | Accuracy | Precision | Recall | F1 Score
Naïve Bayes | Count Vector | 0.885 | 0.91 | 0.89 | 0.90
Naïve Bayes | TF-IDF | 0.875 | 0.89 | 0.88 | 0.88
SVM | Count Vector | 0.895 | 0.91 | 0.90 | 0.90
SVM | TF-IDF | 0.89 | 0.95 | 0.89 | 0.92

We gained a maximum accuracy of 89% and an F1-score of 92% with the SVM classifier and the TF-IDF feature extraction method. From Table IV, we can state that the differences in accuracy between the techniques are small, but if we pay attention to the F1-score, we can conclude that SVM works better than the other classifier. However, accuracy can be increased by increasing the number of features, which in turn can be done by collecting more data. We did not use any stop-word elimination process, which could be a limit to achieving a higher accuracy and F1-score on our dataset.

V. CONCLUSION

Based on the research presented in this paper, we may conclude that human abnormality can be classified from Bengali text: we can classify whether a person's expression is normal or abnormal from their written text, or from their speech converted to text. We have achieved a good accuracy of around 89% in this study; however, this can be further enhanced. A rich dataset may help to gain much better accuracy by providing more information.
As stated earlier, as a simplification and a first attempt, we classified attitude into only two classes in this research; we strongly believe that we will be able to extract more finely categorized sentiments in future research. We also believe that our contribution will open a wider perspective in the expanse of human abnormality detection research.

REFERENCES
[1] Neviarouskaya, Alena, Aono, Masaki, Prendinger, Helmut, and Ishizuka, Mitsuru (2014). Intelligent Interface for Textual Attitude Analysis. ACM Transactions on Intelligent Systems and Technology, 5, 1-20. 10.1145/2535912
[2] Karo Moilanen and Stephen Pulman, "Sentiment composition," In Proceedings of the RANLP 2007, 378–382.
[3] Matthijs Mulder, Anton Nijholt, Marten den Uyl, and Peter Terpstra, "A lexical grammatical implementation of affect," In Proceedings of the TSD 2004, Springer, Berlin, 171–178.
[4] Tetsuya Nasukawa and Jeonghee Yi, "Sentiment analysis: Capturing favorability using natural language processing," In Proceedings of the K-CAP 2003, 70–77.
[5] A. Go, R. Bhayani and L. Huang, "Twitter sentiment classification using distant supervision," Technical report, Stanford Digital Library Technologies Project, 2009.
[6] D. Davidiv, O. Tsur and A. Rappoport, "Enhanced Sentiment Learning Using Twitter Hash-tags and Smileys," In Proceedings of the 23rd International Conference on Computational Linguistics: Posters,</s>
<s>COLING ’10, pp. 241–9. Stroudsburg, PA: Association for Computational Linguistics. 2010. [7] Pak, A., & Paroubek, P. (2010, May). Twitter as a corpus for sentiment analysis and opinion mining. In LREc (Vol. 10, No. 2010, pp. 1320-1326). [8] Balahur, A., & Turchi, M. (2012, July). Multilingual sentiment analysis using machine translation. In Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis (pp. 52- 60). Association for Computational Linguistics. [9] Kaur, A., & Gupta, V. (2014). Proposed algorithm of sentiment analysis for punjabi text. Journal of Emerging Technologies in Web Intelligence, 6(2), 180-183. [10] Hasan, K. A., & Rahman, M. (2014, December). Sentiment detection from Bangla text using contextual valency analysis. In Computer and Information Technology (ICCIT), 2014 17th International Conference on (pp. 292-295). IEEE. [11] Chowdhury, S., & Chowdhury, W. (2014, May). Performing sentiment analysis in Bangla microblog posts. In 2014 International Conference on Informatics, Electronics & Vision (ICIEV) (pp. 1-6). IEEE. [12] Nabi, M. M., Altaf, M. T., & Ismail, S. (2016). Detecting sentiment from Bangla text using machine learning technique and feature analysis. International Journal of Computer Applications, 153(11). [13] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in Bangla microblog posts," 2014 International Conference on Informatics, Electronics & Vision (ICIEV), Dhaka, 2014, pp. 1-6. [14] N. Tabassum and M. I. Khan, "Design an Empirical Framework for Sentiment Analysis from Bangla Text using Machine Learning," 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox'sBazar, Bangladesh, 2019, pp. 1-5. [15] A. Neviarouskaya, H. Prendinger and M. 
Ishizuka, "@AM: Textual Attitude Analysis Model," Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 80–88, Los Angeles, California, June 2010. c 2010 Association for Computational Linguistics. [16] Travis E. Oliphant. A guide to NumPy, USA: Trelgol Publishing, (2006). [17] Stéfan van der Walt, S. Chris Colbert and Gaël Varoquaux. The NumPy Array: A Structure for Efficient Numerical Computation, Computing in Science & Engineering, 13, 22-30 (2011). [18] Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011. http://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html</s>
<s>Preprint, November 2019. DOI: 10.13140/RG.2.2.22214.01608

International Conference on Innovation in Engineering and Technology (ICIET), 23-24 December, 2019

Toxicity Detection on Bengali Social Media Comments using Supervised Models

Nayan Banik, Department of Computer Science &
Engineering, Green University of Bangladesh, Dhaka - 1207, Bangladesh. Email: cse.nayan@gmail.com
Md. Hasan Hafizur Rahman, Department of Computer Science & Engineering, Comilla University, Comilla - 3506, Bangladesh. Email: hhr@cou.ac.bd

Abstract—Social media plays an indispensable role in our daily life, providing a public platform to share opinions, including threats, spam, and vulgar words often referred to as toxic comments. This type of expression depicts the anti-social behavior of the commentators, which may hamper the online atmosphere. Filtering such toxic comments by handcrafting rules is cumbersome because they are unstructured and often include misspelled obscene words. Automated machine learning-based models to classify such toxic comments constitute a part of Sentiment Analysis, and they are extensively used for the English language, showing more promising results than statistical models. Though Bengali is a widely spoken language around the globe, little research has been done to detect toxic comments in this language. Hence, in this scholarly manuscript, we provide a comparative analysis of five supervised learning models (Naive Bayes, Support Vector Machines, Logistic Regression, Convolutional Neural Network, and Long Short Term Memory) to detect toxic Bengali comments from an annotated, publicly available dataset. As our research finding, we demonstrate that both deep learning-based models outperformed the other classifiers by a 10% margin, with the Convolutional Neural Network achieving the highest accuracy of 95.30%.

Keywords—Text Classification, Machine Learning, Natural Language Processing

I. INTRODUCTION

Social media provides a place for common people to share their opinions, feelings, and reactions on diverse topics. This public platform has become a day-to-day habit for everyone from minors to the elderly, who spend a myriad amount of time socializing with their fellow peers.
But often this online atmosphere creates disputable topics ranging from political propaganda and religious insanity to random hoaxes. Divided parties on such phenomena exchange hate comments, including threats and vulgar words, to attack each other personally. Such obscene words, referred to as toxic comments, are harmful to a safe user experience on the platform and hence need to be filtered out [1]. Considering social media as an information hub, excluding such toxic comments from analysis is an open challenge for human as well as automated comment filters. According to [2], Bengali stands sixth among the most spoken languages in the world with 228 million native speakers, and this count is increasing rapidly due to its demographic and political significance. Moreover, Bengali, a South Asian language, is the national language of Bangladesh and is also used in many regions of India. The increasing number of Bengali social media users post numerous statuses, comments, graphics, etc., and they are instantly available for others to react to. Generally, this often results in text with toxic comments that needs to be</s>
<s>filtered out. Manual toxic comment filtration with offensive word lists and complex rules is not an easy task for an inflectional language like Bengali. The unstructured nature of social media text also makes it an arduous task with a poor acceptability score. Handwritten rules using manual linguistic features were hard previously, but cheap computing resources have changed this scenario with automated systems [3]. Extracting notable features from textual data using computational mechanisms is a part of Natural Language Processing (NLP), which requires an annotated corpus to convey information relating to many applications, including toxic comment detection. Researchers have tried statistical machine learning models to detect sentence toxicity, but such models require frequency-based feature engineering or probabilistic phenomena and hence do not scale well with unstructured social media comments [4]. To tackle these limitations, deep learning-based models have proven their effectiveness by capturing low-level features and combining them into layer-wise abstractions [5]. It has been shown in different works that such models outperform other supervised machine learning models by a great margin when applied to English text. In this scholarly work, we have investigated that claim for the Bengali toxic comment detection task, comparing five classifiers: Naive Bayes (NB), Support Vector Machines (SVM), Logistic Regression (LR), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), specifically Long Short Term Memory (LSTM). Our experimental work demonstrates that deep learning-based models have better accuracy on the noisy nature of Bengali toxic comments. The further organization of this paper starts with a brief overview of related works with their notable achievements and limitations in Section II. Section III describes our applied models and architectures.
In Section IV, we provide the implementation details of our proposed models along with the comparative analysis and associated metrics. The paper concludes with the conclusion and future references in Section V.

II. RELATED WORKS

Toxic comment detection has become a research topic due to its variational nature, both from the linguistic perspective and from the commentator's point of view. Researchers in [6] described the inherent challenges in detecting toxic comments and proposed that an ensemble method combining several classifiers works well when the comments have a variational vocabulary. The inherent unaddressed complexities of toxic comment detection, and possible solutions to overcome them systematically, are proposed in [7]. Considering the effect of toxic tweets, the researchers in [8] experimented with CNN and showed that toxicity can be revealed over time and the inherent knowledge can be extracted. Relatedly, researchers in [9] experimented with YouTube comments to detect toxicity in specific channels' contents; they applied Latent Dirichlet Allocation to find the topics on which the toxic comments were posted. To detect abusive text in Bangla Facebook comments from different pages, the authors in [10] proposed several classifiers and claimed that SVM with a linear kernel performs better when a Term Frequency-Inverse Document Frequency (TF-IDF) vectorizer is used. Relatedly, the authors in [11] proposed a root-level algorithm to detect abusive comments from specific Facebook pages with a manually collected dataset; the work lacks comparisons with traditional classifiers, and the small dataset is also a limiting factor of this research. Researchers in [12] applied six classifiers to detect abusive comments from a manually collected dataset, gathering data from YouTube, Facebook, and Prothom-Alo pages and pre-processing them. Their experimental</s>
<s>results showed that the deep learning-based model outperforms the other models by a great margin, but the use of a small dataset of only 4700 comments is a limiting factor of this work. A comprehensive study on Bengali Sentiment Analysis (SA) is provided in [13], where the authors demonstrate text-based research works in this field with their approaches, datasets, performances, and drawbacks.

III. METHODOLOGY

The proposed models to find toxic comments are described in this section. Initially, the acquired annotated dataset statistics are given. To clean the noisy unstructured data, preprocessing is applied. Then we prepare our data to be fed into the model, whose structure is provided in the last part of this section.

A. Data Acquisition

For almost every supervised classification task, like toxicity detection, a labeled dataset is needed to train, as well as to test, the performance of the classifier. Manual preparation of a dataset requires resources including human effort, knowledge, and a lot of time. To avoid that exhaustive search for a manual dataset, we have experimented with a human-annotated public-domain dataset available at GitHub [14]. The dataset contains five tags for each Bengali social media comment: toxic, threat, obscene, insult, racism. But the number of tagged comments for all columns except toxic is considerably small, and hence we have only experimented with the toxic column as our label for classification. The value 0 indicates non-toxic comments and 1 signifies toxic. The detailed statistics for the dataset are given in Table I.

TABLE I. DATASET STATISTICS
Total Comments: 10219
Toxic Comments: 4255
Non-Toxic Comments: 5964
Longest Comment Length (in words): 528
Smallest Comment Length (in words): 1
Average Comment Length (in words): 12
Unique Words: 23600

B. Preprocessing

The nature of social media comments is that they are usually not structured, nor do they follow any specific standards.
For abusive and slang comments, these scenarios are more complicated, as the commentators express their waves of anger and depression through intentional misspellings and repetitive use of adjacent characters. Though there are some limitations on comment size, they are platform dependent. Some other major problems in social media comments include spelling errors, useless punctuation, emojis, and random duplication. In any manual text-processing approach, several preprocessing tasks are performed to clean the noisy data. But considering the nature of toxic comments, we only perform punctuation and emoticon removal before tokenization, as they do not convey any meaning for toxicity. Sometimes the long version of words or intentional repetition may convey the attitude of the commentator, and hence we do not perform any stemming or lemmatization on our dataset. Moreover, from the dataset statistics, the unique word count is not so large, so our feature space is also within a scope where the classifiers' performances can be checked in reasonable time.

C. Representation of Word Embeddings

Any text-based model depending on a deep neural network requires each word in a corpus to be represented as a vector of fixed length, known as a word embedding. Word2Vec is such an algorithm, which can be trained on a corpus to extract the embeddings by learning the similarities of word meanings [15]. In our work, we first build a vocabulary of size D from our dataset. Then each sentence in the dataset is transformed into a D-length one-hot encoded vector. These vectors are then fed into a neural network of</s>
<s>a single hidden layer containing m nodes. The hidden layer has a linear activation function, and the output layer has a softmax activation function with D nodes, where each node represents the likelihood of putting that word in the sentence. Upon training this neural network, we obtain a D × m embedding matrix, where D is the vocabulary size and m is the number of hidden-layer nodes.

D. Model Architecture

Our approach to toxic comment detection utilizes two popular deep learning-based architectures: Long Short Term Memory (LSTM) and Convolutional Neural Network (CNN). The architectures of the models are shown in Figures 1 and 2. For the performance analysis, we have also implemented baseline models with Naive Bayes (NB), Support Vector Machines (SVM), and Logistic Regression (LR).

Fig. 1. LSTM Architecture
Fig. 2. CNN Architecture

1) LSTM: The preprocessed sentences in our dataset are first passed to the tokenizer to calculate one-hot encoded vectors of length 50, because social media comments are short in nature. To limit the computational time for training the network and to simplify the model, we consider the 1000 most frequent words in the vocabulary obtained after tokenization. We also set a boundary at length 30: comments longer than 30 words are truncated, and shorter ones are padded with zeroes. These modified vectors are then fed to the embedding layer and the weights are adjusted. We set the embedding output dimension to 100, so that each word is represented by 100 dimensions. The words are then fed to the LSTM layer before adding the dense layer with a softmax activation function.

2) CNN: In our CNN architecture, we have used a 1D convolutional layer right after the embedding layer described in the previous section for the LSTM. The convolutional layer is set to use 100 filters, as the features are limited. In the next layer, we have used a global max-pooling layer, which extracts the max value from each feature map.
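Assuming standard layer semantics, the convolution-plus-global-max-pooling step just described can be sketched in NumPy. This is an illustration rather than the authors' Keras implementation; the shapes mirror the 30-word, 100-dimension setup described above, and all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, bias):
    """Valid 1D convolution over a (seq_len, emb_dim) input followed by ReLU.
    kernels: (n_filters, width, emb_dim); returns (out_len, n_filters)."""
    n_filters, width, _ = kernels.shape
    out_len = x.shape[0] - width + 1
    out = np.empty((out_len, n_filters))
    for i in range(out_len):
        window = x[i:i + width]                      # (width, emb_dim)
        out[i] = np.tensordot(kernels, window,
                              axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0)                        # ReLU activation

def global_max_pool(feature_maps):
    # keep the strongest activation of each filter across the sequence
    return feature_maps.max(axis=0)

# toy shapes standing in for a 30-word comment with 100-dim embeddings
x = rng.normal(size=(30, 100))                       # one embedded comment
kernels = rng.normal(size=(100, 3, 100)) * 0.1       # 100 filters, width 3
bias = np.zeros(100)
pooled = global_max_pool(conv1d(x, kernels, bias))
assert pooled.shape == (100,)                        # one value per filter
```

The pooled vector has one entry per filter regardless of comment length, which is what lets a fixed-size dense layer follow variable-length convolution outputs.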
Here the dimension of the output vector is the same as the number of applied filters. Before the last dense layer with softmax activation, we pass our data through a dense layer with a ReLU activation function.

3) Baseline Models: To compare against the performances of our deep learning-based models, we have applied three classifiers, NB, SVM, and LR, as our baseline models. The baselines are used with their standard default parameters for simplicity of evaluation. Moreover, we do not apply any cross-validation to improve the baseline accuracies. Each model is trained on the training data and tested on the testing data; no separate validation data is used for parameter tuning. The specifics of the baselines are the defaults of the libraries used for the experiment.

IV. EXPERIMENTAL EVALUATION

The performance of our proposed models for toxicity detection is discussed in this section. We also compare their performance with the three baselines.

A. Experimental Libraries

In order to train the networks and test their performance, we have used several freely available Python-based machine learning resources. We have used TensorFlow1, an open-source library for numerical computation and large-scale machine learning, including deep learning models and algorithms. Theano2 is another Python library that allows users to compute and optimize complex mathematical computations involving high-dimensional arrays. Scikit-Learn3 is also a Python library which has a clean, uniform, and streamlined API and provides solid implementations</s>
|
<s>of a range of machine learning algorithms.
B. Results Analysis
From several evaluation metrics, we have considered accuracy for our binary toxicity detection task. The performance of the baselines and our proposed deep learning based models is shown in Table II, and the corresponding loss-accuracy curves are visualized in Fig. 3 and Fig. 4.
TABLE II PERFORMANCE STATISTICS
Classifier / Accuracy
Naive Bayes 81.80
Support Vector Machines 84.73
Logistic Regression 85.22
LSTM 94.13
CNN 95.30
From Table II, we can see that both of our deep learning based models have outperformed all three baselines by roughly a 10% margin, with the best accuracy, 95.30%, achieved by the CNN classifier. The baselines use word frequency as the mechanism to select features for classification. This type of feature engineering results in poor performance, as all the baselines use TF-IDF as their feature extractor. On the other hand, the neural-network-based models use word embeddings as their feature extractor. The word embeddings are trained weights which play a crucial role in learning low-level abstractions, and they gradually improve through gradient-based backpropagation.
¹ https://www.tensorflow.org/
² http://deeplearning.net/software/theano/
³ http://scikit-learn.org/stable/
Fig. 3. Accuracy and Loss trends of LSTM Model
Fig. 4. Accuracy and Loss trends of CNN Model
From the accuracy and loss curves of the LSTM in Fig. 3, we can see that although the training accuracy reaches its maximum level early in the training phase, the validation accuracy stabilizes a little later as the network converges. The loss curve, on the other hand, depicts overfitting on the data: it gradually increases for the validation data on every epoch. The reason for this behavior is quite straightforward, as the toxic comments contain many out-of-vocabulary words unseen during the training phase.
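The global pooling step described earlier for the CNN reduces each filter's feature map to a single value; the max and average variants can be sketched as follows (illustrative pure Python with hypothetical function names, not the Keras layers the models actually use):

```python
def global_max_pool(feature_maps):
    """One value per filter: the strongest activation anywhere in the sequence."""
    return [max(fm) for fm in feature_maps]

def global_avg_pool(feature_maps):
    """One value per filter: the mean activation, a smoother summary."""
    return [sum(fm) / len(fm) for fm in feature_maps]
```

Max pooling keeps only the single strongest n-gram response per filter, while average pooling summarises the whole sequence, which can smooth out spiky activations.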
As a result, we can claim that the accuracy of the LSTM network for this specific dataset is losing confidence in every epoch and the model is degrading. Contrary to the LSTM network, the accuracy and loss curves of the CNN in Fig. 4 show steady growth in accuracy as well as better generalization in the loss trends. From our experiment, we have seen that using a Global Average Pooling layer instead of a Max Pooling layer results in better convergence during the training phase. Though the loss curve for the validation data is slowly increasing, we can claim that our CNN model has better confidence than the LSTM model, and hence the performance justifies a marginal improvement over the baselines.
V. CONCLUSION
Comment filtration is an important task for any promising social media platform to provide a safe atmosphere to its users. Filtering toxic comments in Bengali is currently an ongoing research topic, as the number of Bengali-speaking users on social media is increasing. Previously, traditional machine learning approaches were used to detect toxicity, but since their performance does not scale well with large datasets, deep learning based models emerged. In this scholarly work, we have demonstrated a Bengali toxic comment detection system using two deep learning models, Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM). Both of our models outperformed three baselines (Naive Bayes, Support Vector Machines and Logistic Regression) by a 10% margin, while CNN achieved the highest accuracy of 95.30%.
ACKNOWLEDGEMENT
This work has been financially supported by Green University of Bangladesh Research Fund.
REFERENCES
[1] T. Cooper, C. Stavros, and A. R. Dobele, “Domains of influence: exploring negative sentiment in social media,” Journal of Product & Brand</s>
|
<s>Management, 2019.
[2] “What are the 10 most spoken languages in the world? — Babbel,” https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world/ (Accessed on 09/30/2019).
[3] A. Goyal, V. Gupta, and M. Kumar, “Recent named entity recognition and classification techniques: A systematic review,” Computer Science Review, vol. 29, pp. 21–43, 2018.
[4] N. Banik and M. H. H. Rahman, “Evaluation of naïve bayes and support vector machines on bangla textual movie reviews,” in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP). IEEE, 2018, pp. 1–6.
[5] ——, “Gru based named entity recognition system for bangla online newspapers,” in 2018 International Conference on Innovation in Engineering and Technology (ICIET). IEEE, 2018, pp. 1–6.
[6] B. van Aken, J. Risch, R. Krestel, and A. Löser, “Challenges for toxic comment classification: An in-depth error analysis,” in Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), 2018, pp. 33–42.
[7] B. Vidgen, A. Harris, D. Nguyen, R. Tromble, S. Hale, and H. Margetts, “Challenges and frontiers in abusive content detection.” Association for Computational Linguistics, 2019.
[8] S. V. Georgakopoulos, S. K. Tasoulis, A. G. Vrahatis, and V. P. Plagianakos, “Convolutional neural networks for twitter text toxicity analysis,” in INNS Big Data and Deep Learning Conference. Springer, 2019, pp. 370–379.
[9] A. Obadimu, E. Mead, M. N. Hussain, and N. Agarwal, “Identifying toxicity within youtube video comment,” in International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation. Springer, 2019, pp. 214–223.
[10] S. C. Eshan and M. S. Hasan, “An application of machine learning to detect abusive bengali text,” in 2017 20th International Conference of Computer and Information Technology (ICCIT). IEEE, 2017, pp. 1–6.
[11] M. G. Hussain, T. Al Mahmud, and W. Akthar, “An approach to detect abusive bangla text,” in 2018 International Conference on Innovation in Engineering and Technology (ICIET). IEEE, 2018, pp. 1–5.
[12] E. A. Emon, S. Rahman, J. Banarjee, A. K. Das, and T. Mittra, “A deep learning approach to detect abusive bengali text,” in 2019 7th International Conference on Smart Computing & Communications (ICSCC), June 2019, pp. 1–5.
[13] N. Banik, M. H. H. Rahman, S. Chakraborty, H. Seddiqui, and M. A. Azim, “Survey on text-based sentiment analysis of bengali language.”
[14] “Bangla-abusive-comment-dataset,” https://github.com/aimansnigdha/Bangla-Abusive-Comment-Dataset (Accessed on 09/30/2019).
[15] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.</s>
|
<s>A Deep Learning Approach to Detect Abusive Bengali Text
Conference Paper, June 2019. DOI: 10.1109/ICSCC.2019.8843606
2019 7th International Conference on Smart Computing & Communications (ICSCC). 978-1-7281-1557-3/19/$31.00 ©2019 IEEE
Estiak Ahmed Emon, Shihab Rahman, Joti Banarjee, Amit Kumar Das, Tanni Mittra
Department of Computer Science and Engineering, East West University, Dhaka-1212, Bangladesh. Email: estiakemon309@gmail.com, r.shihab95@gmail.com, banarjeejoti@gmail.com, amit.csedu@gmail.com, tanni@ewubd.edu
Abstract— Day by day, social media sites, online news portals and blog commenting sections are getting saturated with abusive content in Bangladesh. Detecting different types of abusive content online will not only improve these websites' discussion sections but will also ensure users' safety. In this paper, several machine learning and deep learning based algorithms, e.g. Linear Support Vector Classifier (LinearSVC), Logistic Regression (Logit), Multinomial Naïve Bayes (MNB), Random Forest (RF), Artificial Neural Network (ANN), and Recurrent Neural Network (RNN) with a Long Short Term Memory (LSTM) cell, have been tested to detect multi-type abusive Bengali text. Besides, new stemming rules for the Bengali language have been introduced, which help the algorithms achieve better performance. The deep learning based algorithm RNN outperforms the other algorithms, gaining the highest accuracy of 82.20%.
Keywords—Linear Support Vector Classifier; Multinomial Naïve Bayes; Long Short Term Memory; Deep Learning; Stemming etc.
I. INTRODUCTION
In this modern era, social networking sites have brought a revolutionary change to human life. Over the past decade, social networks have grown in size and popularity [1].
Now people can easily communicate with each other and share their information, feelings and emotions on social sites in their own language and culture [19]. Currently, Bangladesh has over 30 million active social network users. A survey reveals that the number of active social media users has increased by 15 percent since January 2017 [3]. In addition, Dhaka is the city with the second highest number of active Facebook users [4]. Moreover, many of them use Twitter, Instagram, YouTube, etc. [13]. As social networking sites grow, cyberbullying is also getting more frequent [5]. People often face harassment from unknown users and strangers on social networks [10]. Cybercrime and cyberbullying are rising rapidly in Bangladesh too [20]. A report by UNICEF indicates that 32 percent of kids are at risk of cyberbullying in Bangladesh [7] [23]. Another study, conducted by the Cyber-Crime Awareness Foundation, an NGO, reveals that 73.71 percent of cybercrime victims are women [8]. The situation gets worse when people commit suicide because of hateful messages and harassment through social</s>
|
<s>networks [9] [22]. Sometimes, abusive posts lead people towards real-life actions and violations [17] [21]. Many researchers have focused on text classification in English and other languages to detect abusive messages, comments or images [12][6]. All over the world, a significant number of users use the Bengali language to communicate with each other [2]. Therefore, there is good scope to detect multi-type abusive Bengali text using a predictive approach. Machine Learning (ML) and Deep Learning (DL) algorithms can play a momentous role in detecting and eliminating abusive Bengali content on social media. In this paper, we introduce several types of Machine Learning (ML) and Deep Learning (DL) based algorithms, which are used to identify different types of abusive content in the Bengali language. We focus on evaluating the performance of the algorithms by introducing stemming rules for the Bengali language that follow Bengali grammar rules. Thus, applying these stemming rules to a small dataset, higher accuracy is achieved. The rest of the paper proceeds as follows: Section II reviews the relevant papers and research; Section III explains the methodology; Section IV discusses the experimental results; lastly, the paper concludes with Section V.
II. RELATED WORK
To detect abusive content, researchers have tried many approaches. In paper [11] the authors introduced a new concept, the Bag of Communities (BoC) approach, which can identify abusive content from a major online community. In paper [11] Naïve Bayes (NB), LinearSVC and Logistic Regression algorithms are compared for text classification, and Naïve Bayes performs best in all conditions. Paper [12] used a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) with long short-term memory to classify abusive comments. Another text classification problem is sentiment analysis. Paper [25] conducted sentiment analysis on Bengali and Romanized Bengali text, categorizing the data into positive, negative and ambiguous using deep learning and attaining 78 percent accuracy. In that paper the dataset contains Romanized Bengali text; as a result, sentences contain spelling mistakes and are grammatically incorrect. An analysis of how other ML algorithms perform with this type of data is absent from that paper. A few studies have been conducted on Bengali abusive text detection. Paper [14] worked with binary classification: using 300 Facebook comments as a dataset, the author proposed an algorithm to detect abusive comments, but no predictive algorithm is used. Another paper [15] worked with a total of 2500 Bengali data samples, collected only from popular Facebook pages. The authors used Support Vector Machine (SVM), Random Forest (RF) and Multinomial Naïve Bayes (MNB) and showed a good analysis using ML algorithms. However, in both papers good preprocessing techniques like stemming are absent. The only paper whose classifier included a preprocessing technique is [16]. In paper [16], a stemming process was followed to get the root form of a Bengali word, but the stemming process did not follow proper Bengali grammatical rules and only removes the suffix of a Bengali word, for which</s>
|
<s>getting better performance was not possible for all types of data. However, all of these issues have been considered in this paper.
III. METHODOLOGY
Fig. 1 depicts the workflow of the whole process: raw data is preprocessed in 3 steps, and 3 different tokenizers are used before sending the data to the algorithms.
A. Dataset Collection
Fig. 2 shows the number of data samples in each class (slang, religious hatred, personal attack, politically violated, antifeminism, positive and neutral). For this research, data were collected from the public comment sections of different social sites and online resources, e.g. YouTube, Prothom Alo Online [18] and different Facebook pages. In the dataset, positive and neutral data are also present as two different classes. The total dataset size is 4700, labeled in seven different classes: slang, religious hatred, personal attack, politically violated, antifeminism, positive and neutral.
B. Preprocessing
As a step of preprocessing, we collected comments which contain only Bengali using a language detector, an open-source Python library for language detection. Then all punctuation, whitespace, emoticons and digits were removed from the dataset. Each data sample was labeled manually according to its class.
Stemming: In Natural Language Processing (NLP), stemming is an important preprocessing technique for text analysis. A stemming approach is needed in NLP to reduce a word to its basic or root form. To get the root form of a Bengali word, five rules are applied from a Bengali grammar book [24].
• Rule 1: Article (পদাশ্রিত নির্দেশক) Inflection: In Bengali, an article is added at the end of numeric words, demonstrative pronouns and other words. If the article is removed from the Bengali word, the word is turned into its root form. Example: টাকাটা → টাকা, removed ‘টা’ (Ta); সারাটি → সারা, removed ‘টি’ (Ti); ওটি → ও, removed ‘টি’ (Ti)
• Rule 2: Number (বচন) Inflection: Removing the singular or plural marker from a word gives the root form of that word. Example: ছাত্ররা → ছাত্র, removed ‘রা’ (Ra); খাতাখানা → খাতা, removed ‘খানা’ (khana)
• Rule 3: Suffix (তদ্ধিত প্রত্যয়) Inflection: If the suffix is removed from the Bengali word, the word is turned into its root form. Example: চোরা → চোর, removed ‘া’ (a); জমিদারি → জমিদার, removed ‘ি’ (i)
• Rule 4: Verbal Root (ধাতুর মূল) Inflection: Another way to get the root word is verbal root inflection. Example: খাওয়া → খা, removed ‘ওয়া’ (oa); খাওন → খা, removed ‘ওন’ (on)
• Rule 5: Bibhakti (বিভক্তি) Inflection: A bibhakti is attached to the root word; if it is removed from the Bengali word, the word is turned into its root form. Example: চেয়ারটিতে → চেয়ারটি, removed ‘তে’ (te); যাচ্ছেন → যায়, replaced by ‘য়’ (yo). Here the root word of যাচ্ছেন is যা, but for practical purposes the ending is replaced by ‘য়’.
TABLE I STEMMING EXAMPLE
Before stemming: ছবিটিকে নিষিদ্ধ করা হয়েছে / After stemming: ছবি নিষিদ্ধ করা হয়
Before stemming: কয়েক মাসের মধ্যেই সারা দেশে কার্যক্রম চালু হচ্ছে / After stemming: কয়েক মাস মধ্যে সারা দেশে কার্যক্রম চালু হয়
Table</s>
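A minimal suffix-stripping stemmer along the lines of the five rules above can be sketched as follows. The suffix list here is a small illustrative subset drawn from the examples, not the paper's complete rule set (which also handles replacements such as Rule 5's যাচ্ছেন → যায়):

```python
# Illustrative subset of inflection markers taken from the rules above.
SUFFIXES = ["খানা", "ওয়া", "ওন", "টা", "টি", "রা", "তে"]

def stem(word):
    """Strip the first matching suffix, trying longer suffixes first."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf):
            return word[:-len(suf)]
    return word
```

For example, stem('টাকাটা') gives 'টাকা' and stem('খাতাখানা') gives 'খাতা', matching Rules 1 and 2; words without a listed suffix pass through unchanged.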
|
<s>I shows sentences before and after stemming.
C. Parameter Tuning for Algorithms
TABLE II PARAMETER TUNING
Multinomial Naïve Bayes: tried alpha = [0.0001, 0.001, 0.01, 1]; best: alpha = 0.01.
LinearSVC: tried multi-class = [crammer, ovr], iterations = [800, 1000, 1300], tolerance = [0.0001, 0.001, 0.01]; best: multi-class = ovr, iterations = 800, tolerance = 0.0001.
Logistic Regression: tried iterations = [800, 1000, 1300], tolerance = [0.0001, 0.001, 0.01], multi-class = [ovr, auto]; best: multi-class = ovr, iterations = 800, tolerance = 0.0001.
Random Forest classifier: tried number of trees = [100, 500, 1000], maximum depth = [50, 75, 100]; best: number of trees = 1000, maximum depth = 100.
Artificial Neural Network: tried number of layers = [2, 3, 4], hidden layers = [1, 2], activation functions = [softmax, relu], dropout = [0.1, 0.2, 0.3], batch size = [64, 128, 256]; best: number of layers = 3, hidden layers = 1, activation function = softmax, batch size = 256, dropout = 0.1.
Recurrent Neural Network with Long Short Term Memory: tried batch size = [16, 32, 64, 128], number of epochs = [5, 7, 10, 15], optimizer = [RMSprop, Adam, Nadam], dropout rate = [0.0, 0.1, 0.2, 0.3], recurrent dropout = [0.0, 0.1, 0.2, 0.3]; best: batch size = 16, number of epochs = 5, optimizer = Nadam, dropout rate = 0.1, recurrent dropout = 0.2.
Table II lists the parameter values which were run using 10-fold cross-validation; the best values were found for each classifier with the uni-gram range. Hyperparameters are not passed directly to the estimator, and it is possible to increase the performance of an algorithm by tuning its parameters. GridSearchCV is used for parameter tuning. To identify the best configuration for all algorithms we use different settings; Table II shows all parameters and the best values for each algorithm. 10-fold cross-validation is used to train and test the data. The best parameters are used in all subsequent processing.
D. Extraction of Features and Tokenization
We evaluate the process using three types of tokenizers to analyze performance and optimize the result.
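The 10-fold protocol used for tuning splits the data into ten parts, training on nine and testing on the remaining one in turn. A minimal index-splitting sketch (the experiments themselves presumably rely on scikit-learn's cross-validation utilities rather than this hand-rolled version):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train, test) index lists; each sample is tested exactly once."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size
```

Averaging a model's score across the k held-out folds gives the cross-validated estimate used to pick the best parameter combination.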
For the machine learning based algorithms, we use CountVectorizer and a tf-idf vectorizer to find the best model, with an N-gram range of (1, 3) for extracting features from the text. For the deep learning based approach, word embedding is used for tokenization.
IV. RESULT ANALYSIS
A. Applying machine learning algorithms using CountVectorizer
The ML algorithms are trained with CountVectorizer and the test results are shown in Fig. 3. Multinomial Naïve Bayes gains 79.66%, LinearSVC 80.93%, Logistic Regression 77.96% and the Random Forest classifier 73.72% accuracy.
Fig. 3. Accuracy of different algorithms using CountVectorizer.
From Fig. 3, LinearSVC with CountVectorizer achieves higher accuracy than the other classifiers. Besides, the accuracy of Multinomial Naïve Bayes is also noticeable, being very close to LinearSVC. The results of the other two algorithms are poor.
B. Applying machine learning algorithms using the tf-idf vectorizer
The ML algorithms are trained with the tf-idf vectorizer. The test results are shown in Fig. 4. Multinomial Naïve Bayes gains 77.11%, LinearSVC 80.29%, Logistic Regression 75.42% and the Random Forest classifier 74.15% accuracy.
TABLE III PRECISION, RECALL, F1_SCORE, SUPPORT USING COUNTVECTORIZER
Class_name Precision Recall F1_score Support
Slang 0.84 0.93 0.88 80
Religious_hatred 0.86 0.87 0.86 69
Personal_attack 0.72 0.74 0.73 57
Politically_violated 0.78 0.82 0.80 56
Antifeminism 0.79 0.67 0.72 45
Positive 0.86 0.88 0.87 117
Neutral 0.69 0.56 0.62 48
Table III lists the precision, recall and F1 score for all classes of the Linear</s>
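The two feature extractors compared in this section can be sketched in plain Python. This mirrors scikit-learn's CountVectorizer and TfidfVectorizer only loosely: the idf formula follows scikit-learn's smoothed variant, but the L2 row normalisation and the n-gram handling are omitted, and the function names are our own:

```python
import math
from collections import Counter

def count_vectors(docs):
    """Bag-of-words counts over a sorted vocabulary (CountVectorizer-style)."""
    vocab = sorted({w for d in docs for w in d.split()})
    return vocab, [[Counter(d.split())[w] for w in vocab] for d in docs]

def tfidf_vectors(docs):
    """Raw-count tf scaled by smoothed idf: log((1+n)/(1+df)) + 1."""
    vocab, counts = count_vectors(docs)
    n = len(docs)
    df = [sum(1 for row in counts if row[j] > 0) for j in range(len(vocab))]
    idf = [math.log((1 + n) / (1 + d)) + 1 for d in df]
    return vocab, [[c * w for c, w in zip(row, idf)] for row in counts]
```

The contrast is that raw counts weight all words equally, while tf-idf down-weights words that appear in every document.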
|
<s>SVC classifier using CountVectorizer.
Fig. 4. Accuracy of different algorithms using the TF-IDF Vectorizer.
From Fig. 4, we see that LinearSVC with the tf-idf vectorizer shows better accuracy than the other classifiers, although from Fig. 3 we see that LinearSVC with CountVectorizer shows slightly better accuracy overall.
TABLE IV PRECISION, RECALL, F1_SCORE, SUPPORT USING TF-IDF VECTORIZER
Class_name Precision Recall F1_score Support
Slang 0.82 0.86 0.84 99
Religious_hatred 0.74 0.78 0.76 51
Personal_attack 0.82 0.73 0.77 51
Politically_violated 0.82 0.92 0.87 66
Antifeminism 0.73 0.59 0.65 41
Positive 0.87 0.90 0.88 115
Neutral 0.67 0.59 0.63 49
Table IV. Precision, Recall and F1 score of the LinearSVC classifier using the tf-idf vectorizer.
C. Applying deep learning algorithms
The deep learning based algorithms are trained with text-to-sequence input using the tokenizer provided by Keras. From Fig. 5 we see that the ANN achieves 34.65% accuracy, whereas the RNN with LSTM cells achieves 82.20% accuracy. The RNN shows better accuracy than the artificial neural network, and the performance gap between these two algorithms is noticeable.
Fig. 5. Accuracy of different deep learning algorithms.
TABLE V PRECISION, RECALL, F1_SCORE, SUPPORT USING RNN
Class_name Precision Recall F1_score Support
Slang 0.89 0.92 0.91 89
Religious_hatred 0.83 0.84 0.84 70
Personal_attack 0.80 0.80 0.80 56
Politically_violated 0.78 0.89 0.83 65
Antifeminism 0.91 0.67 0.77 30
Positive 0.88 0.81 0.84 112
Neutral 0.62 0.66 0.64 50
Table V. Precision, Recall and F1 score for all classes using the RNN.
D. Comparison between RNN, LinearSVC with tf-idf Vectorizer and LinearSVC with CountVectorizer
From Fig. 6, we see that the RNN with an LSTM cell outperforms all other machine learning algorithms by achieving an accuracy of 82.20%, the highest accuracy.
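The per-class precision, recall and F1 values in Tables III, IV and V follow the standard definitions; a stdlib sketch of how such a row is computed (scikit-learn's classification_report is the usual tool for producing the full table):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Compute precision, recall and F1 for one class treated as positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Running this once per class, with the remaining classes pooled as negatives, reproduces the one-vs-rest per-class rows in the tables.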
The RNN gives the best performance with high precision (0.83), high recall (0.82) and a high F1-score (0.82), while LinearSVC with CountVectorizer achieved precision 0.81, recall 0.81 and F1-score 0.81, and LinearSVC with the tf-idf vectorizer achieved 0.80 in precision, recall and F1-score. For our multiclass classification problem, a 90% training and 10% testing split brings the highest accuracy. In the RNN model, 3 hidden layers are used. The Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer, a modified version of Adam, combines Adam, RMSProp and momentum. As our classification is multiclass, categorical cross-entropy with the Nadam optimizer achieved the highest accuracy.
Fig. 6. The RNN outperforms all other algorithms, achieving an accuracy of 82.20%.
The dropout for each node was kept at 0.1 and the recurrent dropout at 0.2 to prevent overfitting of the model. The TensorFlow library, an open-source Python library for deep learning based algorithms, is adopted for the RNN model.
V. CONCLUSION
This experiment helps to analyze the performance of all the algorithms while following Bengali grammar rules. In this paper, among the machine learning and deep learning algorithms, the RNN with an LSTM cell performs best at detecting Bengali abusive text. In the future, this experiment will be extended by applying other deep learning algorithms such as a Deep Neural</s>
|
<s>Network (DNN), Convolutional Neural Network (CNN) with Bengali spelling correcting process for detecting abusive Bengali text. REFERENCES [1] Global social media research summary 2019. [Online]. Available: https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/.[Accessed On: 10- Feb- 2019]. [2] J. Islam, M. Mubassira, M. R. Islam and A. K. Das, "A Speech Recognition System for Bengali Language using Recurrent Neural Network," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019. [3] 30 Bangladesh’s Digital Marketing and Social Media Marketing Stats and Facts. [Online]. Available: https://www.soravjain.com/digital-marketing-and-social-media-marketing-stats-and-facts-of-bangladesh/.[Accessed On: 10- Feb- 2019]. [4] http://digiology.xyz/demographics-facebook-population-bangladesh-april-2018/.[Accessed On: 15- Feb- 2019]. [5] "A Majority of Teens Have Experienced Some Form of Cyberbullying", Pew Research Center: Internet, Science & Tech, 2019. [Online]. Available: http://www.pewinternet.org/2018/09/27/a-majority-of-teens-have-experienced-some-form-of-cyberbullying/.[Accessed On: 10- Feb- 2019]. [6] R. A. Tuhin, B. K. Paul, F. Nawrine, M. Akter and A. K. Das, " An Automated System of Sentiment Analysis from Bangla Text using Supervised Learning Techniques," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019. [7] Cyber Safety in Bangladesh: 32pc children bullied online.[Online]. Available: https://www.thedailystar.net/country/safer-internet-day-2019-prevent-bullying-of-children-online-in-bangladesh-unicef-1697785. [Accessed On: 10- Feb- 2019]. [8] M. Akter, F. T. Zohra and A. K. Das, “Q-MAC: QoS and mobility aware optimal resource allocation for dynamic application offloading in mobile cloud computing,” 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, 2017, pp. 803-808. 
[9] Cyber violence against women: the case of Bangladesh.[Online]. Available: https://www.genderit.org/articles/cyber-violence-against-women-case-bangladesh. [Accessed On: 03- Mar- 2019]. [10] A. K. Das, A. Ashrafi and M. Ahmmad, “Joint Cognition of Both Human and Machine for Predicting Criminal Punishment in Judicial System," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019. [11] E. Chandrasekharan, M. Samory, A. Srinivasan, E. Gilbert , “The Bag of Communities: Identifying Abusive Behavior Online with Preexisting Internet Data”, CHI 2017, May 6–11, 2017, Denver, CO, USA. [12] T. Chu, K. Jue, M. Wang,” Comment Abuse Classification with Deep Learning”. [13] T. Adhikary, A. K. Das, M. A. Razzaque, A. Almogren, M. Alrubaian, and M. M. Hassan, “Quality of Service Aware Reliable Task Scheduling in Vehicular Cloud Computing,” Mobile Networks and Applications, Volume 21, Issue 3, pp 482-493, June 2016. [14] M. G. Hussain, T. A. Mahmud, W. Akthar, “An Approach to Detect Abusive Bangla Text”, International Conference on Innovation in Engineering and Technology (ICIET) 27-29 December, 2018. [15] S. C. Eshan, M. S. Hasan,” An application of Machine Learning to Detect Abusive Bengali Text”. [16] A. M. Ishmam, J. Arman,” Automated Hate Speech Detection for Bengali Language in Social Media using Natural Language Processing and Machine Learning Approach”. [17] M. A. A. Mamun, J. A. Puspo and A. K. Das, “An intelligent smartphone based approach using IoT for ensuring safe driving,” 2017 International Conference on Electrical Engineering and Computer Science (ICECOS), Palembang, 2017, pp. 217-223. [18] Prothom Alo [Online]. Available: https://blog.mukto-mona.com. [Accessed On: 10- Feb- 2019]. [19] T. Adhikary, A. K. Das, M. A. Razzaque, M. Alrubaian, M. M. Hassan, and A. Alamri, “Quality of service aware cloud resource provisioning for social multimedia services and applications,” Multimedia Tools and Applications,</s>
|
<s>Volume 76, Issue 12, pp 14485-14509, June 2017.
[20] Women increasingly falling prey to cyberbullying. [Online]. Available: http://m.theindependentbd.com/post/171850. [Accessed on: 11-Jan-2019].
[21] "Hindus again attacked in Bangladesh on false rumours of defaming Muhammad in facebook," Struggle for Hindu Existence, 2019. [Online]. Available: https://hinduexistence.org/2014/05/07/hindus-again-attacked-in-bangladesh-on-false-rumours-of-defaming-muhammad-in-facebook/. [Accessed on: 18-Feb-2019].
[22] A. K. Das, T. Adhikary, M. A. Razzaque, M. Alrubaian, M. M. Hassan, Z. Uddin, and B. Song, "Big media healthcare data processing in cloud: a collaborative resource management perspective," Cluster Computing, Volume 20, Issue 2, pp 1599-1614, June 2017.
[23] A. Tashnim, S. Nowshin, F. Akter and A. K. Das, "Interactive interface design for learning numeracy and calculation for children with autism," 2017 9th International Conference on Information Technology and Electrical Engineering (ICITEE), Phuket, 2017, pp. 1-6.
[24] M. Chowdhury, M. H. Chowdhury, "NCTB Bangla Grammer for Class 9-10".
[25] A. Hassan, M. R. Amin, A. K. A. Azada, N. Mohammed, "Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) using Deep Recurrent models".</s>
Detecting Suspicious Texts Using Machine Learning Techniques

applied sciences — Article

Omar Sharif 1, Mohammed Moshiul Hoque 1,*, A. S. M. Kayes 2,*, Raza Nowrozy 2,3 and Iqbal H. Sarker 1
1 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chittagong 4349, Bangladesh; omar.sharif@cuet.ac.bd (O.S.); iqbal@cuet.ac.bd (I.H.S.)
2 Department of Computer Science and Information Technology, La Trobe University, Plenty Road, Bundoora, VIC 3086, Australia; r.nowrozy@latrobe.edu.au
3 College of Engineering and Science, Victoria University, Ballarat Road, Footscray, VIC 3011, Australia
* Correspondence: moshiul_240@cuet.ac.bd (M.M.H.); a.kayes@latrobe.edu.au (A.S.M.K.)
Received: 5 August 2020; Accepted: 12 September 2020; Published: 18 September 2020

Abstract: Due to the substantial growth of internet users and its spontaneous access via electronic devices, the amount of electronic content has been growing enormously in recent years through instant messaging, social networking posts, blogs, online portals and other digital platforms. Unfortunately, the misapplication of technologies has increased with this rapid growth of online content, which leads to the rise in suspicious activities. People misuse the web media to disseminate malicious activity, perform illegal movements, abuse other people, and publicize suspicious contents on the web. The suspicious contents are usually available in the form of text, audio, or video, whereas text contents have been used in most of the cases to perform suspicious activities. Thus, one of the most challenging issues for NLP researchers is to develop a system that can identify suspicious text efficiently from the specific contents. In this paper, a Machine Learning (ML)-based classification model is proposed (hereafter called STD) to classify Bengali text into non-suspicious and suspicious categories based on its original contents.
A set of ML classifiers with various features has been used on our developed corpus, consisting of 7000 Bengali text documents, where 5600 documents were used for training and 1400 documents for testing. The performance of the proposed system is compared with the human baseline and existing ML techniques. The SGD classifier with ‘tf-idf’ and the combination of unigram and bigram features achieves the highest accuracy of 84.57%.

Keywords: natural language processing; suspicious text detection; Bengali language processing; machine learning; text classification; feature extraction; suspicious corpora

1. Introduction

Due to the effortless access to the Internet, the world wide web, blogs, social media, discussion forums, and online platforms via digital gadgets have been producing a massive volume of digital text content in recent years. It is observed that not all of the contents are genuine or authentic; instead, some contents are faked, fabricated, forged, or even suspicious. It is very unpropitious that, with this rapid growth of digital contents, the ill-usage of the Internet has also multiplied, which governs the boost in suspicious activities [1]. Suspicious contents are increasing day by day because of ill-usage of the Internet by a few individuals to promulgate fierceness, share illegal activities, bully other people, perform smishing, publicize incitement-related contents, spread fake news, and so on. According to the FBI’s Internet Crime Complaint Center (IC3) report, a total of 467,361 complaints were received in the year 2019 related to internet-facilitated criminal activity [2]. Moreover, several extremist users use
(Appl. Sci. 2020, 10, 6527; doi:10.3390/app10186527; www.mdpi.com/journal/applsci)
social media or blogs to spread suspicious and
violent contents, which can be considered one kind of threat to national security [3].

Around 245 million people speak Bengali as their native tongue, which makes it the 7th most spoken language in the world [4]. However, research on Bengali Language Processing (BLP) is currently in its initial stage, and no significant amount of work has been conducted yet compared with English, Arabic, Chinese, or other European languages, which makes Bengali a resource-constrained language [5]. As far as we are aware, no research has been conducted up to now on suspicious text detection in the Bengali language. However, such systems are required to ensure security as well as mitigate national threats in cyber-space.

Suspicious contents are those contents that hurt religious feelings, provoke people against government and law enforcement agencies, motivate people to perform acts of terrorism, perform criminal acts by phishing, smishing, and pharming, instigate a community without any reason, and execute extortion acts [6–9]. As examples, social media has already been used as a medium of communication in the Boston attack and the revolution in Egypt [10]. The suspicious contents can be available in the form of video, audio, images, graphics, and text. However, text plays an essential role in this context as it is the most widely used medium of communication in cyber-space. Moreover, the semantic meaning of a conversation can be retrieved by analyzing text contents, which is difficult in other forms of content. In this work, we focus on analyzing text content and classifying the content into suspicious or non-suspicious.

A text could be detected as suspicious if it contains suspicious contents. It is impossible to detect suspicious texts manually from the enormous amount of internet text contents [11]. Therefore, the automatic detection of suspicious text contents should be developed.
Responsible agencies have been demanding a smart tool/system that can detect suspicious text automatically. It will also be helpful to identify potential threats in the cyber-world which are communicated through text contents. An automatic suspicious text detection system can easily and promptly detect fishy or threatening texts. Law enforcement authorities can then take appropriate measures immediately, which in turn helps to reduce virtual harassment and suspicious and criminal activities mediated online. However, it is quite a challenging task to classify Bengali text contents into the suspicious or non-suspicious class due to the language's complex morphological structure, enormous number of synonyms, and rich variations of verb auxiliary with subject, person, tense, aspect, and gender. Moreover, scarcity of resources and the lack of a benchmark Bengali text dataset are the major barriers to building a suspicious text detection system and make it more difficult to implement compared to other languages. Therefore, the research question addressed in this paper is: “RQ: How can we effectively classify potential Bengali texts into suspicious and non-suspicious categories?”

To address this research question, we first develop a dataset of suspicious and non-suspicious texts considering a number of well-known Bengali data sources, such as Facebook posts, blogs, websites, and newspapers. In order to process the textual data, we take into account unigram, bigram, and trigram features using tf-idf and a bag-of-words feature extraction technique. Once the feature extraction has been done, we employ
the most popular machine learning classifiers (i.e., logistic regression, naive Bayes, random forest, decision tree, and stochastic gradient descent) to classify whether a given text is suspicious or not. We have also performed a comparative analysis of these machine learning models utilizing our collected datasets. The key contributions of our work are illustrated in the following:

• Develop a corpus containing 7000 text documents labelled as suspicious or non-suspicious.
• Design a classifier model to classify Bengali text documents into suspicious or non-suspicious categories on the developed corpus by exploring different feature combinations.
• Compare the performance of the proposed classifier with various machine learning techniques as well as the existing method.
• Analyze the performance of the proposed classifier on different distributions of the developed dataset.
• Exhibit a performance comparison between human experts (i.e., the baseline) and machine learning algorithms.

We expect that the work presented in this paper will play a pioneering role in the development of Bengali suspicious text detection systems. The rest of the paper is organized as follows: Section 2 presents related work. In Section 3, a brief description of the development of the suspicious Bengali corpus and several of its properties is given. Section 4 explains the proposed Bengali suspicious text document classification system and its significant constituents. Section 5 describes the evaluation techniques used to assess the performance of the proposed approach; results of the experiments are also presented in this section. Finally, in Section 6, we conclude the paper with a summary and discuss future scopes.

2. Related Work

Suspicious content detection is a well-studied research issue for highly resourced languages like Arabic, Chinese, English, and other European languages.
However, no meaningful research activities have been conducted yet to classify text with suspicious content in the BLP domain. A machine learning-based system was developed to detect the promotion of terrorism by analyzing the contents of a text. Iskandar et al. [12] collected data from Facebook, Twitter, and numerous micro-blogging sites to train the model. By performing a critical analysis of different algorithms, they showed that Naïve Bayes is best suited for their work as it deals with probabilities [13]. Johnston et al. [14] proposed a neural network-based system which can classify propaganda related to Sunni (Sunni is a class of Islamic believer group of Muslims: www.britannica.com/topic/Sunni) extremist users on social media platforms. Their approach obtained 69.9% accuracy on the developed dataset. A method to identify suspicious profiles within social media was presented where normalized compression distance was utilized to analyze text [15]. Jiang et al. [16] discuss current trends and provide future directions for determining suspicious behaviour in various mediums of communication. Researchers investigated the novelty of true and false news on 126,000 stories that were tweeted 4.5 million times using ML techniques [17]. An automated system explained the technique of detecting hate speech from Twitter data [18]; logistic regression with regularization outperforms other algorithms by attaining an accuracy of 90%. An intelligent system was introduced to detect suspicious messages from Arabic tweets [19]. This system yields a maximum accuracy of 86.72% using SVM with a limited number of data and classes. Dinakar et al. [20] developed a corpus of YouTube comments for detecting textual cyberbullying using a multiclass and binary
classifier. A novel approach was presented for detecting Indonesian hate speech by using SVM, lexical, word unigram and tf-idf features [21]. A method was described to detect abusive content and cyberbullying on Chinese social media; the model achieved 95% accuracy by using LSTM and taking characteristic and behavioural features of a user [22]. Hammer [23] discussed a way of detecting violence and threats toward minority groups in online discussions. This work considered manually annotated sentences with bigram features of essential words.

Since Bengali is an under-resourced language, the amount of digitized text (related to suspicious, fake, or instigating content) is quite small. In addition, no benchmark dataset is available on suspicious text. For these reasons, very few research activities have been carried out in this area of BLP, and they are mainly related to hate, threat, fake, and abusive text detection. Ishmam et al. [24] compare machine learning and deep learning-based models to detect hateful Bengali language. Their method achieved 70.10% accuracy by employing a gated recurrent neural network (GRNN) on a dataset of six classes and 5 K documents collected from numerous Facebook pages. The reason behind this poor accuracy is the low number of training documents in each class (approximately 900). Most importantly, they did not define the classes clearly, which is very crucial for the hateful text classification task. Recent work explained different machine and deep learning techniques to detect abusive Bengali comments [25]. The model acquired 82% accuracy by using an RNN on 4700 Bengali text documents. Ehsan et al. [26] discussed another approach for detecting abusive Bengali text by combining different n-gram features and ML techniques. Their method obtained the highest accuracy for SVM with trigram features. A method to identify malicious contents from Bengali text is presented by Islam et al.
[27]. This method achieved 82.44% accuracy on an unbalanced dataset of 1965 instances by applying the Naive Bayes algorithm. Hossain et al. [28] developed a dataset of 50 k instances to detect fake news in Bangla; they extensively analyzed linguistic as well as machine learning-based features. A system demonstrated the technique to identify threats and abusive Bengali words in social media using SVM with a linear kernel [29]. The model experimented with 5644 text documents and obtained a maximum accuracy of 78%.

As far as we are aware, no remarkable research conveyed so far focuses on detecting suspicious Bengali text. Our previous approach used logistic regression with the BoW feature extraction technique to detect suspicious Bengali text contents [30]. However, that work considered only 2000 text documents and achieved an accuracy of 92%. In this work, our main concern is to develop an ML-based suspicious Bengali text detection model trained on our new dataset by exploring various n-gram features and feature extraction techniques.

3. A Novel Suspicious Bangla Text Dataset

Up until this date, no dataset is available for identifying Suspicious Bengali Texts (SBT). Therefore, we developed the Suspicious Bengali Text Dataset (SBTD), a novel annotated corpus, to serve our purpose. The following subsection explains the definition of SBT with its inherent characteristics and detailed statistics of the developed SBTD.

3.1. Suspicious Text and Suspicious Text Detection

A Suspicious Text Detection
(STD) system classifies a text $t_i \in T$ from a set of texts $T = \{t_1, t_2, ..., t_m\}$ into a class $c_i \in C$ from a set of two classes $C = \{C_s, C_{ns}\}$. The task of STD is to automatically assign $t_i$ to $c_i$: $\langle t_i, c_i \rangle$.

Deciding whether a Bengali text is suspicious or not is not so simple, even for language experts, because of its complicated morphological structure, rich variation in sentence formation, and lack of defined related terminology. Therefore, it is crucial to have a clear definition of SBT to make the task of STD smoother. In order to introduce a reasonable definition concerning the Bengali language, several definitions of violence, incitement, suspicious, and hatred contents have been analyzed. Most of the information, collected from different social networking websites and scientific papers, is summarized in Table 1.

Table 1. Definitions of hatred, incitement and violent contents according to different social networking websites, organizations, and scientific studies.

Facebook: “Contents that incite or facilitate serious violence pose credible threat to the public or personal safety, instructions to make weapons that could injure or kill people and threats that lead to physical harm towards private individuals or public figures” [6].
Twitter: “One may not promote terrorism or violent extremism, harasses or threaten other people, incite fury toward a particular or a class of people” [31].
YouTube: “Contents that incite others to promote or commit violence against individuals and groups based on religion, nationality, ethnicity, sex/gender, age, race, disability, gender identity/sexual orientation” [32].
Council of Europe (COE): “Expression which incite, spread, promote or justify violence toward a specific individual or class of persons for a variety of reasons” [33].
Paula et al.: “Language that glorify violence and hate, incite people against groups based on religion, ethnic or national origin, physical appearance, gender identity or other” [7].
The majority of the quoted definitions focus on similar attributes such as incitement of violence, promotion of hate and terrorism, and threatening a person or group of people. These definitions cover the larger aspect of suspicious content across video, text, image, cartoon, illustration, and graphics. Nevertheless, in this work, we concentrate on detecting suspicious content from text contents only. Analyzing the contents and properties of these definitions guided us to present a definition of suspicious Bengali text as follows:

“Suspicious Bengali texts are those texts which incite violence, encourage terrorism, promote violent extremism, instigate political parties, excite people against a person or community based on some specific characteristics such as religious beliefs, minority, sexual orientation, race and physical disability.”

3.2. Development of SBT Corpora

Bengali is a resource-constrained language due to its scarcity of digitized text contents and unavailability of benchmark datasets. By considering the explanation of SBT and the characteristics of suspect activity defined by the U.S. Department of Homeland Security, we accumulated the text data from various online sources [34]. We endorsed the same technique of developing datasets as explained by Das et al. [35]. Figure 1 illustrates the process of dataset development.

Figure 1. Process of dataset accumulation.

Data crowd-sourcing: Figure 2 shows the total number of texts collected from different sources in terms of suspicious (S) and non-suspicious (NS) classes. We have crawled a total of 7115 texts
among them, 3557 texts are S and 3558 texts are NS. In the case of the suspicious class, 12.2% of the source texts were collected from websites (W), 12% from Facebook comments (FC), and 10.2% from newspapers (N). Other sources such as Facebook posts (FP) and online blogs (OB) contributed 8.9% and 5.4% of the text data. On the other hand, a significant portion of non-suspicious source texts was collected from the newspapers (30.4%). A total of 7.8% of non-suspicious texts were collected from OB, 5.6% from W, and 3.2% from FC. A tiny portion of the texts was accumulated from various sources (such as novels and articles) in both classes. As newspaper sources, the three most popular Bangladeshi newspapers are considered (the daily Jugantor, the daily Kaler Kontho, and the daily Prothom Alo).

Data labelling: Crowd-sourced data are initially labelled by five undergraduate students of Chittagong University of Engineering and Technology who have 8–12 months of experience in the BLP domain. They are also doing their undergraduate theses on BLP and have attended several seminars, webinars, and workshops on computational linguistics and NLP.

Label verification: An expert verifies the data labels. A professor, a PhD student having more than five years of experience, or any researcher having vast experience in the BLP domain can be considered an expert. The final labels (Cns, Cs) of the data are decided by pursuing the process described in Algorithm 1.

Figure 2. Source texts distribution in suspicious (S) and non-suspicious (NS) categories.
The acronyms FP, FC, W, OB, N, and M denote Facebook pages, Facebook comments, websites, online blogs, newspapers, and miscellaneous, respectively.

Algorithm 1: Process of data labelling

T ← text dataset
A ← set of annotators
Final_Label ← will contain the final labels
Initial_Label, Expert_Label
for i ∈ T do
    CountS ← 0, CountNS ← 0  (initialization of suspicious and non-suspicious counts)
    for aj ∈ A do
        if aj == Cs then
            CountS = CountS + 1
        else
            CountNS = CountNS + 1
        end
    end
    Initial_Label = CountNS > CountS ? (0) : (1)
    Expert_Label = (0) || (1)
    if Initial_Label == Expert_Label then
        Final_Label[i] = Initial_Label
    else
        Final_Label[i] = 'x'
    end
    i = i + 1
end

For each text in T, the annotator labels are counted; aj indicates the jth annotator's label for the ith text. If an annotator label is suspicious (Cs), then the suspicious count (CountS) is increased; otherwise, the non-suspicious count (CountNS) is increased. Majority voting [36] decides the initial label: if the non-suspicious count is greater than the suspicious count, the initial label will be non-suspicious (0); otherwise, suspicious (1). After that, the expert labels the text as either non-suspicious or suspicious. If the initial label matches the expert label, it becomes the final label. When a disagreement occurs, the label is marked with ‘x’, and the final label is decided by a discussion between the experts and the annotators. If they agree on a label, it is added to SBTD; otherwise, it is discarded. It is noted that most of the disagreements arose for data of the suspicious class. Among 900 disagreements, only 5–7% occurred for non-suspicious classes. A
small number of labels and their corresponding texts were discarded from the crawled dataset due to the disagreement between experts and annotators: precisely, 57 for the suspicious class and 58 for the non-suspicious class. We observed a 9.57% deviation in agreement among annotators for the suspicious class and a 2.34% deviation for the non-suspicious class. This deviation is calculated by averaging pairwise deviations between annotators. Cohen’s kappa [37] between the human expert and the initial annotators is 88.6%, which indicates a high degree of similarity between them. Table 2 shows sample data from our corpus. Our data are stored in the corpus in Bangla form, but the Banglish form and the English translation are given here for better understanding.

Table 2. A sample text with corresponding metadata in SBTD.

Domain: https://www.prothomalo.com/
Source: Newspaper
Crawling Date: 18 January 2019
Text (ti): (Banglish form: “BPL a ek durdanto match gelo. Khulna Titans ke tin wicket a hariye dilo comilla victoria”). (English form: “A great match was played in BPL. Comilla Victoria defeated Khulna Titans by 3 wickets”)
Final Label: 0 (Non-Suspicious)

Table 3 summarizes several properties of the developed dataset. Creating SBTD was the most challenging task of our work because all the texts demanded manual annotation; it took around ten months of relentless work to build SBTD. Some metadata have also been collected with the text.

Table 3. Statistics of the dataset.

Attributes            | Suspicious (Cs) | Non-Suspicious (Cns)
Number of documents   | 3500            | 3500
Total words           | 95,629          | 252,443
Total unique words    | 18,236          | 36,331
Avg. number of words  | 27.32           | 72.12
Maximum text length   | 427             | 2102
Minimum text length   | 3               | 5
Size (in bytes)       | 688,128         | 727,040

4. Proposed System

The primary objective of this work is to develop a machine learning-based system that can identify suspicious content in Bengali text documents.
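The majority-vote labelling of Algorithm 1 in Section 3.2 can be sketched in a few lines of Python; the function name and the return convention are illustrative assumptions, not taken from the authors' code:

```python
def final_label(annotator_labels, expert_label):
    """Sketch of Algorithm 1: majority vote among annotators, then expert check.

    Labels: 1 = suspicious (Cs), 0 = non-suspicious (Cns). Returns the final
    label, or 'x' when the initial label and the expert label disagree (such
    texts go to a discussion round and may be discarded).
    """
    count_s = sum(1 for a in annotator_labels if a == 1)
    count_ns = len(annotator_labels) - count_s
    # CountNS > CountS ? (0) : (1) -- ties fall to the suspicious class
    initial_label = 0 if count_ns > count_s else 1
    return initial_label if initial_label == expert_label else 'x'

print(final_label([1, 1, 1, 0, 0], expert_label=1))  # majority suspicious, expert agrees
print(final_label([0, 0, 1, 0, 0], expert_label=1))  # expert disagrees -> marked 'x'
```

With five annotators, as in the paper, a tie cannot occur; the tie-breaking branch only matters for an even number of annotators.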
Figure 3 shows a schematic process of the proposed system, which comprises four major parts: preprocessing, feature extraction, training, and prediction. Input texts are processed by following several preprocessing steps explained in Section 4.1. Feature extraction methods are employed on the processed texts to extract features. In the training phase, the exploited features are used to train the machine learning classifiers (i.e., stochastic gradient descent, logistic regression, decision tree, random forest, and multinomial naïve Bayes). Finally, the trained model is used for classification in the prediction step. The following subsections include detailed explanations of the significant parts of the proposed system.

Figure 3. Schematic process of the proposed suspicious text detection system.

4.1. Preprocessing

Preprocessing is used to transform raw data into an understandable form by removing inconsistencies and errors. Suppose a Bengali text document ti = (Banglish form) “Ei khulna titans ke, tin wickete hariye dilo comilla victoria, ?...|” (English translation: Comilla Victoria defeated this Khulna Titans by three wickets.) of the dataset T[] is preprocessed according to the following steps:

• Redundant character removal: Special characters, punctuation, and numbers are removed from each text ti of the dataset T[]. After this, ti becomes “Ei khulna titans ke tin wickete hariye dilo comilla victoria”.
• Tokenization: Each text document ti is detruncated into its constituent words. A word vector of dimension k is obtained by tokenizing a text ti having k words, where ti = w<1>, w<2>, ..., w<k>. Tokenization gives a list of words
of the input text, such as ti = [‘Ei’, ‘khulna’, ‘titans’, ‘ke’, ‘tin’, ‘wickete’, ‘hariye’, ‘dilo’, ‘comilla’, ‘victoria’].
• Removal of stop words: Words that have no contribution in deciding whether a text ti is (Cs) or (Cns) are considered unnecessary. Such words are dispelled from the document by matching against a list of stop words. Finally, after removing the stop words, the processed text, ti = “Khulna titans ke tin wickete hariye dilo comilla victoria” (English translation: Comilla Victoria defeated Khulna Titans by three wickets), will be used for training.

With the help of the above operations, a set of processed texts is created. These texts are stored chronologically in a dictionary in the form of array indexing A[t1] ... A[t7000] with a numeric (0, 1) label. Here, 0 and 1 represent the non-suspicious and suspicious class, respectively.

4.2. Feature Extraction

Machine learning models cannot learn directly from the texts that we have prepared. Feature extraction performs a numeric mapping on these texts to find some meaning. This work explored the bag of words (BoW) and term frequency-inverse document frequency (tf-idf) feature extraction techniques to extract features from the texts.

The BoW technique uses the word frequencies as features. Here, each cell gives the count (c) of a feature word (fwi) in a text document (ti). Unwanted words may get higher weights than the context-related words with this technique.
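As a concrete illustration of the counting behaviour described above, scikit-learn (which the paper names as its tooling) builds exactly this document-by-word count matrix with CountVectorizer; the toy Banglish corpus is our own stand-in for SBTD texts:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Two toy "preprocessed" documents in Banglish, standing in for SBTD texts
corpus = [
    "khulna titans ke tin wickete hariye dilo comilla victoria",
    "comilla victoria durdanto match khelo",
]

# BoW: each cell holds the raw count of a feature word in a document;
# max_features mirrors the paper's cap of 3000 most frequent words
bow = CountVectorizer(max_features=3000)
X = bow.fit_transform(corpus)

print(X.shape)       # (number of documents, number of feature words)
print(X.toarray())   # the BoW count matrix, one row per document
```

The resulting matrix is the (m × n) array the paper describes, with one row per document and one column per feature word.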
The tf-idf technique [38] tries to mitigate this weighting problem by calculating the tf-idf value according to Equation (1):

$$tfidf(fw_i, t_i) = tf(fw_i, t_i) \cdot \log \frac{m}{|\{t \in m : fw_i \in t\}|} \quad (1)$$

Here, $tfidf(fw_i, t_i)$ indicates the tf-idf value of word $fw_i$ in text document $t_i$, $tf(fw_i, t_i)$ indicates the frequency of word $fw_i$ in text document $t_i$, $m$ is the total number of text documents, and $|\{t \in m : fw_i \in t\}|$ represents the number of text documents $t$ containing word $fw_i$.

The tf-idf values of the feature words ($fw$) put more emphasis on words related to the context than on other words. To find the final weighted representation of the sentences, the Euclidean norm is computed after calculating the tf-idf values of the feature words of a sentence. This normalization sets a high weight on the feature words with smaller variance. Equation (2) computes the norm:

$$X_{norm(i)} = \frac{X_i}{\sqrt{(X_1)^2 + (X_2)^2 + ... + (X_n)^2}} \quad (2)$$

Here, $X_{norm(i)}$ is the normalized value for the feature word $fw_i$, and $X_1, X_2, ..., X_n$ are the tf-idf values of the feature words $fw_1, fw_2, ..., fw_n$, respectively. Features picked out by both techniques have been applied to the classifiers.

BoW and tf-idf feature extraction techniques are used to extract the features. Table 4 presents sample feature values for the first five feature words ($fw_1, fw_2, fw_3, fw_4, fw_5$) of the first four text samples ($t_1, t_2, t_3, t_4$) in our dataset. Features are exhibited by an array of size (m × n) having m rows and n columns. A total of 7000 text documents $t_1, t_2, ..., t_{7000}$ are represented in rows while all the feature words $fw_1, fw_2, ..., fw_{3000}$ are
represented in columns. In order to reduce the complexity and computational cost, the 3000 most frequent words are considered as the feature words among thousands of unique words.

Table 4. Small fragment of extracted feature values for the first four texts of the dataset.

Text | Technique | fw1  | fw2  | fw3  | fw4  | fw5
t1   | BoW       | 1    | 0    | 4    | 6    | 2
t1   | tf-idf    | 0.35 | 0.03 | 0.42 | 0.59 | 0.23
t2   | BoW       | 5    | 2    | 1    | 8    | 10
t2   | tf-idf    | 0.47 | 0.28 | 0.11 | 0.65 | 0.72
t3   | BoW       | 0    | 1    | 3    | 12   | 5
t3   | tf-idf    | 0.04 | 0.11 | 0.22 | 0.75 | 0.44
t4   | BoW       | 2    | 0    | 7    | 4    | 9
t4   | tf-idf    | 0.17 | 0.02 | 0.62 | 0.48 | 0.65

The model extracted linguistic n-gram features of the texts. The n-gram approach takes into account the sequence order in a sentence in order to make more sense of the sentences [39]. Here, ‘n’ indicates the number of consecutive words that are treated as one gram. N-grams, as well as combinations of n-gram features, are applied in the proposed model. Table 5 illustrates various n-gram features. The combination of the two feature extraction techniques and the n-gram features will be applied to find the best-suited model for the accomplishment of suspicious Bengali text detection.

Table 5. Representation of different n-gram features for a sample Bangla text (Banglish form): “Khulna titans ke tin wickete hariye dilo comilla victoria”.

unigrams: ‘khulna’, ‘titans’, ‘ke’, ‘tin’, ‘wickete’, ‘hariye’, ‘dilo’, ‘comilla’, ‘victoria’
bigrams: ‘khulna titans’, ‘titans ke’, ‘ke tin’, ‘tin wickete’, ‘wickete hariye’, ‘hariye dilo’, ‘dilo comilla’, ‘comilla victoria’
trigrams: ‘khulna titans ke’, ‘titans ke tin’, ‘ke tin wickete’, ‘tin wickete hariye’, ‘wickete hariye dilo’, ‘hariye dilo comilla’, ‘dilo comilla victoria’

4.3. Training

The features obtained from the previous step were used to train the machine learning model by employing different popular classification algorithms [40]: stochastic gradient descent (SGD), logistic regression (LR), decision tree (DT), random forest (RF), and multinomial naïve Bayes (MNB).
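This training setup can be sketched with a scikit-learn pipeline; the toy documents, labels, and the choice to leave most hyperparameters at their defaults are our own assumptions, not the paper's exact configuration:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

# Toy stand-ins for preprocessed SBTD documents (1 = suspicious, 0 = non-suspicious)
train_texts = [
    "spread violence against the community",
    "great cricket match in the league",
    "incite people to attack the group",
    "lovely weather in chittagong today",
]
train_labels = [1, 0, 1, 0]

# tf-idf over a unigram+bigram vocabulary feeding an SGD classifier, echoing the
# paper's best setup (the paper additionally uses 'log' loss, 'l2' penalty, 40 iterations)
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), max_features=3000)),
    ("clf", SGDClassifier(max_iter=40, random_state=0)),
])
model.fit(train_texts, train_labels)
print(model.predict(["they plan to spread violence"]))
```

Swapping the "clf" step for LogisticRegression, DecisionTreeClassifier, RandomForestClassifier, or MultinomialNB reproduces the other four configurations the paper compares.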
We analyze these algorithms and explain their structure in our system in the following subsections.

4.3.1. Stochastic Gradient Descent

Stochastic gradient descent (SGD) is a well-known technique used to solve ML problems [41]. It is an optimization technique where a single sample is selected randomly in each iteration instead of the whole set of data samples. Equations (3) and (4) represent the weight update process for gradient descent and stochastic gradient descent at the jth iteration:

$$w_j := w_j - \alpha \frac{\partial J}{\partial w_j} \quad (3)$$

$$w_j := w_j - \alpha \frac{\partial J_i}{\partial w_j} \quad (4)$$

Here, $\alpha$ indicates the learning rate, $J$ represents the cost over all training examples, and $J_i$ is the cost of the ith training example. It is computationally costly to calculate the sum of the gradients of the cost function over all samples; thus, each iteration takes a lot of time to complete [42]. To address this issue, SGD takes one sample randomly in each iteration and calculates the gradient. Although it takes more iterations to converge, it can reach the global minimum with a shorter training time. Algorithm 2 explains the process of SGD. C is the optimizer that takes θ and returns the cost and gradient; α and theta0 represent the learning rate and the starting point of SGD, respectively.

Algorithm 2: Process of SGD

Function SGD(C, theta0, α, max_iter):
    θ = theta0
    for i ∈ max_iter do
        _, gradient = C(θ)
        θ
= θ − (α ∗ gradient)
        i++
    end
End Function

We implemented the SGD classifier with the ‘log’ loss function and the ‘l2’ regularization technique. The maximum number of iterations was chosen on a trial and error basis; finally, 40 iterations are used, and samples are randomly shuffled during training.

4.3.2. Logistic Regression

Logistic regression [43] is well suited for binary classification problems. Equations (5) and (6) define the logistic function that determines the output of logistic regression:

$$h_\theta(x) = \frac{1}{1 + \exp(-\theta^T x)} \quad (5)$$

The cost function is

$$C(\theta) = \sum_{i=1}^{m} c(h_\theta(x_i), y_i) \quad (6)$$

$$c(h_\theta(x), y) = \begin{cases} -\log(1 - h_\theta(x)) & \text{if } y = 0 \\ -\log(h_\theta(x)) & \text{if } y = 1 \end{cases}$$

Here, $m$ indicates the number of training examples, $h_\theta(x_i)$ is the hypothesis function of the ith training example, and $y_i$ is the input label of the ith training example. We used the ‘l2’ norm to penalize the classifier, and the ‘lbfgs’ optimizer is used for a maximum of 100 iterations. The default value of the inverse of regularization strength is used with random state 0.

4.3.3. Decision Tree

The decision tree has two types of nodes: external and internal. External nodes represent the decision class, while internal nodes hold the features essential for making a classification [44]. The decision tree is evaluated in a top-down approach where homogeneous data are partitioned into subsets. Entropy determines the homogeneity of the samples, which is calculated by Equation (7):

$$E(S) = -\sum_{i=1}^{l} p_i \log_2 p_i \quad (7)$$

Here, $p_i$ is the probability of a sample in the training class, and $E(S)$ indicates the entropy of the sample. We used entropy to determine the quality of a split. All features are considered during the split to choose the best split at each node. Random state 0 controls the permutation of the features.

4.3.4. Random Forest

The Random Forest (RF) comprises several decision trees which operate individually [45]. The ‘Gini index’ of each branch is used to find the more likely decision branch to occur.
This index is calculated by Equation (8):

Gini = 1 − Σ_{i=1}^{c} (p_i)^2    (8)

Here, c represents the total number of classes and p_i indicates the probability of the ith class. We used 100 trees in the forest, where the quality of a split is measured by 'gini'. Internal nodes are split if at least two samples are present, and all of the system features are considered at each node.

4.3.5. Multinomial Naïve Bayes

Multinomial Naïve Bayes (MNB) is useful for classifying discrete features, such as in document or text classification [46]. MNB follows a multinomial distribution and uses Bayes' theorem, where the variables V1, V2, ..., Vn of class C are conditionally independent of each other given C [47]. Equations (9) and (10) show how MNB is used for text classification on our dataset:

p(C|V) = p(V|C) p(C) / p(V)

p(C | v1, v2, ..., vn) = p(v1|C) p(v2|C) ... p(vn|C) p(C) / (p(v1) p(v2) ... p(vn)) = p(C) ∏_{i=1}^{n} p(vi|C) / (p(v1) p(v2) ... p(vn))    (9)

Here, C is the class variable and V = (v1, v2, ..., vn) represents the feature vector. We assume that the features are conditionally independent. The denominator remains constant for any given input; thus, it can be removed:

C = argmax_C p(C) ∏_{i=1}^{n} p(vi|C)    (10)

Equation (10) is used to compute the probability of a given set of inputs for all possible values of class C and to pick the output with maximum probability. Laplace smoothing is used, and the prior probabilities of a class are adjusted according to the data.

4.4. Prediction

In this step, the trained classifier models are used for classification. The test set TS = {t1, t2, t3, ..., tx} has x test documents, which are used to test the classifier model. The predicted class (C) is determined by applying a threshold (Th) to the predicted probability (P) using Equation (11):

C = Non-suspicious (Cns) if P <= Th;  Suspicious (Cs) if P > Th    (11)

The proposed approach treats suspicious and non-suspicious classes as a binary classification, so the sigmoid activation function is used without tweaking the default value of Th. We ensured that both train and test documents come from the same distribution; otherwise, the evaluation would not be accurate.

5. Experiments

The goal of the experiments is to analyze the performance of different machine learning classifiers for various feature combinations. We use several graphical and statistical measures to find the most suitable model for the task of suspicious text classification. Experimentation was carried out on the open-source Google Colab platform with Python == 3.6.9 and TensorFlow == 2.2.1 [48]. A pandas == 1.0.3 data frame was used for dataset preparation, and scikit-learn == 0.22.2 was used for training and testing. The dataset was partitioned into two independent sets: training and testing. Data were randomly shuffled before partitioning to dispel any bias. The training set comprises 80% of the total data (5600 text documents), and the testing set holds the remaining 20% (1400 text documents). In this section, we discuss the measures of evaluation and analyze the results of the experiments. In addition, we compare the proposed model with existing techniques as well as the human baseline.

5.1. Measures of Evaluation

Various statistical and graphical measures are used to calculate the efficiency of the system.
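The threshold rule of Equation (11) amounts to a one-line decision function. The sketch below is ours, assuming the conventional default Th = 0.5, which the paper states it leaves untweaked:

```python
def predict_class(p, th=0.5):
    """Equation (11): suspicious (Cs) if P > Th, non-suspicious (Cns) otherwise."""
    return "suspicious" if p > th else "non-suspicious"

print(predict_class(0.73))  # -> suspicious
print(predict_class(0.41))  # -> non-suspicious
```

A probability exactly equal to Th falls into the non-suspicious class, matching the `P <= Th` branch of Equation (11).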
The following terminology is used for evaluation purposes:

• True Positive (τp): texts (ti) correctly classified as suspicious (Cs).
• False Positive (Φp): texts (ti) incorrectly classified as suspicious (Cs).
• True Negative (τn): texts (ti) correctly classified as non-suspicious (Cns).
• False Negative (Φn): texts (ti) incorrectly classified as non-suspicious (Cns).
• Precision: how many of the ti classified as Cs actually are Cs. Precision is calculated by Equation (12):

P = τp / (τp + Φp)    (12)

• Recall: how many text documents ti are classified correctly as Cs among all suspicious texts. Recall is computed using Equation (13):

R = τp / (τp + Φn)    (13)

• f1-score: a useful evaluation metric for deciding which classifier to choose among several. It is the harmonic mean of precision and recall, given by Equation (14):

f1-score = (2 ∗ P ∗ R) / (P + R)    (14)

As the dataset is balanced, the receiver operating characteristic (ROC) curve is used for graphical evaluation. It summarizes the trade-off between the true and false positive rates for different probability thresholds.

5.2. Evaluation Results

We used scikit-learn, a popular machine learning library, to implement the ML classifiers. The parameters of the classifiers were tuned during experimentation; a summary of the parameters used for each classifier is presented in Table 6.

The 'l2' regularization technique was used with the 'lbfgs' optimizer in logistic regression, and the inverse of the regularization strength was set to 1. We selected 'entropy' and 'gini' as the split-quality criteria for DT and RF, respectively. Both cases utilize all system features and select the best split at each internal node of the DT. We implemented RF with 100 decision trees; each node of a decision branch is divided if it has at least two samples. In MNB, we applied Laplace (additive) smoothing, and the prior probabilities were adjusted according to the samples of each class. In the SGD classifier, we selected the 'log' loss function and 'l2' regularization with the optimal learning rate; samples were shuffled randomly with state 0 during training for a maximum of 40 iterations.

Table 6. Summary of the classifier parameters.

Classifiers | Parameters
LR   | penalty = 'l2', C = 1.0, solver = 'lbfgs', max_iter = 100
DT   | criterion = 'entropy', splitter = 'best', max_features = n_features, random_state = 0
RF   | n_estimators = 100, criterion = 'gini', min_samples_split = 2, max_features = n_features
MNB  | alpha = 1.0, fit_prior = true, class_prior = none
SGD  | loss = 'log', penalty = 'l2', learning_rate = 'optimal', max_iter = 40, random_state = 0

5.2.1. Statistical Evaluation

The proposed system was experimented with five different classification algorithms for the BoW and tf-idf feature extraction techniques with n-gram features. The final system was evaluated with F1 = unigram, F2 = bigram, F3 = trigram, F4 = (unigram + bigram), and F5 = (unigram + bigram + trigram) features. Table 7 shows the performance comparison between the classifiers for different combinations of features. For the BoW FE technique, random forest with the F1 feature outdoes the others by acquiring 83.21% accuracy. There exists a small (0.5–1)% margin among the classifiers for the F1, F2, and F5 features. All of the classifiers obtain their highest accuracy with the F1 feature except DT and SGD: DT performed best with the F2 feature, whereas SGD performed best with the F4 features. All the classifiers showed lower performance with the F3 features.
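The F1–F5 feature combinations can be illustrated with a small n-gram extractor. This is our own minimal sketch of the idea; the actual system relies on scikit-learn vectorizers with the corresponding `ngram_range` settings, and the helper names here are ours:

```python
def ngrams(tokens, n):
    """All contiguous word n-grams of order n."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def extract_features(text, orders):
    """Orders map to the paper's combinations:
    F1 = (1,), F2 = (2,), F3 = (3,), F4 = (1, 2), F5 = (1, 2, 3)."""
    tokens = text.split()
    return [g for n in orders for g in ngrams(tokens, n)]

# F4: unigrams plus bigrams of a short text.
print(extract_features("drop him from the team", (1, 2)))
```

Combined orders simply concatenate the vocabularies, which is why F4 and F5 grow the feature space relative to F1 alone.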
SGD achieved the highest precision value of 83.79%, and the results showed a minimal difference between precision and recall for SGD.

For the tf-idf FE technique, SGD with the F4 feature obtains the maximum accuracy of 84.57%, while the maximum precision value of 83.78% is achieved with the F1 features. Comparing the results of the two feature extraction techniques (i.e., BoW and tf-idf), impressive improvements are observed in all the evaluation parameters: almost all the metric values increased by approximately (2–3)% when adopting the tf-idf feature extraction technique. LR and RF obtained maximum accuracy with the F1 features, MNB and SGD with the F4 feature, and DT with the F2 features. Thus, in summary, the tf-idf feature extraction and the SGD classifier are well suited to our task, as they outperform the BoW technique with the other classifiers.

Figure 4 depicts the f1-score comparison among the classifiers for the tf-idf feature extraction technique. We observed a tiny margin in the f1-score among the classifiers with the F1 and F5 features. All classifiers achieved their minimum f1-score for the F3 feature except DT, which obtained its minimum value of 78.74% with the F4 feature. LR and RF reached maximum values of 86.58% and 86.92%, respectively, for the F1 feature. DT obtained a maximum f1-score of 82.81%, while MNB reached 86.57%. The results revealed that SGD with the F2 feature outperforms all other feature combinations by obtaining an 86.97% f1-score.

Table 7. Performance comparison for different feature combinations, where F1, F2, F3, F4, and F5 mean unigram, bigram, trigram, a combination of unigram and bigram, and a combination of unigram, bigram, and trigram features, respectively. A, P, and R denote accuracy, precision, and recall, respectively.

Classifier | FE | Features | A (%) | P (%) | R (%)
LR  | BoW    | F1 | 82.28 | 79.46 | 91.91
LR  | BoW    | F2 | 81.64 | 77.24 | 94.99
LR  | BoW    | F3 | 78.07 | 72.18 | 98.58
LR  | BoW    | F4 | 82.07 | 79.72 | 90.88
LR  | BoW    | F5 | 82.21 | 79.57 | 91.52
LR  | tf-idf | F1 | 84.00 | 81.14 | 92.81
LR  | tf-idf | F2 | 81.50 | 77.36 | 94.35
LR  | tf-idf | F3 | 77.85 | 72.22 | 97.81
LR  | tf-idf | F4 | 83.85 | 80.75 | 93.19
LR  | tf-idf | F5 | 83.92 | 80.84 | 93.19
DT  | BoW    | F1 | 76.00 | 76.78 | 81.51
DT  | BoW    | F2 | 77.78 | 77.14 | 85.36
DT  | BoW    | F3 | 74.57 | 69.24 | 97.68
DT  | BoW    | F4 | 75.57 | 76.67 | 80.61
DT  | BoW    | F5 | 75.50 | 77.38 | 79.07
DT  | tf-idf | F1 | 77.92 | 78.24 | 83.56
DT  | tf-idf | F2 | 79.57 | 77.85 | 88.44
DT  | tf-idf | F3 | 76.14 | 70.69 | 97.56
DT  | tf-idf | F4 | 75.35 | 75.71 | 82.02
DT  | tf-idf | F5 | 76.71 | 76.93 | 83.05
RF  | BoW    | F1 | 83.21 | 79.43 | 94.22
RF  | BoW    | F2 | 80.50 | 78.68 | 89.08
RF  | BoW    | F3 | 76.00 | 70.49 | 97.81
RF  | BoW    | F4 | 82.14 | 79.29 | 91.91
RF  | BoW    | F5 | 83.20 | 79.82 | 93.45
RF  | tf-idf | F1 | 83.71 | 78.54 | 97.30
RF  | tf-idf | F2 | 81.57 | 78.22 | 96.68
RF  | tf-idf | F3 | 77.92 | 72.16 | 98.20
RF  | tf-idf | F4 | 83.21 | 78.14 | 96.91
RF  | tf-idf | F5 | 83.71 | 78.90 | 96.53
MNB | BoW    | F1 | 81.57 | 79.10 | 90.88
MNB | BoW    | F2 | 79.00 | 74.37 | 94.99
MNB | BoW    | F3 | 65.50 | 61.84 | 99.22
MNB | BoW    | F4 | 81.14 | 77.77 | 92.55
MNB | BoW    | F5 | 81.00 | 77.14 | 93.58
MNB | tf-idf | F1 | 83.78 | 81.29 | 92.04
MNB | tf-idf | F2 | 80.35 | 76.25 | 93.96
MNB | tf-idf | F3 | 73.21 | 67.84 | 98.58
MNB | tf-idf | F4 | 83.85 | 80.55 | 93.58
MNB | tf-idf | F5 | 83.50 | 80.17 | 93.45
SGD | BoW    | F1 | 81.00 | 80.86 | 86.26
SGD | BoW    | F2 | 81.00 | 76.58 | 94.86
SGD | BoW    | F3 | 78.21 | 72.31 | 98.58
SGD | BoW    | F4 | 81.28 | 83.79 | 82.28
SGD | BoW    | F5 | 78.57 | 81.30 | 79.84
SGD | tf-idf | F1 | 82.14 | 83.78 | 81.51
SGD | tf-idf | F2 | 82.00 | 78.06 | 94.09
SGD | tf-idf | F3 | 78.92 | 73.54 | 97.04
SGD | tf-idf | F4 | 84.57 | 82.09 | 92.42
SGD | tf-idf | F5 | 83.92 | 81.04 | 93.53

Figure 4. f1-score comparison among different ML classifiers with F1, F2, F3, F4, and F5 features for the tf-idf FE technique.

5.2.2. Graphical Evaluation

The ROC curve is used as a graphical evaluation measure, as each class contains an equal number of texts.
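The AUC values reported in this analysis can be understood through the rank interpretation of the ROC curve: AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counting half). A self-contained sketch of that computation (ours, not the scikit-learn routine used in the paper):

```python
def auc(labels, scores):
    """Pairwise-comparison AUC: fraction of (positive, negative) pairs
    in which the positive example receives the higher score."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.6, 0.7, 0.2]))  # -> 0.75
```

A perfect ranking gives an AUC of 1.0, while random scoring tends toward 0.5, which is why the reported values between 0.84 and 0.89 indicate a usable separation between the classes.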
Figures 5–9 exhibit the ROC curve analysis of the BoW and tf-idf feature extraction techniques for the F1, F2, F3, F4, and F5 features, respectively. For BoW with the F1 feature, logistic regression and random forest both provide similar AUC values of 87.8%. SGD achieved 87.0% AUC, which increased by 2.3% when using the tf-idf FE technique. The AUC values of the other algorithms also increased when employing the tf-idf feature extraction technique.

With the F2 feature, LR obtained the maximum AUC value of 84.5%, while for the F3 feature SGD achieved the maximum value; in both cases, the tf-idf feature extraction technique was used. With tf-idf and the F4 feature, SGD beats the others with a maximum AUC of 89.3%. The values of all the classifiers increased except the decision tree, whose value decreased by 0.06%. Results with the F5 feature are quite similar to the F1 feature; however, LR outdoes SGD here by a margin of 0.02%. Critical analysis of the results shows that the SGD classifier with the combination of unigram and bigram features for the tf-idf feature extraction technique achieved the highest value for most of the evaluation parameters. To gain more insight, the performance of the proposed classifier (SGD) was analyzed further by varying the number of training documents. Figure 10 shows the accuracy versus the number of training examples. The analysis reveals that classification accuracy increases with the size of the dataset and that tf-idf predominates over BoW with the F2 feature.

Figure 5. ROC curve analysis for the F1 feature, where F1 represents the unigram feature. (a) BoW + F1; (b) tf-idf + F1.
Figure 6. ROC curve analysis for the F2 feature, where F2 represents the bigram feature. (a) BoW + F2; (b) tf-idf + F2.
Figure 7. ROC curve analysis for the F3 feature, where F3 represents the trigram feature. (a) BoW + F3; (b) tf-idf + F3.
Figure 8. ROC curve analysis for the F4 feature, where F4 represents a combination of unigram and bigram features. (a) BoW + F4; (b) tf-idf + F4.
Figure 9. ROC curve analysis for the F5 feature, where F5 represents a combination of unigram, bigram, and trigram features. (a) BoW + F5; (b) tf-idf + F5.
Figure 10. Effects of training set size on accuracy.

5.3. Human Baseline vs. ML Techniques

The performance of the classifiers was compared with that of humans for further investigation. To eliminate the chance of human bias in the data labelling and evaluation phases, we assigned two new experts who manually labelled the test texts into one of the predefined categories. Among the 1400 test text samples, 621 texts are from the non-suspicious (Cns) class and 779 texts are from the suspicious (Cs) class. The accuracy for each class is the ratio between the number of correctly predicted texts and the total number of texts of that class, computed from the confusion matrix. For example, if a system correctly predicts 730 of the 779 suspicious texts, its accuracy on the suspicious class is 93.7% (730/779).
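The per-class accuracy computation just described can be sketched directly from a confusion matrix. The suspicious-row counts below follow the worked example in the text (730 of 779 correct); the non-suspicious row (600 of 621) is invented purely for illustration:

```python
def per_class_accuracy(confusion):
    """Per-class accuracy = diagonal count / row total, where
    confusion[true_class][predicted_class] holds the counts."""
    return {cls: row[cls] / sum(row.values()) for cls, row in confusion.items()}

cm = {
    "suspicious": {"suspicious": 730, "non-suspicious": 49},          # 779 total
    "non-suspicious": {"non-suspicious": 600, "suspicious": 21},      # 621 total (hypothetical)
}
acc = per_class_accuracy(cm)
print(round(100 * acc["suspicious"], 1))  # -> 93.7
```

Because each class is normalized by its own row total, this measure stays meaningful even when the two classes have different sizes, as they do in this test set.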
As tf-idf outperformed BoW in the previous evaluation, we compared the performance of the classifiers with the experts only for the tf-idf feature extraction technique. Table 8 summarizes the comparison.

Table 8. Accuracy comparison between experts and classifiers with the tf-idf FE technique.

Approach  | Cns Accuracy (%) | Cs Accuracy (%)
Expert 1  | 98.71 | 98.58
Expert 2  | 99.19 | 98.33
LR + F1   | 72.94 | 92.81
LR + F2   | 65.37 | 94.35
LR + F3   | 52.81 | 97.81
LR + F4   | 72.14 | 93.19
LR + F5   | 72.30 | 93.19
DT + F1   | 70.85 | 83.56
DT + F2   | 68.43 | 88.44
DT + F3   | 42.44 | 97.56
DT + F4   | 57.69 | 82.02
DT + F5   | 68.76 | 83.05
RF + F1   | 66.66 | 97.30
RF + F2   | 67.63 | 96.68
RF + F3   | 52.49 | 98.20
RF + F4   | 66.02 | 96.91
RF + F5   | 67.63 | 96.53
MNB + F1  | 73.42 | 92.04
MNB + F2  | 63.28 | 93.96
MNB + F3  | 41.38 | 98.58
MNB + F4  | 71.65 | 93.58
MNB + F5  | 71.01 | 93.45
SGD + F1  | 72.30 | 81.51
SGD + F2  | 66.82 | 94.09
SGD + F3  | 56.19 | 97.04
SGD + F4  | 72.46 | 92.42
SGD + F5  | 70.04 | 93.53

The experts outperformed the ML classifiers in both classes, and they classify non-suspicious texts slightly more accurately than suspicious texts. We found an accuracy deviation of approximately 0.5% between the two experts. All of the classifiers did well on the suspicious class but performed very poorly on the non-suspicious class, so a significant gap is observed between the human baseline and the ML classifiers. All of the classifiers identified suspicious texts more precisely than non-suspicious texts. After manual analysis, we traced the reason for this disparate behaviour: the bulk of the non-suspicious texts was accumulated from newspapers, and on average each such text has 72.12 words. The ML-based classifiers do not consider the semantic meaning of a text, which is important for classifying long texts; thus, the system could not detect non-suspicious texts accurately. For this reason, the false-negative value becomes very high, which lowers the recall and thus affects the system's classification accuracy.

5.4. Comparison with Existing Techniques

As far as we are aware, no meaningful research has so far focused solely on suspicious Bengali text classification, and no benchmark dataset is available for SBT. Therefore, the proposed work is compared with techniques that have been used on quite similar tasks. We implemented the existing techniques on our developed dataset to investigate how their performance varies relative to the proposed approach. Table 9 shows the comparison in terms of accuracy for suspicious text classification.

Table 9. Performance comparison after employing existing techniques on our dataset.

Techniques | Accuracy (%) on SBTD
SVM + BoW [19]             | 81.14
LR + unigram + bigram [49] | 82.07
DT + tf-idf [50]           | 77.92
Naïve Bayes [51]           | 81.77
LR + BoW [30]              | 82.28
Our Proposed Technique     | 84.57

Naïve Bayes [51] and an SVM classifier with the BoW [19] feature extraction technique achieved quite similar accuracies of more than 81% on our developed dataset. LR with the combination of unigram and bigram [49] achieved 82.07% accuracy, whereas LR with the BoW feature extraction technique [30] achieved a similar result (82.28%).
Only 77.92% accuracy was obtained for DT with the tf-idf feature extraction technique [50], while the proposed method achieved the highest accuracy of 84.57% among the approaches considered. Although the nature of the original datasets differs, the comparison indicates that the proposed approach surpasses the other existing techniques on our developed dataset.

5.5. Discussion

After analyzing the experimental results, we can summarize that LR, DT, and RF do well with the unigram feature, while MNB and SGD obtain maximum accuracy with the combination of unigram and bigram features; in both cases, the tf-idf feature extraction technique is employed. The classifiers performed poorly with trigram features. Comparing the BoW and tf-idf extraction techniques, we noticed a marked rise for the weighted features of the texts. This increase happens because BoW emphasizes only the most frequent words, while tf-idf gives more emphasis to context-related words. LR, RF, MNB, and SGD performed excellently on every feature combination, with a small deviation of (0.5–0.8)% between them. However, the performance of the decision tree is inferior to the others owing to its limited ability to learn complex rules from texts. The AUC value is another performance measure that indicates a model's ability to distinguish between classes: SGD obtained the highest AUC of 0.893 for tf-idf, while LR and RF achieved the maximum AUC of 0.878 for the BoW feature extraction. Our analysis points to the reason behind the superior performance of the SGD classifier: SGD here represents a linear classifier, already proven to be well suited to binary classification tasks like ours [42]. It uses a simple discriminative learning approach that can find the global minimum efficiently, resulting in better accuracy. Comparing these ML classifiers in terms of execution time, no significant difference was observed; all the classifiers completed their execution within 50 s.

Since the machine learning-based techniques mainly utilize word-level features, it is difficult for them to capture sentence-level meaning appropriately, and the system cannot always predict the class accurately for this reason. To shed light on which texts are complicated to predict in suspicious-text detection, we analyzed the predicted results. Consider an example (Banglish form: "Sakib al hasan khela pare na take bangladesh cricket team theke ber kore dewa dorkar"; English translation: "Shakib Al Hasan cannot play, he needs to be dropped from the Bangladesh cricket team"). This text may excite the fans of Shakib Al Hasan because it conveys a disgraceful message about him, so the proposed approach should classify it as suspicious rather than non-suspicious. Such classification discrepancies happen because of the inability to capture the semantic relations between words and the sentence-level meaning of the texts. It is always challenging to classify this type of text because it contains no words that directly provoke people or pose a threat. The proposed approach encountered only a limited number of such texts during the training phase and hence failed to predict their class correctly.
These deficiencies can be dispelled by employing a neural network-based architecture and adding diverse data to the existing corpus.

Although the result of the proposed SGD-based model is quite reasonable compared to previous approaches, there is scope to increase the overall accuracy of suspicious Bengali text detection. Firstly, the proposed model does not consider the semantic relations between words in the texts; for this reason, the ML-based classifiers show poor accuracy on the non-suspicious class, which contains long texts. A semantic-relationship and corresponding machine learning rule-based model [52,53] could also be effective, depending on the data characteristics. Moreover, deep learning techniques can be used to find intricate patterns in the texts that help comprehend semantic relations, but they require a huge amount of data to build an effective model [40,54]. Secondly, the number of classes can be extended by introducing more sub-classes with suspicious content, such as obscenity, religious hatred, sexually explicit material, and threats. Finally, to improve the exactness of an intelligent system, it is mandatory to train the model with a diverse and large amount of data; a corpus with more texts would help the system learn more accurately and predict classes more precisely.

6. Conclusions and Future Research

In this paper, we have presented a machine learning-based model to classify Bengali texts with suspicious content. We used different feature extraction techniques with n-gram features in our model. This work was also computationally analyzed with a set of ML classification techniques, taking into account the popular BoW and tf-idf feature extraction methods. Moreover, the performance of the classifiers was compared with human experts for error analysis. To serve our purpose, a dataset was developed containing 7000 suspicious and non-suspicious text documents. After employing different learning algorithms on this corpus, the SGD classifier and the tf-idf feature extraction technique with the combination of unigram and bigram features showed the best performance, with 84.57% accuracy. In the future, we plan to train the model with a larger dataset to increase the overall performance. Sub-domains of suspicious texts will be taken into account to make the dataset more diverse. Furthermore, recurrent learning algorithms can be employed to capture the inherent sequential patterns of long texts.

Author Contributions: Conceptualization, O.S. and M.M.H.; investigation, O.S., M.M.H., A.S.M.K., R.N. and I.H.S.; methodology, O.S. and M.M.H.; software, O.S.; validation, O.S. and M.M.H.; writing—original draft preparation, O.S.; writing—review and editing, M.M.H., A.S.M.K., R.N. and I.H.S. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Khangura, A.S.; Dhaliwal, M.S.; Sehgal, M. Identification of Suspicious Activities in Chat Logs using Support Vector Machine and Optimization with Genetic Algorithm. Int. J. Res. Appl. Sci. Eng. Technol. 2017, 5, 145–153.
2. Internet Crime Complaint Center (U.S.), United States, F.B.I. 2019 Internet Crime Report. 2020; pp. 1–28. Available online: https://www.hsdl.org/?view&did=833980 (accessed on 22 May 2020).
3. Bertram, L. Terrorism, the Internet and the Social Media Advantage: Exploring how terrorist organizations exploit aspects of the internet, social media and how these same platforms could be used to counter-violent extremism. J. Deradicalization 2016, 7, 225–252.
4. Mandal, A.K.; Sen, R. Supervised Learning Methods for Bangla Web Document Categorization. Int. J. Artif. Intell. Appl. 2014, 5, 93–105. [CrossRef]
5. Phani, S.; Lahiri, S.; Biswas, A. A Supervised Learning Approach for Authorship Attribution of Bengali Literary Texts. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2017, 16, 1–15. [CrossRef]
6. Facebook. Violence and Incitement. Available online: https://www.facebook.com/communitystandards/ (accessed on 21 April 2019).
7. Fortuna, P.; Nunes, S. A survey on automatic detection of hate speech in text. ACM Comput. Surv. (CSUR) 2018, 51, 1–30. [CrossRef]
8. Understanding Dangerous Speech. Available online: https://dangerousspeech.org/faq/ (accessed on 10 April 2019).
9. Sarker, I.H.; Kayes, A.S.M.; Badsha, S.; Alqahtani, H.; Watters, P.; Ng, A. Cybersecurity data science: An overview from machine learning perspective. J. Big Data 2020, 7, 1–29. [CrossRef]
10. Alami, S.; Elbeqqali, O. Cybercrime profiling: Text mining techniques to detect and predict criminal activities in microblog posts. In Proceedings of the 2015 10th International Conference on Intelligent Systems: Theories and Applications (SITA), Rabat, Morocco, 20–21 October 2015.
11. Hartmann, J.; Huppertz, J.; Schamp, C.; Heitmann, M. Comparing automated text classification methods. Int. J. Res. Mark. 2019, 36, 20–38. [CrossRef]
12. Iskandar, B. Terrorism detection based on sentiment analysis using machine learning. J. Eng. Appl. Sci. 2017, 12, 691–698.
13. Sarker, I.H. A machine learning based robust prediction model for real-life mobile phone data. Internet Things 2019, 5, 180–193. [CrossRef]
14. Johnston, A.H.; Weiss, G.M. Identifying Sunni extremist propaganda with deep learning. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017.
15. Alami, S.; Beqali, O. Detecting suspicious profiles using text analysis within social media. J. Theor. Appl. Inf. Technol. 2015, 73, 405–410.
16. Jiang, M.; Cui, P.; Faloutsos, C. Suspicious behavior detection: Current trends and future directions. IEEE Intell. Syst. 2016, 31, 31–39. [CrossRef]
17. Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [CrossRef] [PubMed]
18. Davidson, T.; Warmsley, D.; Macy, M.; Weber, I. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media, Montreal, QC, Canada, 15–18 May 2017.
19. AlGhamdi, M.A.; Khan, M.A. Intelligent Analysis of Arabic Tweets for Detection of Suspicious Messages. Arab. J. Sci. Eng. 2020, 1–12. [CrossRef]
20. Dinakar, K.; Reichart, R.; Lieberman, H. Modeling the detection of textual cyberbullying. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, Barcelona, Catalonia, Spain, 17–21 July 2011.
21. Aulia, N.; Budi, I. Hate Speech Detection on Indonesian Long Text Documents Using Machine Learning Approach. In Proceedings of the 2019 5th International Conference on Computing and Artificial Intelligence, Bali, Indonesia, 19–22 April 2019.
22. Zhang, P.; Gao, Y.; Chen, S. Detect Chinese Cyber Bullying by Analyzing User Behaviors and Language Patterns. In Proceedings of the 2019 3rd International Symposium on Autonomous Systems (ISAS), Shanghai, China, 29–31 May 2019.
23. Hammer, H.L. Detecting threats of violence in online discussions using bigrams of important words. In Proceedings of the 2014 IEEE Joint Intelligence and Security Informatics Conference, The Hague, The Netherlands, 24–26 September 2014.
24. Ishmam, A.M.; Sharmin, S. Hateful Speech Detection in Public Facebook Pages for the Bengali Language. In Proceedings of the 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019.
25. Emon, E.A.; Rahman, S.; Banarjee, J.; Das, A.K.; Mittra, T. A Deep Learning Approach to Detect Abusive Bengali Text. In Proceedings of the 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 28–30 June 2019.
26. Eshan, S.C.; Hasan, M.S. An application of machine learning to detect abusive bengali text. In Proceedings of the 2017 20th International Conference of Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 22–24 December 2017.
27. Islam, T.; Latif, S.; Ahmed, N. Using Social Networks to Detect Malicious Bangla Text Content. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019.
28. Hossain, M.Z.; Rahman, M.A.; Islam, M.S.; Kar, S. BanFakeNews: A Dataset for Detecting Fake News in Bangla. arXiv 2020, arXiv:2004.08789.
29. Chakraborty, P.; Seddiqui, M.H. Threat and Abusive Language Detection on Social Media in Bengali Language. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019.
30. Sharif, O.; Hoque, M.M. Automatic Detection of Suspicious Bangla Text Using Logistic Regression. In Proceedings of the International Conference on Intelligent Computing & Optimization, Koh Samui, Thailand, 3–4 October 2019.
31. Twitter. Hateful Conduct. Available online: https://help.Twitter.com/en/rules-and-policies/Twitter-rules/ (accessed on 25 April 2019).
32. Youtube. Harmful or Dangerous Content Policy. Available online: https://support.google.com/youtube/answer/2801939/ (accessed on 27 April 2019).
33. COE. Hate Speech and Violence. Available online: https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence/ (accessed on 18 April 2019).
34. U.S. Department of Homeland