Bengali Text Generation Using Bi-directional RNN

Sheikh Abujar, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh, sheikh.cse@diu.edu.bd
Abu Kaisar Mohammad Masum, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh, mohammad15-6759@diu.edu.bd
S. M. Mazharul Hoque Chowdhury, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh, mazharul2213@diu.edu.bd
Mahmudul Hasan, Dept. of CSE, Comilla University, Cumilla, Bangladesh, mhasanraju@gmail.com
Syed Akhter Hossain, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh, aktarhossain@daffodilvarsity.edu.bd

Abstract— The world is changing rapidly, and communication between nations and between people with different languages has become part of everyday life: from buying a product to maintaining a social life, everything depends on communication. Language is therefore the most important part of human life, yet a language barrier still separates people. With NLP technology, communication may soon become possible across any language worldwide, but this first requires understanding each language individually. This research proposes a new approach to text generation for the Bangla language using a bi-directional RNN, applied to predicting the next possible word in a Bangla text.

Keywords— Bangla language, Bi-directional RNN, Corpus, NLP.

I. INTRODUCTION

Modern technology has made life easy enough that many tasks can be completed in minutes, and work once considered impossible is now routine. Even without knowing a local language, people travel around the world relying on their smartphones, and the reason behind this is Natural Language Processing. Using NLP, it is possible to analyze text and find the necessary information, and NLP can be applied not only to English or any particular language, but to any language that has its own Unicode or computerized form.
Bangla is the fifth language in the world by number of speakers, so it is important to develop NLP tools and techniques for processing it. This research addresses word prediction: a corpus built from everyday data is used to predict the next word, with word frequency guiding the choice. A next-word prediction algorithm generally considers a few key questions first. One is determining the topic: the algorithm tries to figure out what the user is writing about, so that it can shortlist the words most relevant to that topic. Exceptions occur, but they are limited and depend on the corpus used in the analysis. Next, the algorithm finds the frequency distance between the current word and the other relevant words on the list. Prediction is even possible at the character level, where after every letter the algorithm can determine the next possible word. The better the algorithm and the corpus, the higher the accuracy. As NLP progresses rapidly, it has become essential to focus on Bangla text analysis to make the language valuable worldwide.

II. LITERATURE REVIEW

A great deal of work has been done in this field over the years, and more is needed to make it practical. Researchers continue to develop new algorithms, techniques, and models to improve the current state of text generation; some of this work is discussed in this section. Partha Pratim Barman et al. (2018) worked on a next-word prediction model using an RNN-based approach [1]: they built an LSTM model, a special kind of RNN, and applied it in a live chat application that predicts the next possible word, with Assamese as the target language. Hyeonwoo Noh et al. (2016) worked on a question-answering system that uses a neural network to automatically predict the possible reply to a question provided by the user [2]. In their convolutional neural network (CNN) they used a parameter-prediction network to create an adaptive question-answering model, applying a hashing technique to reduce the complexity of the large number of parameters, which makes answer selection much easier for the system.

IEEE - 45670, 10th ICCCNT 2019, July 6-8, 2019, IIT Kanpur, Kanpur, India

Martin Sundermeyer and his team worked on language modeling with LSTM neural networks in 2012 [3]. The main purpose of their research was to build a neural network that increases the accuracy of the analysis; they evaluated their model on English as well as a large French language-modeling task. Tomas Mikolov and his team from Microsoft Research studied linguistic regularities [4]. They built a model of continuous-space word representations for a question-answering system; it uses vector representations and automatically learns relationships between words, and with about 40% accuracy on analogy-style questions it was the best model at the time. Andriy Mnih et al.
(2013) studied learning word embeddings efficiently using noise-contrastive estimation [5]. Their main focus was to improve results with a much simpler analysis technique: in comparison with the state-of-the-art model of Mikolov et al. (2013a), they trained log-bilinear models with the help of noise-contrastive estimation, and they also compared several other techniques in the same field. Much earlier, Helmut Schmid applied neural networks to part-of-speech tagging [6], proposing a new POS-tagging model that provided better results than other popular techniques of the time, such as the HMM tagger of Cutting et al. (1992) and the trigram-based tagger of Kempe (1993). Tomáš Mikolov et al. (2011) worked on modifications of RNNs for language modeling [7]. Their research significantly outperformed several other techniques; the major remaining limitation of the system was computational complexity, yet the presented technique can speed up data training and testing up to 15 times compared with existing techniques. The analysis uses the backpropagation algorithm, which provides better accuracy than the basic algorithms. Yoshua Bengio et al. (2003) worked on a neural probabilistic language model [8], a technique in which the model learns a distributed representation of words and, simultaneously, the probability function for word sequences.

Text generation is important for sequence-to-sequence word classification. In this paper we explain a method for generating the next word sequence in Bengali using a bidirectional LSTM RNN. A key application of text generation is machine translation: Bengali is difficult to translate automatically, because in machine-translation settings Bengali sentence structure is often not handled correctly. Our main goal is therefore accurate text generation for Bengali, producing correct Bengali sentence sequences.

III. METHODOLOGY

Language modeling is one of the most significant parts of modern NLP, underlying tasks such as text summarization, machine translation, text generation, and speech-to-text. Text generation is a notable part of language modeling: a well-trained language model acquires knowledge of the probability of the occurrence of a word given the preceding sequence of words.
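As a toy illustration of this idea (a minimal sketch with an invented English corpus, not the paper's implementation), a bigram model estimates the probability of the next word from counts over a corpus:

```python
from collections import Counter, defaultdict

def train_bigram_model(sentences):
    """Count bigrams to estimate P(next_word | current_word)."""
    counts = defaultdict(Counter)
    for sent in sentences:
        words = sent.split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = ["the model predicts the next word",
          "the model learns word probabilities"]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "model" follows "the" most often here
```

A neural language model replaces these raw counts with a learned function of the word history, which is what the bidirectional RNN below provides.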
In this paper we discuss n-gram modeling with a pre-trained Bengali word embedding for text generation, and build a bi-directional recurrent neural network to train the model. Figure 1 shows our working methodology flow.

Figure 1: Working flow for text generation

A. Data collection & pre-processing

Bengali text generation needs a good dataset. We use our own dataset, collected from social media; it contains several kinds of Bengali posts, such as group posts, personal posts, and page posts. There are obstacles to collecting Bengali data, such as the structure of Bengali text, but in our dataset we try to reduce most of these obstacles to keep the Bengali text clean. The dataset contains text entries together with their type and summary; for this work we use only the text and its summary to generate a sequence of next Bengali words, which amounts to generating a sentence. Before preparing the dataset for text generation, we also need to expand Bengali contractions, which are short forms of words (for example, "ডা." for "ডাক্তার"). After collecting the dataset, we need a clean dataset for generation, so we remove whitespace, digits, and punctuation from the Bengali text, and remove Bengali stop words using a stop-word list. Finally, we clean the text and build a list of texts with their summaries, and from it construct a corpus for text generation.

B. Add Bengali Word Embedding

Word2vec is used to create word embeddings. We use word embeddings because the concept of a word is not understood by a machine: a machine sees only binary or numerical values, so for processing natural language a word2vec model is required. When word embeddings are applied, each tokenized word is converted to a vector, and each vector represents an entry in the vocabulary of the text documents. There are several pre-trained word-embedding models for different languages, but for Bengali only a few embedding files exist, and most are not sufficient for research. We found one good, usable pre-trained model, named "bn_w2v_model", which is used for our research.

C. N-gram Token Sequences

A text-generation language model requires sequences of tokens from which it can predict the probability of the next word or sequence, so the words must be tokenized. We use the Keras built-in tokenizer, which extracts each word with its index number from the corpus; all text is then converted into sequences of tokens. Each n-gram sequence contains integer tokens generated from the input text corpus, where each integer represents the index of a word in the text vocabulary. Because we use word embeddings, each vocabulary index also corresponds to a word vector in the embedding file.

D. Pad Sequences

Each sequence has a different length.
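The n-gram sequence construction above, together with the left-padding and input/target split used in the next step, can be sketched in plain Python (an illustrative toy example with an invented English corpus; the paper itself uses the Keras tokenizer and pad_sequences on Bengali text):

```python
def build_vocab(corpus):
    """Assign each word an integer index, starting at 1 (0 is reserved for padding)."""
    vocab = {}
    for line in corpus:
        for word in line.split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1
    return vocab

def ngram_sequences(corpus, vocab):
    """For each line, emit every prefix of length >= 2 as an integer token sequence."""
    sequences = []
    for line in corpus:
        tokens = [vocab[w] for w in line.split()]
        for i in range(2, len(tokens) + 1):
            sequences.append(tokens[:i])
    return sequences

def pad_left(sequences, max_len):
    """Left-pad every sequence with 0 so all sequences have length max_len."""
    return [[0] * (max_len - len(s)) + s for s in sequences]

corpus = ["the model predicts the next word"]
vocab = build_vocab(corpus)
seqs = ngram_sequences(corpus, vocab)
max_len = max(len(s) for s in seqs)
padded = pad_left(seqs, max_len)
# Training pairs: X is everything but the last token, y is the last token.
X = [s[:-1] for s in padded]
y = [s[-1] for s in padded]
```

Each row of X paired with its y entry corresponds to one "given words / next word" row of the kind shown in Table 1.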
So we need to pad the sequences to make all sequence lengths equal; for this purpose we use the Keras pad_sequences function. As input to the learning model we use the n-gram sequence as the given words and the predicted word as the next word, as shown in Table 1. Finally, we obtain the input data X and the next word Y, which are used to train the model.

Table 1: Example of pad sequences

GIVEN WORD → NEXT WORD
বা বতা → ক
বা বতা ক → ঘনৃা
বা বতা ক ঘনৃা → কির

E. Proposed Model

An RNN works extremely well for sequential data, because it can remember its outputs by means of an external memory. It can predict the upcoming sequence using that memory, with a deeper understanding of the sequence than other algorithms: while it considers the current state, it also recalls what it learned from past states. An RNN with long short-term memory (LSTM) cells remembers the earlier sequence content. In this paper we work with a bidirectional RNN, which has two directions, forward and backward, running opposite to each other. The output dense layer receives information from both directions: the backward direction provides past information, and the forward direction provides the next (predicted) sequence. The formulas are:

h^f_t = \sigma(W^f x_t + V^f h^f_{t-1} + b^f)   (1)
h^b_t = \sigma(W^b x_t + V^b h^b_{t+1} + b^b)   (2)
y_t = \sigma(U^f h^f_t + U^b h^b_t + b_y)   (3)

Here, \sigma = activation function, h^f = forward hidden layer, h^b = backward hidden layer, W = weight, b = bias.

Figure 2: Bi-directional RNN

In our proposed model, we use the weights of the text sequence as input at time t. An LSTM cell can store the previous input state and then work with the current state: while working on the current state it remembers the previous one and, using the activation function, predicts the next word or sequence. Since we use a bidirectional RNN, the previous input is remembered by the backward direction, while the forward direction helps predict the future word or sequence. To train our model we define a Keras Sequential model and add a bidirectional layer with LSTM cells; for our research we use 256 units and the ReLU activation for the LSTM cells. We set the dropout to 0.5, which helps reduce overfitting, and add a dense layer whose size equals the total number of words, with a softmax activation. For the loss we use sparse categorical cross-entropy (since the targets are numeric indices) with the Adam optimizer, and finally fit the defined model on the input and output sequences.

Algorithm 1: Bengali text generation using bi-directional LSTM
1: function create_model(max_sequence_length, total_words):
2:     input_length = max_sequence_length - 1
3:     model = Sequential()
4:     model.add(Bidirectional(LSTM(units, activation), input_shape))
5:     model.add(Dropout(0.5))
6:     model.add(Dense(total_words, activation))
7:     model.compile(loss_function, optimizer)
8:     return model
9: create_model(max_sequence_length, total_words)

This section also shows a graphical view of our model.
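The bidirectional recurrence of Eqs. (1)–(3) can be sketched in NumPy (a minimal illustrative implementation with random, untrained weights; the paper's actual model is the trained Keras network of Algorithm 1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bidirectional_rnn(xs, Wf, Vf, bf, Wb, Vb, bb, Uf, Ub, by):
    """One bidirectional RNN pass over a sequence of input vectors xs."""
    T, hidden = len(xs), Wf.shape[0]
    h_fwd = np.zeros((T, hidden))
    h_bwd = np.zeros((T, hidden))
    # Forward pass, Eq. (1): h^f_t depends on h^f_{t-1}.
    for t in range(T):
        prev = h_fwd[t - 1] if t > 0 else np.zeros(hidden)
        h_fwd[t] = sigmoid(Wf @ xs[t] + Vf @ prev + bf)
    # Backward pass, Eq. (2): h^b_t depends on h^b_{t+1}.
    for t in reversed(range(T)):
        nxt = h_bwd[t + 1] if t < T - 1 else np.zeros(hidden)
        h_bwd[t] = sigmoid(Wb @ xs[t] + Vb @ nxt + bb)
    # Output, Eq. (3): combine both directions at each time step.
    return np.array([sigmoid(Uf @ h_fwd[t] + Ub @ h_bwd[t] + by)
                     for t in range(T)])

rng = np.random.default_rng(0)
d_in, d_h, d_out, T = 4, 3, 2, 5
xs = rng.normal(size=(T, d_in))
Wf, Wb = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_in))
Vf, Vb = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_h))
bf, bb = np.zeros(d_h), np.zeros(d_h)
Uf, Ub = rng.normal(size=(d_out, d_h)), rng.normal(size=(d_out, d_h))
by = np.zeros(d_out)
ys = bidirectional_rnn(xs, Wf, Vf, bf, Wb, Vb, bb, Uf, Ub, by)
```

Note that the output at every position sees both past context (through the forward states) and future context (through the backward states), which is the property the paper relies on.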
Here the unique word index is the input, and the process continues through to the dense (output) layer.

Figure 3: Visualizing the bidirectional model structure

i. Long Short-Term Memory: Long short-term memory is a variant of the recurrent neural network, used to mitigate the vanishing and exploding gradient problems. Each LSTM cell has three gates, the input gate, forget gate, and output gate, plus a cell state through which information is added via the gates:

i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)   (4)
f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)   (5)
o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)   (6)
c_t = f_t * c_{t-1} + i_t * \tanh(W_c [h_{t-1}, x_t] + b_c)   (7)
h_t = o_t * \tanh(c_t)   (8)

Here, i_t = input gate, f_t = forget gate, o_t = output gate, c_t = cell state, h_t = hidden state, \sigma = activation function.

ii. Activation functions: The model uses two activation functions, ReLU and softmax. The rectified linear unit is used to activate the LSTM cells in the bidirectional RNN; it clamps negative values to zero:

f(x) = \max(0, x)   (9)

Here, x = input of the neuron.
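A single LSTM cell step following Eqs. (4)–(8) can be sketched in NumPy (illustrative, with random weights; the concatenation [h_{t-1}, x_t] is written explicitly):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, Wi, bi, Wf, bf, Wo, bo, Wc, bc):
    """One LSTM cell step: gates (Eqs. 4-6), cell state (7), hidden state (8)."""
    z = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    i = sigmoid(Wi @ z + bi)                    # input gate, Eq. (4)
    f = sigmoid(Wf @ z + bf)                    # forget gate, Eq. (5)
    o = sigmoid(Wo @ z + bo)                    # output gate, Eq. (6)
    c = f * c_prev + i * np.tanh(Wc @ z + bc)   # new cell state, Eq. (7)
    h = o * np.tanh(c)                          # new hidden state, Eq. (8)
    return h, c

rng = np.random.default_rng(1)
d_x, d_h = 4, 3
x_t = rng.normal(size=d_x)
h0, c0 = np.zeros(d_h), np.zeros(d_h)
W = lambda: rng.normal(size=(d_h, d_h + d_x))   # fresh random weight matrix
b = np.zeros(d_h)
h1, c1 = lstm_step(x_t, h0, c0, W(), b, W(), b, W(), b, W(), b)
```

The forget gate f scales how much of the old cell state survives, which is what lets the cell remember earlier sequence content across many steps.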
The softmax activation, also called the logistic activation, is used to handle classification problems; it keeps the output in the range 0 to 1, computing a likelihood or probability. The softmax formula is:

\sigma(z)_j = e^{z_j} / \sum_k e^{z_k}   (10)

Here, z is the input to the output layer and j indexes the outputs.

IV. EXPERIMENT AND OUTPUT

After creating the model function we need to train the model. We fit the model with the current word and the next word, using 80 percent of the data for training and 20 percent for testing. We set the number of epochs to 100 and verbose to 2, then train with the fit function. Training the model took almost 3 hours and achieved an accuracy of 98.766% with a loss of 0.0430. Figure 4 shows the model training accuracy graph and Figure 5 the loss graph.

Figure 4: Model accuracy graph for text generation
Figure 5: Model loss graph for text generation

Previously we worked with one-directional RNN/LSTM text-sequence generation; here we use a bidirectional RNN for sequence generation. The two perform differently, with different training accuracy and loss, as shown in Table 2.

Our main objective in this test is to generate the next sequence of words. For output, we created a function in which the pre-defined Bengali word embedding associates each word with a vector, relating the words in the file through numeric values, together with a seed text for display. We generate a padded sequence from the fixed seed words, set the number of next words to predict, and call the model with the maximum sequence length. Table 3 shows our test results.

V. CONCLUSION AND FUTURE WORK

We have proposed a good technique for automatic Bengali text generation using a bi-directional RNN.
Although no model gives perfectly precise results, our model gives good output, and most of its output is accurate. Using the proposed model we have successfully generated fixed-length, meaningful Bengali text. The proposed system still has some flaws: it cannot generate text of arbitrary length, since the length of the generated text must be defined, and a pad token must be defined for predicting the next words. In future work, we will build an automatic Bengali text generator that produces arbitrary-length Bengali text without using any token or predefined sequence length.

VI. ACKNOWLEDGMENT

We would like to express our gratitude to our Computer Science and Engineering Department for providing excellent research facilities, and special thanks to our DIU NLP and ML lab for supporting and helping us complete this research work.

REFERENCES
1. P. P. Barman, A. Boruah, “A RNN based Approach for next word prediction in Assamese Phonetic Transcription,” Procedia Computer Science, Volume 143, Pages 117-123, 2018.
2. H. Noh, P. H. Seo, B. Han, “Image Question Answering Using Convolutional Neural Network With Dynamic Parameter Prediction,” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 30-38, 2016.
3. M. Sundermeyer, R. Schlüter, H. Ney, “LSTM Neural Networks for Language Modeling,” 13th Annual Conference of the International Speech Communication Association, pp. 194-197, Portland, OR, USA, September 2012.
4. T. Mikolov, W. Yih, G. Zweig, “Linguistic Regularities in Continuous Space Word Representations,” Proceedings of NAACL-HLT 2013, pages 746-751, Atlanta, Georgia, 9-14 June 2013.
5. A. Mnih, K. Kavukcuoglu, “Learning word embeddings efficiently with noise-contrastive estimation,” Proceedings of the 26th International Conference on Neural Information Processing Systems, Volume 2, Pages 2265-2273, December 2013.
6. H. Schmid, “Part-of-speech tagging with neural networks,” Proceedings of the 15th Conference on Computational Linguistics, Volume 1, Pages 172-176, Kyoto, Japan, August 1994.
7. T. Mikolov, S. Kombrink, L. Burget, J. Černocký, S. Khudanpur, “Extensions of recurrent neural network language model,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), DOI: 10.1109/ICASSP.2011.5947611, July 2011.
8. Y. Bengio, R. Ducharme, P. Vincent, C. Jauvin, “A Neural Probabilistic Language Model,” The Journal of Machine Learning Research, Volume 3, Pages 1137-1155, February 2003.
9. S. Islam, et al., “Sequence-to-sequence Bangla Sentence Generation with LSTM Recurrent Neural Networks,”
Procedia Computer Science 152 (2019): 51-58.

Table 2: Comparison of bi-directional LSTM and LSTM

APPROACH | ACCURACY | LOSS
LSTM | 97% | 0.04354
Bi-directional LSTM | 98.766% | 0.0430

Table 3: Bengali text generation

Given Text | Output
শত | শত শত ত ন ত ণীর কমসং া্র ব ব া হেব
ভানুয়াতেত | ভানুয়াতেত িমক আটকা পেড়েছ
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /ENU (Use these settings to create PDFs that match the "Required" settings for PDF Specification 4.01)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
<s>I.J. Modern Education and Computer Science, 2018, 12, 44-53
Published Online December 2018 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijmecs.2018.12.06
Copyright © 2018 MECS

A Rule Based Extractive Text Summarization Technique for Bangla News Documents

Partha Protim Ghosh, Institute of Information Technology, University of Dhaka. Email: bit0440@iit.du.ac.bd
Rezvi Shahariar, Institute of Information Technology, University of Dhaka. Email: rezvi@du.ac.bd
Muhammad Asif Hossain Khan, Department of Computer Science & Engineering, University of Dhaka. Email: asif@du.ac.bd

Received: 12 September 2018; Accepted: 17 October 2018; Published: 08 December 2018

Abstract—News summarization is the process of distilling the most important information from a news document in a precise way. With the advancement of the Internet, almost all Bangla newspapers now have online versions, and readers of this era increasingly read news on websites. However, the large volume of electronic news content makes it burdensome for a reader to extract valuable information. To mitigate this pain point, this paper proposes an automatic method for summarizing Bangla news documents. In the proposed approach, graph-based sentence scoring features are introduced for the first time for Bangla news document summarization. After analyzing a vast number of Bangla news documents, 12 sentence scoring features have been introduced for calculating the score of a sentence. An improved summary generation method has also been proposed, which removes redundant information from the summary.
The result is evaluated using a standard summary evaluation tool called ROUGE, and the proposed method is found to outperform all existing methods used in Bangla news summarization.

Index Terms—Bangla news summarization, Extractive based approach, NLP, ROUGE, Sentence scoring features.

I. INTRODUCTION

People are passionately curious to know the unknown. They like to share their knowledge and current social and political incidents with others for the development of society. They want to build social bonds around shared issues, and this bonding begins by communicating with the mass of people through some medium. The newspaper is one of the most popular media for doing this. Johann Carolus was the first man to publish a newspaper [1]. From the perspective of Bangladesh, the first newspaper of independent Bangladesh was “The Daily Azadi”. As time went on, the number of Bangla newspapers also increased. Though Bangladesh is a small country, a great number of newspapers are published there each day. A radical change came to the information world after the</s>
<s>invention and commercialization of the Internet all over the world in the late 1990s. People started to publish news over the Internet; newspapers that publish their news over the Internet are called online newspapers. After the development of the Internet in Bangladesh, online newspapers gained popularity day by day. The first Bangladeshi online newspaper, bdnews24.com, started its journey back in 2006 [31]. As time went on, their number increased dramatically. Now, almost all Bangla newspapers publish their news on websites, and the volume of electronic Bangla news data grows larger day by day. Thus, Bangla speakers face an overflow of Bangla news articles in daily life, and finding relevant information within the shortest time becomes a problem. Not all the content of a published news article is important, but people have to read all of it to extract the exact information from the various news items. Researchers identified this problem a few years back and devoted themselves to Bangla Natural Language Processing (BNLP). Text summarization is an application area of Natural Language Processing (NLP). It is the process of finding the most important information in a source document in a precise way; in effect, it represents the condensed information of a longer text. All the important information of the longer text should be present in the shorter text, and the latter should not be more than half of the original text. With the help of this text summarization process, it is possible to summarize long news content into a shorter version that includes all the salient information.
Basically, the summarization process can be categorized into two types: (i) extractive summarization and (ii) abstractive summarization. Extractive summarization is a process in which important sentences are selected from the original text, and these sentences represent the whole document. On the other hand, abstractive text summarization is a process in which a semantic method is used to examine and interpret the original text, and then a new brief text is produced on the basis of the information in the original text. News summarization systems were proposed initially for English news content around five decades ago. Since then, several researchers have enriched text summarization systems. Bangla is the 8th most spoken language in the world [7]. In Bangladesh we have an overabundance of Bangla electronic news data, but it is a matter of great regret that only a few research works have been done on Bangla news summarization [10, 11, 12, 13, 20]. English news summarization systems may not be directly applicable to Bangla news content because of different sentence structures, grammatical rules, and so on. Research on the Bangla language is difficult because automatic tools to facilitate it are unavailable, and identifying the subject and object of a Bangla sentence is very difficult.</s>
<s>Additionally, the grammatical rules of this language are quite inconsistent. In this challenging context, a new approach to Bangla news document summarization is presented here using a rule-based approach, with the following major contributions:

i. Graph-based sentence scoring features are introduced for Bangla news documents and used along with surface-level and corpus-level features.
ii. Most of the sentence scoring features, such as aggregate similarity, bushy path, keyword in sentence, presence of inverted commas, and special symbols, are introduced for the first time in Bangla news summarization, which helps to generate the summary more accurately.
iii. An improved summary generation procedure is also introduced that helps to remove redundant information from the summary.

The rest of the paper is organized as follows: Section 2 reviews the literature on Bangla news summarization. In Section 3, the proposed method is discussed in detail. The evaluation and a discussion of the results are presented in Section 4. Finally, the conclusion is drawn with future directions in Section 5.

II. LITERATURE REVIEW

News summarization was first introduced for the English language by H. P. Luhn [3] in 1958. There, the significance factor of a sentence is derived from an analysis of its words; it was proposed that the frequency of a word in a news article establishes a useful measure of the word's impact. The method of H. P. Luhn [3] was first extended by P. B. Baxendale [28], who incorporated sentence position and cue phrases: a sentence can be important based on its position, on containing certain cue words (i.e., words like “important” or “relevant”), or on containing words from the heading. Today, various research works are available in the arena of English news summarization [18, 19], and recent works on English news summarization have also been followed in modeling our methodology. One such recent work [19] was proposed by Hilario Oliveira and Rafael Ferreira in 2016.
They used several new sentence scoring features: (i) lexical similarity, (ii) sentence centrality, (iii) word co-occurrence, and (iv) text rank. That paper also presents a comparative analysis of the features used to calculate sentence scores; its aim is to investigate the performance of several shallow sentence scoring features. The English news summarization procedure has reached a mature stage. Apart from English, news summarization for other languages, such as Bangla and Hindi, is not well defined, and only a few attempts have been made in the field of Bangla news summarization. In 2004, Islam and Masum [8] presented “Bhasa”, a corpus-oriented search engine and summarizer. It performed document indexing and information retrieval based on keywords using the vector space retrieval model [21] for Unicode Bangla text. This was the first attempt at Bangla text summarization. A few years later, some techniques from the investigation of English news summarization systems were applied to summarize Bangla news by Nizam Uddin et al. in 2007 [9]. They proposed a technique incorporating some existing methods of English news summarization, as follows: (i) the location method, (ii) the cue method, (iii) the title method, (iv) term</s>
<s>frequency, and (v) numerical data. They took the top 40% of ranked sentences from the input document as the summary. In 2012, Kamal Sarkar [10] proposed an easy-to-implement approach for Bangla news summarization, similar to the method of Edmundson [25]. It has three major steps: (i) preprocessing, (ii) sentence ranking, and (iii) summary generation. In this method, thematic terms related to the main theme of a news document are utilized; a term whose TF-IDF (Term Frequency-Inverse Document Frequency) value is greater than a predefined threshold is taken as a thematic term. In 2013, Md. Iftekharul Alam Efat et al. [13] introduced a method for Bangla news summarization based on sentence scoring and ranking. In 2015, Md. Majharul Haque and Zerina Begum proposed an automatic Bangla news document summarization method introducing sentence frequency and clustering. Another work on Bangla news summarization was proposed by Sumya Akter, Md. Palash Uddin and Shikhor Kumer Roy in 2017 [6]. That paper presented a summarization method which extracts important sentences from single or multiple Bangla news documents, using a sentence clustering approach to generate summaries from both single and multiple documents. A more recent work on Bangla news summarization was done by Sheikh Abujar, Mahmudul Hasan and M.S.I. Shahin in 2017 [17]. In 2015, Md. Majharul [32] proposed a Bangla news summarization technique using term frequency and sentence clustering; it also considers numerical figures. For summary generation, it divides the sentences into two clusters and takes half of the summary sentences from each cluster. The problem with that work, however, is that it evaluates the method using only 5 news articles.
Again, in 2017, Md. Majharul [20] proposed Bangla news summarization by introducing sentence frequency and an improved sentence ranking technique. A significant aspect of this work is that it gives the first sentence more priority if it contains any title word; it also considers numerical figures written in words. All existing Bangla news summarization research works have used either surface-level or corpus-level sentence scoring features to generate summaries. This finding drew our attention to devising an approach based on both. Hence, the proposed method uses graph-based sentence scoring features, for the first time in Bangla news document summarization, along with both surface-level and corpus-level features. Moreover, we analyzed our proposed method using ROUGE (Recall-Oriented Understudy for Gisting Evaluation), which shows better results than the five latest existing methods found in the Bangla literature.

III. TEXT SUMMARIZATION TECHNIQUE FOR BANGLA NEWS DOCUMENTS

In this section, we first describe the proposed methodology used for Bangla news summarization. The process starts by taking a Bangla news document as input. The entire proposed method is divided into the following four sub-processes: document preprocessing, calculating sentence scores using</s>
<s>sentence scoring features, ranking the sentences, and selecting the summary, as delineated below.

3.1 Document Preprocessing

Preprocessing is the first step of our method: it starts from the user's input of a Bangla news document. The input news document is segmented into sentences using the punctuation marks “।”, “?”, or “!” as the end point of a sentence. Then every sentence is tokenized into words; in this way, a word list is generated from the input news document. Some words in the Bangla language are used to indicate tense or adjectives, or to adapt the grammatical structure. These words are called stop words. Stop words have little importance in representing a document and should be removed before further analysis. To identify them, a list of stop words is kept in the system; every word of the input document is checked against this list, and matching words are removed. The list of 398 stop words for the Bangla language has been collected from [29]. Bangla words are highly inflectional, so a word stemming algorithm is applied to convert words with different endings to a single word, as shown in Fig. 1.

Fig.1. Word stemming in Bangla

3.2 Sentence Score Calculation

Sentence scoring features define how important a sentence is among all the sentences of a document; the sentence with the highest score is the most important sentence in the document. Selecting features is the most important part of a summarization method. In our work, more than 2000 Bangla news documents were analyzed to determine which features can represent a news document. After this analysis, 12 sentence scoring features were selected to calculate sentence scores. All these features can be categorized into three types.

3.2.1 Graph Based Features

In this subsection, aggregate similarity and bushy path are discussed briefly.
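Before turning to the individual features, the preprocessing pipeline of Section 3.1 can be sketched in Python. This is a minimal illustration, not the authors' implementation: the stop-word list below is a tiny stand-in for the 398-word list of [29], and the stemming step is omitted.

```python
import re

# Tiny illustrative stand-in for the 398-word Bangla stop list of [29].
STOP_WORDS = {"এই", "ও", "এবং", "কিন্তু", "যে"}

def preprocess(document):
    """Segment a Bangla document into sentences on the danda (।), '?',
    or '!', tokenize each sentence on whitespace, and drop stop words."""
    sentences = [s.strip() for s in re.split(r"[।?!]", document) if s.strip()]
    tokens = [[w for w in s.split() if w not in STOP_WORDS] for s in sentences]
    return sentences, tokens

sents, tokens = preprocess("বাংলাদেশ একটি সুন্দর দেশ। এই দেশে অনেক নদী আছে।")
print(len(sents))   # 2 sentences
print(tokens[1])    # stop word "এই" has been removed
```

The token lists produced here are the input to the sentence scoring features that follow.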
3.2.1.1 Aggregate Similarity (F1)

Aggregate similarity is a graph-based technique that shows how relevant a sentence is to the other sentences of a document. It is based on the idea of centrality, which identifies the main topic of discussion in a document by finding the sentences with the highest relevancy among all the sentences of a news document; relevant sentences have more information in common with the other sentences. This technique computes the importance of a sentence si by calculating its cosine similarity with all other sentences of the news document. Each sentence si is considered a vertex of a graph, and if the cosine similarity between two sentences is greater than a threshold (0.16), an edge is created between them whose weight is the similarity value. The total weight of a sentence is the summation of the weights of all the edges created for it. Thus, highly connected vertices represent central sentences that indicate the main discussion in a document. Aggregate similarity is defined as:

Aggregate Similarity of si = Σ_{j=1, j≠i}^{S} edge_weight(si, sj)   (1)

where S is the total number of sentences in</s>
<s>a document and edge_weight is the similarity between sentences si and sj. If the aggregate similarity score of a sentence is greater than 1, the score should be normalized using the following equation:

Normalized Score of si = (S1 − Min) / (Max − Min)   (2)

Here, S1 is the similarity score of the current sentence, and Min and Max are the minimum and maximum similarity scores within the document.

3.2.1.2 Bushy Path (F2)

Bushy path is another graph-based method used to compute the salience of a sentence based on the centrality idea, and it is very similar to the aggregate similarity method. It computes the importance of a sentence from the ratio of edges generated by that sentence in the news document. As with aggregate similarity, an edge is created between two sentences if their similarity value is greater than the threshold; here, the number of edges is counted for each sentence si in the document. The score is measured using the following equation:

BushyPath(si) = (NoE connected to si) / (MNoE connected to any sentence)   (3)

Here, the number of edges is denoted NoE, and the maximum number of edges is denoted MNoE.

3.2.2 Corpus Based Features

In this subsection, only corpus-based features are described. Term frequency-inverse sentence frequency and keyword in sentence are the most widely used corpus-based sentence scoring features.

3.2.2.1 Term Frequency Inverse Sentence Frequency (F3)

Term frequency-inverse sentence frequency (TF-ISF) is used to measure the weight of terms according to their number of appearances in a document. Term frequency (TF) measures how frequently a term occurs in the document, while inverse sentence frequency (ISF) measures how descriptive a word is by determining whether the word is common or rare across all sentences.
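Before continuing, the two graph-based scores above, Eqs. (1)–(3), can be sketched as follows. This is an illustrative reading of the method rather than the authors' code: sentences are represented as bags of words, the cosine similarity and the 0.16 threshold follow the text, and the min-max normalization is Eq. (2).

```python
import math
from collections import Counter

THRESHOLD = 0.16  # similarity threshold from the paper

def cosine(a, b):
    """Cosine similarity between two token lists (bag-of-words)."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def graph_scores(token_lists):
    """Return (aggregate similarity, bushy path) scores per sentence."""
    n = len(token_lists)
    agg = [0.0] * n   # Eq. (1): sum of incident edge weights
    edges = [0] * n   # edge counts for Eq. (3)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            sim = cosine(token_lists[i], token_lists[j])
            if sim > THRESHOLD:   # create an edge weighted by similarity
                agg[i] += sim
                edges[i] += 1
    max_e = max(edges) or 1
    bushy = [e / max_e for e in edges]          # Eq. (3)
    lo, hi = min(agg), max(agg)
    if hi > lo:                                  # Eq. (2): min-max normalize
        agg = [(x - lo) / (hi - lo) for x in agg]
    return agg, bushy

agg, bushy = graph_scores([
    ["dhaka", "flood", "news"],
    ["flood", "dhaka", "damage"],
    ["cricket", "match"],
])
print(agg)    # [1.0, 1.0, 0.0] -- the isolated third sentence scores 0
print(bushy)  # [1.0, 1.0, 0.0]
```

The first two sentences share words and so form an edge; the third shares nothing, receives no edges, and scores zero on both features.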
If a word appears frequently in different sentences, it is an important word for summary generation. The TF-ISF score of a term is calculated by the following equation:

TF-ISF of term ti = TF(ti) × log(S / Sti)   (4)

The TF-ISF score of a sentence is the summation of the TF-ISF scores of all the terms of that sentence, as determined by the following equation:

TF-ISF of sentence si = Σ_{tj ∈ T} TF-ISF(tj)   (5)

Here, TF returns the frequency of term ti in the document, S is the total number of sentences in the document, T is the set of terms in sentence si, and Sti is the total number of sentences in which ti occurs. In this case too, the TF-ISF score of a sentence can be greater than 1, so the same normalization procedure used for aggregate similarity should be applied to each score.

3.2.2.2 Keyword in Sentence (F4)

The keywords of a document are its highly frequent words. If a highly frequent word is present in a sentence, that sentence has a high probability of discussing the main topic of the news document. Here, the top 10% most frequent words are</s>
<s>selected as keywords. The keyword score is computed as:

Keyword score of si = NKeys / TNWords   (6)

Here, NKeys denotes the number of keywords in a sentence and TNWords denotes the total number of words present in the sentence.

3.2.3 Surface Level Features

Various surface-level sentence scoring features are used in the literature. Among them, sentence position, title word, cue word, numerical value, special symbol, presence of inverted commas, URL, and email address are used extensively. These features are discussed briefly below.

3.2.3.1 Sentence Position (F5)

The position of a sentence in a news document is one of the most effective features for selecting relevant sentences for the summary. The idea behind this feature is that the first sentence of a news document is its most relevant sentence, and importance decreases as a sentence appears further down in the document. Most of the time, the first sentence is a description of the title of the news document, so it is the most important sentence. The proposed method likewise gives the first sentence the highest importance, decreasing gradually thereafter. The sentence position score is measured using the following equation:

Sentence Position score of si = 1 − (i / S)   (7)

Here, i is the index of the ith sentence in the document, starting from zero, and S is the total number of sentences in the news document.

3.2.3.2 Title Word (F6)

The title words of a news document are the words most relevant to the document's topic of discussion; they represent the theme the news document contains. In several existing methods [13, 16, 25], title words have been considered for sentence scoring. We have also observed, from the analysis of 2000 Bangla news documents, that title words convey the theme of the news document in most cases.
To compute the title word score of any sentence si, the following equation is used:

Title word score of si = |Wsi ∩ Wt| / |Wt| (8)

Here, Wsi denotes the set of words in the sentence and Wt denotes the set of words in the title.

A Rule Based Extractive Text Summarization Technique for Bangla News Documents. Copyright © 2018 MECS. I.J. Modern Education and Computer Science, 2018, 12, 44-53.

3.2.3.3 Cue Word (F7)

The cue phrase technique is one of the earliest methods used for summary generation. In the Bangla language, more than one sentence can be used to express a piece of information, and semantic relations exist between such linked sentences. A cue phrase emphasizes the gist of two sentences. Cue words include "ম োটকথো" (in short), "অবশেশে" (at last), "ইতি শযে" (already), "মেশেিু" (since), "পতিশেশে" (in summary), etc. Thus, a sentence containing a cue word has a higher probability of being selected as a summary sentence. The score is computed as:

Cue word score of si = No. of cue words in the sentence / Total no. of cue words in the document (9)

In this equation, the numerator is the number of cue words in the sentence and the denominator is the total number of cue words in the document.

3.2.3.4 Numerical Value (F8)

A numerical figure is always important for representing significant information in a news document.
Sentences containing numerical data are good candidates for inclusion in the summary. In our proposed method, a numerical-figure identification pattern is used to identify numerical values. The numerical value score is calculated using the following equation:

Numerical value score of si = No. of numerical words / Total number of words (10)

In this equation, the numerical value score of si is obtained by dividing the number of numerical words by the total number of words in the sentence.

3.2.3.5 Presence of Inverted Comma (F9)

In Bangla, quotation marks or inverted commas ("", '') surrounding quotations, direct speech, etc. contain important information. This is especially important for news documents and articles in which people's speech is reported; such quoted speech has a good chance of being selected for the summary, because people's statements must be quoted to present the news. The score of this feature is calculated as:

Inverted comma score of si = No. of words inside quotation marks / Total number of words (11)

Here, the inverted comma score of a sentence is obtained by dividing the number of words inside quotation marks by the total number of words in the sentence.

3.2.3.6 Special Symbol (F10)

This feature considers symbols such as % and various currency symbols. A numerical value accompanied by a currency symbol increases the probability of a sentence being selected for the summary. The special symbol score is calculated as follows:

Special symbol score of si = No. of special symbols / Total number of words (12)

Here, the special symbol score of si is obtained by dividing the number of special symbols in the sentence by the total number of words in the sentence.

3.2.3.7 Date Format (F11)

Dates are also very important for any news document. The presence of a date increases the importance of a sentence, because a date is more informative than other words.
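Several of the surface-level scores above (Eqs. 7-10) can be sketched in the same illustrative style; function names are ours, and the simple digit pattern for "numerical words" is an assumption, not the paper's exact identification pattern:

```python
import re

def sentence_position_score(i, S):
    """Eq. (7): sentence i of S (0-indexed); the first sentence scores 1."""
    return 1 - i / S

def title_word_score(sentence_words, title_words):
    """Eq. (8): overlap between sentence words and title words over |Wt|."""
    return len(set(sentence_words) & set(title_words)) / len(set(title_words))

def cue_word_score(sentence_words, cue_words, total_cues_in_doc):
    """Eq. (9): cue words in the sentence over cue words in the document."""
    hits = sum(1 for w in sentence_words if w in cue_words)
    return hits / total_cues_in_doc if total_cues_in_doc else 0.0

def numerical_value_score(sentence_words):
    """Eq. (10): fraction of words that are numeric (digit-only assumption)."""
    numeric = sum(1 for w in sentence_words if re.fullmatch(r"\d+", w))
    return numeric / len(sentence_words)
```

The remaining surface features (inverted commas, special symbols, dates, URL/Email) follow the same ratio pattern with different counts.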
The date format score is calculated using the following equation:

Date score of si = No. of dates in the sentence / No. of dates in the document (13)

Here, the date score of si is obtained by dividing the number of dates in the sentence by the total number of dates in the whole document.

3.2.3.8 Presence of URL/Email Address (F12)

Nowadays, use of the Internet has spread widely. A news document may contain URLs or Email addresses, which provide more information about the document, so this valuable information should be present in the summary. Sentences containing a URL or Email address are therefore given more priority while generating the summary.

URL/Email score of si = No. of URLs or Emails in the sentence / Total no. of URLs or Emails in the document (14)

Here, the numerator denotes the number of URLs or Email addresses in the sentence and the denominator denotes the total number of URLs or Email addresses in the whole document.

3.2.4 Sentence Total Score Calculation

All the scoring features are applied to every sentence in the document. The total score of a sentence is the summation of all twelve feature values (F1 to F12), as shown in the following equation:

Sentence total score of si = ∑ k=1..12 F(k) (15)

3.3 Sentence Ranking

After the total score calculation is complete, every sentence of the news document has a score. On the basis of the assigned scores, the sentences are
sorted in descending order. This sorted list is the rank list of the sentences of the news document.

3.4 Summary Generation

This is the final step of the proposed methodology. For summary generation, the top 40% of sentences are first extracted from the rank list as a temporary summary. This percentage was chosen empirically: we extracted summaries at rates from 30% to 45% of the rank list and tested the results with the ROUGE evaluation tool, and the system gave the best results when 40% of the sentences were extracted. After selecting the sentences of the temporary summary, the cosine similarity among the selected sentences is calculated. If the cosine similarity of any two sentences is greater than 0.6, meaning the two sentences are more than 60 percent similar, the smaller sentence is removed from the summary and the next top-ranked sentence from the rank list is selected in its place. This similarity threshold has been used in a recently published journal paper on English news summarization [33]. The purpose of this step is to remove nearly identical sentences that represent the same information; the larger sentence is kept because it carries almost all the information of the smaller one. Finally, the selected summary sentences are arranged in the exact order of the original document, and these arranged sentences are treated as the summary of the document.

3.5 Pseudocode of the Text Summarizing Technique for Bangla News Documents

Input: Bangla news document
  SW: List of stop words
  CW: List of cue words
Output:
  SUMMARY: Summary sentences
Begin:
  Segment the news document into sentences according to the punctuation marks ।, ?, !
  Tokenize each sentence into words based on the space after each word
  Remove from the word list the words that are members of the stop word list (SW)
  Convert all words into their base form with the help of a stemming algorithm
  TW ← list of title words
  KW ← list of key words
  N ← number of sentences in the document
  /* Variable initialization */
  SAG_SM ← 0 // aggregate similarity score
  SBP ← 0 // bushy path score
  STF-ISF ← 0 // term frequency-inverse sentence frequency score
  SN ← 0 // numerical figure score
  SKW ← 0 // keyword-in-sentence score
  SP ← 0 // sentence position score
  SDate ← 0 // score for the presence of a date in the sentence
  SCW ← 0 // score for cue words in the sentence
  SIV_C ← 0 // score for the presence of inverted commas in the sentence
  STW ← 0 // score for title words in the sentence
  SSS ← 0 // score for the presence of special symbols in the sentence
  SEmail ← 0 // score for the presence of an email address in the sentence
  SCORE ← ∅ // holds the scores of all sentences
  SUMMARY ← ∅ // summary sentences of the input document
  For i ← 1 to N do
    SAG_SM ← aggregate similarity score based on equations 1 and 2
    SBP ← bushy path score based on equation
3
    STF-ISF ← TF-ISF score based on equations 4 and 5
    SN ← numerical figure score based on equation 10
    SKW ← keyword score based on equation 6
    SP ← sentence position score based on equation 7
    SDate ← date presence score based on equation 13
    SCW ← cue word score based on equation 9
    SIV_C ← inverted comma presence score based on equation 11
    STW ← title word score based on equation 8
    SSS ← special symbol presence score based on equation 12
    SEmail ← email address presence score based on equation 14
    SCORE ← total score of the sentence based on equation 15
  End Loop
  Sort SCORE in descending order
  SUMMARY ← top 40% of sentences from the ordered list, as a temporary summary
  n ← number of sentences in the temporary summary
  For i ← 1 to n do
    Calculate the similarity among sentences
    If (similarity score >= 0.6 between any two sentences)
      Remove the smaller of the two sentences from SUMMARY
      Add the next top-ranked sentence to SUMMARY from the remaining ordered list
    End If
  End Loop
  Sort SUMMARY in ascending order according to sentence position in the input document
  Return SUMMARY
End

IV. EVALUATION AND RESULT

Evaluating a summarization method is a difficult task, and a fully satisfactory way of doing it is yet to be achieved [3]. In this situation, several techniques have been applied to measure the quality of a summary, which depends on: a) the importance of the selected contents and b) the presentation quality. Presentation quality, in turn, can be assessed on the basis of grammatical correctness and coherence. Considering all of these aspects, evaluation procedures are divided into two main categories: a) intrinsic mode and b) extrinsic mode [26]. Intrinsic evaluation of selected contents is usually done by comparing system-generated summaries with model summaries written by human professionals.
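Returning to the procedure of Sections 3.3-3.4, the ranking, top-40% selection, and redundancy-removal steps can be sketched in Python as follows. This is a hedged illustration, not the authors' Java tool: the bag-of-words cosine and the exact refill strategy when a near-duplicate is dropped are our assumptions, and total scores (Eq. 15) are taken as given:

```python
import math
from collections import Counter

def cosine_similarity(words_a, words_b):
    """Cosine similarity between bag-of-words vectors of two sentences."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def summarize(sentences, scores, ratio=0.4, threshold=0.6):
    """Rank sentences by total score, keep the top `ratio`, drop the shorter
    of any two overly similar sentences (refilling from the rank list), and
    return the survivors in original document order."""
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    k = max(1, int(len(sentences) * ratio))
    chosen, pool = ranked[:k], ranked[k:]
    changed = True
    while changed:
        changed = False
        for i in chosen:
            for j in chosen:
                if i < j and cosine_similarity(sentences[i], sentences[j]) > threshold:
                    # Remove the smaller sentence, then refill from the rank list
                    drop = i if len(sentences[i]) < len(sentences[j]) else j
                    chosen.remove(drop)
                    if pool:
                        chosen.append(pool.pop(0))
                    changed = True
                    break
            if changed:
                break
    # Restore original document order
    return [sentences[i] for i in sorted(chosen)]
```

With five sentences and a 40% ratio, two sentences are kept; if they are more than 60% similar, the shorter one is replaced by the next-ranked sentence.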
More specifically, evaluation is achieved by measuring the overlap between the model summary and the automatically extracted summary, as in the ROUGE evaluation system [22]. In the extrinsic evaluation method, the quality of a summary is judged by how it affects the completion of some other task. The proposed method is evaluated using the intrinsic mode of summary evaluation.

4.1 Dataset

In the initial stage of Bangla news summarization research, there was no benchmark dataset for evaluating Bangla news summarization systems. To mitigate this problem, researchers created a standard dataset for Bangla news summarization evaluation by analyzing 3400 Bangla news documents. These documents were collected from the most popular Bangladeshi newspaper, the Daily Prothom Alo, and contain a variety of news covering a wide range of topics such as politics, sports, crime, economy, and environment. After analyzing these documents, 200 documents were selected randomly. The model summaries of these documents were generated by two groups of scholars of the Bangla language, each group having three members; thus, 6 model summaries were generated for each document, of which 3 were selected randomly. These model summaries were compared with the system-generated summary. This
dataset has been used by several Bangla natural language processing researchers in recent years; notably, several research works on Bangla news summarization have been published based on it [10, 11, 12, 20]. We have used this dataset [24] for evaluating our proposed method. We divided the dataset randomly into two groups of 100 documents each, and each document comes with 3 model summaries.

4.2 Evaluation

Evaluation of a summary is not an easy task, because in principle there is no single ideal summary of a news document. For summary evaluation, the precision, recall, and F-measure metrics are used. Notably, these evaluation metrics have been adopted in several news summarization systems for Bangla [10, 11, 12, 13, 20] and English [19, 26]. If A denotes the set of sentences retrieved by the summarizer and B denotes the set of sentences that are relevant with respect to the target set, precision, recall, and F-measure are computed by the following equations:

Precision (P) = |A ∩ B| / |A| (16)

Recall (R) = |A ∩ B| / |B| (17)

F-measure = (2 × P × R) / (P + R) (18)

4.3 Experiment and Results

To judge the efficiency of the proposed method, experiments have been conducted on 200 news documents. Each time, the system-generated summary is compared with the three model summaries of the news document, and the average precision, recall, and F-measure are computed with the ROUGE automatic evaluation package [23]; the results are shown in Table 1. The proposed method is compared with five existing modern methods [10, 11, 12, 13, 20] found in the Bangla literature, all published in recent years. These methods were selected because all of them have been evaluated on the same dataset, for which the results differ from the respective results claimed by the corresponding authors of the existing methods [20]. Comparison results based on ROUGE-1 for the two datasets are depicted in Fig.
2 and Fig. 3, respectively, where Method 1 is the method presented in [10], Method 2 is in [11], Method 3 is in [13], Method 4 is in [12], and Method 5 is in [20]. For mean comparisons of the proposed method with the five existing methods, a t-test has been performed at the 95% confidence level, testing the precision, recall, and F-measure of the five existing methods for statistical significance. The null hypothesis of the t-test is that the mean of the proposed method is less than or equal to the mean of each existing method; the alternative hypothesis is that the mean of the proposed method is greater. In every case, t (the calculated value) is greater than T (the tabulated value), indicating rejection of the null hypothesis and acceptance of the alternative one. Thus it is fair to claim that the proposed method achieves significantly better results than all the existing methods compared, showing a significant improvement over all the latest methods of Bangla news summarization.

4.4 Discussion on Results

In the proposed method, some innovative features have been introduced to obtain better performance. Features such as
aggregate similarity, bushy path, and keyword in sentence are newly used features in the field of Bangla news summarization. For these reasons, the proposed method performed better than the previous methods.

Table 1. Average ROUGE-1 scores of the proposed method

Data Set           Average Recall   Average Precision   Average F-Measure
Data Set 1         0.6637533        0.603662            0.6244975
Data Set 2         0.6904979        0.5911405           0.6306442
Combined dataset   0.677126         0.597401            0.627562

Fig. 2. Comparison of the proposed method with the five latest existing methods based on the average ROUGE-1 scores, using dataset 1.

Fig. 3. Comparison of the proposed method with the five latest existing methods based on the average ROUGE-1 scores, using dataset 2.

In the previous subsection, the average recall, precision, and F-measure scores of ROUGE-1 were reported for the proposed method, and the comparison of the proposed method with the five latest Bangla news summarization methods was demonstrated on ROUGE-1 scores. The proposed method was found to outperform all of them. The improvement in performance over the five latest existing methods is given in Table 2.

V. CONCLUSION AND FUTURE WORK

A new approach to summarizing Bangla news documents based on a rule-based approach has been illustrated here. Although there are many research works on English news summarization, they may not be directly applicable to Bangla because of the complexities of the Bangla language in sentence structure, grammatical rules, inflection of words, and so on.
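For reference, the sentence-overlap metrics of Eqs. (16)-(18) used throughout the evaluation above can be sketched as follows; the function name and the sample sentence IDs are ours:

```python
def precision_recall_f(retrieved, relevant):
    """Sentence-level metrics from Eqs. (16)-(18):
    P = |A ∩ B| / |A|, R = |A ∩ B| / |B|, F = 2PR / (P + R)."""
    a, b = set(retrieved), set(relevant)
    overlap = len(a & b)
    p = overlap / len(a) if a else 0.0
    r = overlap / len(b) if b else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```

In practice the paper computes these overlaps with the ROUGE package [23] rather than by direct sentence-ID comparison.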
Despite these difficulties and challenges, this work has introduced an innovative method for summarizing Bangla news documents, which presents condensed extracted news to the reader through a tool implemented in the Java language. Bangla readers can thus save a lot of time by using our approach to read only the necessary news. In addition, graph-based sentence scoring features are introduced for the first time for Bangla news summarization, and the corpus-level and surface-level sentence scoring features have also been enhanced. A standard dataset is used for the evaluation of the proposed method, which shows significant improvement over all the latest Bangla news summarization methods. Evaluation has been done by measuring the similarity of system-generated summaries to human professionals' summaries using the ROUGE evaluation package. The average precision, recall, and F-measure scores for the proposed method are 0.60, 0.68, and 0.63, respectively. Our method does not handle synonyms: words expressed through different synonyms cannot be treated as the same word, because no synonym identification tool is available to us. In future work we will try to address this issue, and we will further enhance the sentence scoring features to bring the system-generated summary closer to the human-generated summary.

Table 2. Improvement of the proposed method over the five latest existing methods, based on ROUGE-1 scores

Method Name   Precision   Recall    F-Measure
Method 1      15.27%      24.53%    18.41%
Method 2      15.47%      24.48%    18.41%
Method 3      13.65%      22.90%    16.44%
Method 4      10.64%      21.20%    14.34%
Method 5      1.66%       10.40%    4.54%

ACKNOWLEDGMENT

We would like to thank the Information and Communication Technology (ICT) Division, Ministry of ICT, Government of the People's Republic of Bangladesh, for supporting this research work.

REFERENCES

[1] First newspaper. Retrieved from https://www.revolvy.com/page/Johann-Carolus [Online; accessed 5-May-2018].
[2] Ferreira, R., de Souza Cabral, L., Freitas, F., Lins, R. D., de França Silva, G., Simske, S. J., & Favaro, L. (2014). A multi-document summarization system based on statistics and linguistic treatment. Expert Systems with Applications, 41(13), 5780-5787.
[3] Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), 159-165.
[4] Haque, M. M., Pervin, S., & Begum, Z. (2013). Literature review of automatic single document text summarization using NLP. International Journal of Innovation and Applied Studies, 3(3), 857-865.
[5] Haque, M., Pervin, S., & Begum, Z. (2013). Literature review of automatic multiple documents text summarization. International Journal of Innovation and Applied Studies, 3(1), 121-129.
[6] Akter, S., Asa, A. S., Uddin, M. P., Hossain, M. D., Roy, S. K., & Afjal, M. I. (2017, February). An extractive text summarization technique for Bengali document(s) using K-means clustering algorithm. In Imaging, Vision & Pattern Recognition (icIVPR), 2017 IEEE International Conference on (pp. 1-6). IEEE.
[7] Chowdhury, M., Khalil, I., & Mofazzal, H. C. (2000). Bangla Vasar Byakaran. Dhaka: Ideal Publication.
[8] Islam, M. T., & Al Masum, S. M. (2004, December). Bhasa: A corpus-based information retrieval and summariser for Bengali text. In Proceedings of the 7th International Conference on Computer and Information Technology.
[9] Uddin, M. N., & Khan, S. A.
(2007, December). A study on text summarization techniques and implement few of them for Bangla language. In Computer and Information Technology, 2007 (ICCIT 2007), 10th International Conference on (pp. 1-4). IEEE.
[10] Sarkar, K. (2012). Bengali text summarization by sentence extraction. arXiv preprint arXiv:1201.2240.
[11] Sarkar, K. (2012, August). An approach to summarizing Bengali news documents. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics (pp. 857-862). ACM.
[12] Sarkar, K. (2014). A keyphrase-based approach to text summarization for English and Bengali documents. International Journal of Technology Diffusion (IJTD), 5(2), 28-38.
[13] Efat, M. I. A., Ibrahim, M., & Kayesh, H. (2013, May). Automated Bangla text summarization by sentence scoring and ranking. In Informatics, Electronics & Vision (ICIEV), 2013 International Conference on (pp. 1-5). IEEE.
[14] B. Language. (2017). History of Bengali language. Retrieved from https://www.cs.mcgill.ca/rwest/link-suggestion/wpcd2008-09 augmented/wp/b/Bengalilanguage.html [Online; accessed 05-May-2017].
[15] The Times of India. (2017). Nearly 60% of Indians speak a language other than Hindi. Retrieved from http://timesofindia.indiatimes.com/india/Nearly-60-of-Indians-speak-a-language-other-than-Hindi/articleshow/36922157.cms [Online; accessed 05-March-2018].
[16] Inshorts. (2017). Bengali is an official language in Africa's Sierra Leone. Retrieved from https://www.inshorts.com/news/bengali-is-an-official-language-in-africas-sierra-leone-1487699311123 [Online; accessed 06-February-2018].
[17] Abujar, S., Hasan, M., Shahin, M. S. I., & Hossain, S. A. (2017, July). A heuristic approach of text summarization for Bengali documentation. In
Computing, Communication and Networking Technologies (ICCCNT), 2017 8th International Conference on (pp. 1-8). IEEE.
[18] R. B. System. (2017). Rule based system. Retrieved from http://www.j-paine.org/students/lectures/lect3/node5.html [Online; accessed 01-April-2017].
[19] Oliveira, H., Ferreira, R., Lima, R., Lins, R. D., Freitas, F., Riss, M., & Simske, S. J. (2016). Assessing shallow sentence scoring techniques and combinations for single and multi-document summarization. Expert Systems with Applications, 65, 68-86.
[20] Haque, M., Pervin, S., & Begum, Z. (2017). An innovative approach of Bangla text summarization by introducing pronoun replacement and improved sentence ranking. Journal of Information Processing Systems, 13(4).
[21] Wong, S. M., Ziarko, W., & Wong, P. C. (1985, June). Generalized vector spaces model in information retrieval. In Proceedings of the 8th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 18-25). ACM.
[22] Lin, C. Y., & Hovy, E. (2003, May). Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1 (pp. 71-78). Association for Computational Linguistics.
[23] ROUGE 2.0. (2016). Java package for evaluation of summarization tasks with updated ROUGE measures. Retrieved from http://kavita-ganesan.com/content/rouge-2.0 [Online; accessed 25-May-2016].
[24] B. N. L. P. Community. (2016). Dataset for evaluating Bangla text summarization system. Retrieved from http://bnlpc.org/research.php [Online; accessed 8-August-2017].
[25] Edmundson, H. P. (1969). New methods in automatic extracting. Journal of the ACM (JACM), 16(2), 264-285.
[26] Hovy, E., & Lin, C. Y. (1999). Automated text summarization in SUMMARIST. Advances in Automatic Text Summarization, 81-94.
[27] Hariharan, S., Ramkumar, T., & Srinivasan, R. (2013).
Enhanced graph based approach for multi document summarization. Int. Arab J. Inf. Technol., 10(4), 334-341.
[28] Baxendale, P. B. (1958). Machine-made index for technical literature: an experiment. IBM Journal of Research and Development, 2(4), 354-361.
[29] Bangla stop word list. Retrieved from https://github.com/stopwords-iso/stopwords-bn [Online; accessed 10-August-2017].
[30] Value normalization. Retrieved from https://en.wikipedia.org/wiki/Normalization_(statistics) [Online; accessed 12-November-2017].
[31] Bangla newspaper list. Retrieved from http://www.24livenewspaper.com/bangla-newspaper [Online; accessed 5-March-2018].
[32] Haque, M. M., Pervin, S., & Begum, Z. (2015, December). Automatic Bengali news documents summarization by introducing sentence frequency and clustering. In Computer and Information Technology (ICCIT), 2015 18th International Conference on (pp. 156-160). IEEE.
[33] Oliveira, Hilário, et al. (2016). Assessing shallow sentence scoring techniques and combinations for single and multi-document summarization. Expert Systems with Applications, 65, 68-86.

Authors' Profiles

Partha Protim Ghosh completed his BSSE and MSSE degrees in Software Engineering at the Institute of Information Technology, University of Dhaka. His research interests include NLP, text summarization, and software engineering.

Rezvi Shahariar completed both his B.Sc. and M.Sc. degrees in CSE from the University of Dhaka. He is now serving as an Assistant Professor at the Institute of Information Technology, University of Dhaka. His research interests include machine learning, data science, NLP, ad hoc networking, and security.

Muhammad Asif Hossain Khan
has completed both his B.Sc. and M.Sc. degrees in CSE from the University of Dhaka. He also completed his PhD at the University of Tokyo, Japan. Currently, he is working as an Associate Professor in the Department of Computer Science and Engineering, University of Dhaka. His research interests include NLP, information retrieval, image processing, and machine learning.

How to cite this paper: Partha Protim Ghosh, Rezvi Shahariar, Muhammad Asif Hossain Khan, "A Rule Based Extractive Text Summarization Technique for Bangla News Documents", International Journal of Modern Education and Computer Science (IJMECS), Vol. 10, No. 12, pp. 44-53, 2018. DOI: 10.5815/ijmecs.2018.12.06
Automatic Bangla Text Summarization Using Term Frequency and Semantic Similarity Approach

2018 21st International Conference on Computer and Information Technology (ICCIT), 21-23 December 2018. 978-1-5386-9242-4/18/$31.00 ©2018 IEEE

Avik Sarkar, Md. Sharif Hossen
Department of Information and Communication Technology, Comilla University, Comilla-3506, Bangladesh
ssavi.ict@gmail.com, mshossen@cou.ac.bd

Abstract— With the increasing amount of data in the cloud, it is getting harder to find what one expects, which leads to the idea of text summarization. Automatic text summarization is a tool for condensing textual data into a short and concise piece of information from which people can grasp the content. Several approaches have been introduced, but little work has been done on Bangla text summarization because of the distinct and multifaceted structure of the Bangla language. This paper illustrates the implementation of term frequency and semantic sentence similarity based approaches to summarize a single Bangla document. Stopword removal, noisy-word removal, lemmatization, and tokenization are performed beforehand. Both methods return a set of top-ranked sentences to create a summary; the rank of a sentence is determined by term frequency in the first approach and by sentence similarity in the second. The experimental results show favorable outcomes for both approaches, and further improvements of these approaches will certainly yield even better results.

Keywords—text mining, text summarization, python, nltk, wordnet, brown corpus, sentence similarity, word order similarity

I. INTRODUCTION

Data mining has been a buzzword for several decades. It is the approach of finding new information by examining large pre-existing databases [1].
Text mining, or text data mining, is the process of extracting previously unknown, potentially useful patterns or knowledge from massive collections of unstructured text data or corpora [2]. Text summarization is one of the challenging and important problems of text mining: it condenses single or multi-document texts into a short, concise piece of information that contains the gist of the whole content and gives a fair idea of it. Processing documents remains a laborious task, mostly due to the lack of standards [3]. Automatic text summarization is the task of generating a concise and fluent summary while preserving the main contents and overall theme [4]. Natural language processing is nowadays one of the most appealing research areas for data scientists and analysts, and researchers are working hard to advance NLP research in their native languages; in Bangladesh, researchers are likewise working on text mining for the Bangla language. In this paper, we discuss two widely adopted extractive text summarization approaches for summarizing a single-document Bangla text.

A. Classification of Text Summarization

Automatic text summarization is the process of generating summaries with the help of a carefully written computer program. Text summarizing has three main steps: topic identification, interpretation, and summary generation [6]. Automatic text summarization
can be classified based on several criteria. The dimensions of text summarization are generally categorized by input type, purpose, and output type [5]; a further classification is based on the use of external resources. Fig. 1 shows the summarization types under these criteria.

Fig. 1. Types of summarization

According to the number of documents, summarization is of two types: single-document and multi-document. According to purpose, it is of three types: generic, domain-specific, and query-oriented. According to the use of external resources, it is of two types: knowledge-rich and knowledge-poor. According to the type of output, it is of two types: abstraction-based and extraction-based. This paper deals with single-document, extraction-based summarization. Before going into detail, we discuss abstraction-based and extraction-based summarization in a nutshell. Abstraction-based summarization produces an abstract of the original text through an interpretation procedure and generates a summary that expresses the same content more concisely. Extractive summaries are produced by identifying important sentences selected directly from the document; in most such systems (Aliguliyev, 2009; Ko and Seo, 2008), the selected sentences are coherently combined and compressed to exclude unimportant sections (Ganesan et al., 2010; Khan et al., 2015) [5]. In this approach, sentences are given scores based on different criteria, and the sentences with the relatively highest ratings are picked for the summary. Several natural language processing (NLP) techniques are used to retrieve the information.

B.
Usual Procedure of Text Summarization

The basic text summarization pipeline usually includes the following steps:

Step 1: Tokenize words or sentences, remove stopwords, and perform stemming, lemmatization, word frequency calculation, etc.

Step 2: Perform word scoring, sentence scoring, graph construction, semantic similarity calculation between sentences, etc. In this work, these calculations are done with the help of the Brown corpus and WordNet.

Step 3: Decide which sentences should be picked and how to order them to produce a well-understandable summary.

C. Usual Procedure for Extractive Summarization

Extractive summarization is a procedure in which the system returns the most relevant sentences as a summary, without redundant sentences. Sentences are chosen subject to the compression rate, defined as the ratio between the length of the summary and the length of the source text; a compression rate of about 5-30% is acceptable for a summary. Extractive techniques include the term frequency method, cluster-based methods, sentence similarity-based methods, fuzzy logic based methods, neural network-based methods, metaheuristic search approaches, query-based approaches, topic-driven maximal marginal methods, concept-based methods, feature-weight-based regression analysis, centroid-based summarization, etc. The paper is organized as follows: Section II discusses related works. Section III presents the methodology used in this research. Experimental analysis
<s>and discussion are presented in Section IV. Finally, Section V includes the conclusion and future plans.

II. RELATED WORKS

In the last several years, a significant number of extractive algorithms based on several features have been developed. R. Mihalcea and P. Tarau proposed a sentence ranking based approach in [7], in which the rank of a sentence is decided by a similarity function. The approach of [7] was improved by Barrios, Lopez, Argerich, and Wachenchauzer in [8]: they improved the sentence similarity calculation in three different ways and proposed three corresponding techniques, namely Longest Common Substring, Cosine TF-IDF, and BM25 / BM25+. A cue-based hub-authority text summarization approach proposed by J. Zhang in [9] is used for multiple documents; it uses k-nearest neighbors to detect sub-topics. J. Steinberger and K. Jezek proposed a new evaluation measure in [10], which is based on latent semantic analysis (LSA) and can capture the main topic of a document. Another technique, proposed by Y. Ouyang in [11], is based on a hierarchical representation. To identify the subsumption of words, he calculated pointwise mutual information (PMI) and then used higher PMI values to determine whether words are correlated or not; it was a multi-document summarization technique. A query-based extractive summarization technique proposed by T. J. Siddiki and V. K. Gupta in [12] is based on a sentence clustering method, with semantic and syntactic similarity as the core idea for forming the clusters. R. Nallapati, F. Zhai, and B. Zhou proposed a recurrent neural network based sequence model for extractive summarization in [13]. The idea was to select sentences by maximizing their ROUGE score with respect to gold summaries. They also proposed a novel training technique that trains the system in an abstractive way, eliminating the need to generate approximate extractive labels.
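Among the works above, the graph-based family of [7], [8] is the easiest to illustrate concretely. Below is a minimal sketch of TextRank-style sentence scoring, assuming the word-overlap similarity measure and a plain power iteration for weighted PageRank; the function names are ours, not those of any published implementation:

```python
import math

def similarity(s1, s2):
    # Word-overlap similarity in the style of TextRank: shared words,
    # normalized by the log lengths of the two sentences.
    w1, w2 = set(s1.split()), set(s2.split())
    if len(w1) < 2 or len(w2) < 2:
        return 0.0
    return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))

def textrank(sentences, d=0.85, iterations=50):
    # Weighted PageRank over the fully connected sentence graph,
    # computed by plain power iteration.
    n = len(sentences)
    w = [[similarity(a, b) for b in sentences] for a in sentences]
    scores = [1.0] * n
    for _ in range(iterations):
        new_scores = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if i == j or w[j][i] == 0.0:
                    continue
                out_weight = sum(w[j][k] for k in range(n) if k != j)
                if out_weight:
                    rank += w[j][i] / out_weight * scores[j]
            new_scores.append((1 - d) + d * rank)
        scores = new_scores
    return scores  # higher score = more central sentence
```

Sentences sharing many words reinforce each other's scores, while an isolated sentence settles at the damping floor (1 - d).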
A Bayesian summarization model proposed by Daumé and Marcu in [14] was a query-focused summarization technique. A Bayesian sentence-based topic model was proposed by Wang et al. [15] based on term-document and term-sentence associations. In [16], Celikyilmaz et al. proposed a two-phase hybrid model. First, a hierarchical topic model discovers the topic structure of all the sentences, and the similarity between sentences and human-provided summaries is computed using a novel tree-based scoring technique. Using these scores, in the second stage they train a regression model on the lexical and structural characteristics of the sentences and use the model to form a new summarized document. Gong and Liu in [17] proposed a technique that ranks sentences highly based on a semantic calculation involving the appearance of words in a sentence. They transformed the document into a matrix in which the columns correspond to words and the rows to sentences; the scoring was then done by the TF-IDF method. A feature-based sentence scoring approach was proposed by K. Meena and D. Gopalani in [19]. For calculating the score of a sentence, they used a function named</s>
<s>fitness function. The fitness function works with about 21 different features; each feature has a score, and the sum of those feature scores multiplied by a constant gives the sentence score. A hybrid function was introduced by AL-Khassawneh, Salim, and Jarrah in [6] as an improved version of the triangle-graph based text summarization approach. This hybrid function combines four different similarity measures to find the similarity between sentences when creating the graph. Several features are involved in that approach, such as title words, sentence length, sentence position, numerical data, thematic words, and sentence-to-sentence similarity. An approach to calculating sentence scores proposed by Ramanujam and Kaliappan in [20] uses a Naïve Bayesian classifier with a timestamp strategy. The timestamp approach is used to achieve a coherent-looking summary that extracts the more relevant information from multiple documents; the scoring strategy is based on word frequency, readability, and comprehensibility. For the Bengali language, K. Sarkar proposed a technique in [21] based on sentence ranking using the TF-IDF of thematic terms, positional values, and sentence length. Combining all the features, a sentence scoring equation is formed and the k highest-scored sentences are selected for the summary. Another approach based on TF-IDF and the k-means clustering algorithm for sentence selection was proposed in [22] for Bangla text summarization: TF-IDF is applied for word scoring, and the k-means clustering algorithm is used to select the k highest-scored sentences for the summary. A new semantic similarity measure proposed by Sinha, Jana, Dasgupta, and Basu in [23] is based on the hierarchical organization of words. The similarity is complete if the words are in the same category.
If they are in different categories, then the distance between the categories gives the similarity of the words; based on this concept they proposed a lexicon. Besides these approaches for the Bangla language, there are other similar TF-IDF-based approaches, proposed by M. A. Uddin [24] and M. Ibrahim [25].

III. WORKING METHODOLOGY

Here, we use two different techniques to summarize a Bangla text document. The first is the term frequency approach and the other is the semantic similarity-based approach. The idea for the first is taken from the concepts illustrated in [10]. The concept for the semantic similarity approach is taken from [18], which is basically a graph-based data model. For each approach, some pre-processing must be completed first: tokenization, stopword removal, and lemmatization. (a) Tokenization: There are basically two types of tokenization, word tokenization and sentence tokenization. In the Bangla language, the period (.) is denoted by the দাঁড়ি symbol (।). Taking this issue into account, a self-implemented tokenizer is designed to tokenize both words and sentences. (b) Removing Stopwords: Stopwords are frequent words like আজ, কাল, অথবা, এবং, ন বা, অ থায় etc., which actually do not have much effect</s>
<s>on the meaning of a sentence. A self-implemented stopword list is used for this purpose; it is not yet complete, and the contribution process is still ongoing. (c) Lemmatization: Several word forms, such as শহর, শহের, শহেরর, শহরতলী, শহর েলা, শহরস হ etc., basically denote the same meaning (শহর) but appear in different forms through the inclusion of suffixes, much like ‘s’ or ‘es’ in English. To remove this redundancy, the conversion of the various forms into the base word is also needed. (d) Removing Duplicate Sentences: Duplicate lines are not expected in a summary, so duplicate lines are removed.

A. Term Frequency Based Summarization

In this approach, word frequencies are calculated after completing the pre-processing steps. The frequency map is then filtered; that is, we ignore terms with very high or very low frequency. By ignoring those terms, we remove the noisy terms from the sentences. Noisy words are terms that appear very frequently or only a few times in the content but do not carry much information. By setting an appropriate frequency range and keeping only the terms within it, the system considers only those terms that are relevant to the content. The sentences are then ranked according to the frequencies of the terms they contain, and the top K sentences are selected for the final summary. The algorithm of the whole procedure is given as follows:

TermFreqSum(input text T)
1. Tokenize sentences in T and save to S
2. Remove stopwords from the sentences
3. For each sentence in S
4.   Count the frequency of each word W in the sentence
5. For each word Wi
6.   M = Maximum(freq[Wi])
7.   freq[Wi] = freq[Wi] / M
8.   If (freq[Wi] >= max_cut or freq[Wi] <= min_cut)
9.     Ignore word Wi
10. For each ith sentence in S
11.   For each word Wi in the sentence
12.     If Wi in freq
13.       rank[i] += freq[Wi]
14. Top K sentences by rank are selected for the final summary

In line 8, two terms, i.e., max_cut and min_cut, are introduced.
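A runnable sketch of TermFreqSum in Python could look as follows. It assumes a dari-based sentence tokenizer and an illustrative subset of the stopword list; the helper names are ours, not those of a published implementation:

```python
from collections import Counter

def sent_tokenize(text):
    # The Bangla full stop (dari) is U+0964 "।"; split sentences on it.
    return [s.strip() for s in text.split("\u0964") if s.strip()]

# Illustrative subset of the self-implemented stopword list.
STOPWORDS = {"আজ", "কাল", "অথবা", "এবং"}

def term_freq_summary(text, k=2, min_cut=0.1, max_cut=0.9):
    sentences = sent_tokenize(text)
    words = [w for s in sentences for w in s.split() if w not in STOPWORDS]
    freq = Counter(words)
    m = max(freq.values())
    # Normalize by the maximum frequency and drop noisy terms whose
    # normalized frequency falls outside (min_cut, max_cut) -- lines 8-9.
    freq = {w: f / m for w, f in freq.items() if min_cut < f / m < max_cut}
    # Rank each sentence by the sum of its surviving term frequencies.
    rank = {i: sum(freq.get(w, 0.0) for w in s.split())
            for i, s in enumerate(sentences)}
    top = sorted(rank, key=rank.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # original order
```

Sentences made up entirely of over- or under-frequent terms receive a rank of zero and never reach the summary, matching the filtering intent of the pseudocode.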
These variables hold two cutoff values, which can be set by observing the range within which the summarizer produces a decent result. According to our observations, we use min_cut = 0.1 and max_cut = 0.9. With this approach, single-document text summarization is possible within a very short time: since the task consists only of mapping the terms, counting them, and prioritizing them with a few additional calculations, the execution time is very low. Due to the max and min cut ranges, some information may be lost, but while working with some randomly chosen test data, we have seen that it returns a decent summary. Redundant lines are also eliminated from the summary. A similar approach is also applicable to multi-document summarization.

B. Semantic Similarity Based Summarization

In this approach, after the pre-processing steps, each sentence is considered a single node of a weighted graph. Graph nodes are connected with one another, where the weight of</s>
<s>each edge is the similarity between two nodes, i.e., two sentences. To calculate this similarity, we consider two measures, namely semantic similarity and word order similarity. The combination of these two similarities is the weight of the edge between any two sentences, i.e., the similarity between the two sentences. The similarity between a pair of words is calculated from two functions, f(l) and f(h), where l is the length of the shortest path between the two words in the WordNet database and h is the height of their lowest common subsumer (LCS). WordNet is a database in which words are stored under several categories; synonyms of a word are placed in the same category so that they yield complete similarity. The functions f(l) and f(h) normalize their values within 0 and 1. The equations are given below:

s̃ = f(l) · f(h)                                          (1)

where

f(l) = e^(−α·l)                                           (2)

f(h) = (e^(β·h) − e^(−β·h)) / (e^(β·h) + e^(−β·h))        (3)

Here, α ∈ [0, 1] and β ∈ (0, 1]. Based on these calculations, while choosing similar words, the algorithm picks the most similar one. The sentence similarity is the combination of the semantic similarity and the word order similarity. The semantic similarity is calculated as the cosine similarity between the semantic vectors of the two sentences. The semantic vector is built over the vocabulary formed by the union of the words of the two sentences. If a word occurs in both sentences, its semantic entry is 1; otherwise, it is compared against all the words of the other sentence, and if the best similarity reaches a threshold φ, the entry takes that value, else it is 0. The entries can further be attenuated by the information content computed from the Brown corpus. The equations are given below:

S_s = (s1 · s2) / (‖s1‖ · ‖s2‖)                           (4)

where each entry of the semantic vector is

s_i = s̃ · I(w_i) · I(w̃_i)                                (5)

I(w) = 1 − log(n + 1) / log(N + 1)                        (6)

n = number of times the word w occurs in the corpus
N = total number of words in the corpus

Word order similarity is computed by forming a word order vector for each sentence over the same vocabulary and computing a normalized difference. If a word appears in both sentences, its position is recorded; otherwise, the position of the most similar word in the sentence is recorded, provided the similarity crosses the threshold η, else the entry is 0. The equation is given below:

S_r = 1 − ‖r1 − r2‖ / ‖r1 + r2‖                           (7)

where r1 and r2 are the word position vectors of sentence 1 and sentence 2.</s><s>So, the final equation for the sentence similarity approach is as follows:

S(T1, T2) = δ·S_s + (1 − δ)·S_r
          = δ (s1 · s2)/(‖s1‖·‖s2‖) + (1 − δ)(1 − ‖r1 − r2‖/‖r1 + r2‖)   (8)

The algorithm of the semantic similarity based approach is given as follows:

SemSimSum(Sentence s1, Sentence s2)
1. Set α = 0.2, β = 0.45, η = 0.4, φ = 0.2, δ = 0.85
2. Make a word vector by taking the union of s1 and s2
3. Calculate f(l) using Equation (2)
4. Calculate f(h) using Equation (3)
5. Calculate the word similarity between s1 and s2 using Equation (1)
6. Calculate the semantic similarity using Equation (4)
7. Calculate the word order similarity using Equation (7)
8. Calculate the sentence similarity using Equation (8)
9. Finally, the top ranked k sentences are selected for the summary

As we previously removed the duplicate sentences, there is no chance of any redundant sentence appearing in the summary. This approach mostly provides the gist of the content and gives a more accurate result than the TF approach discussed above.

IV. EXPERIMENTAL RESULT AND ANALYSIS

To test the approaches described above, we have used some randomly collected test data. The dataset is collected from Facebook posts, online news content, and custom written texts from books. To compare these approaches with human judgment, we took help from some random users to write summaries.
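Before turning to the samples, the combined measure of Eq. (8) can be sketched in Python. This is a simplified version: it substitutes exact word match for the WordNet-based word similarity of Eqs. (1)-(3) and omits the corpus-based information content of Eqs. (5)-(6), since no Bangla WordNet or Brown-corpus statistics were available; all names are illustrative:

```python
import math

def word_sim(w1, w2):
    # Stand-in for the WordNet-based f(l)*f(h) of Eq. (1):
    # with no Bangla WordNet, only identical words count as similar.
    return 1.0 if w1 == w2 else 0.0

def semantic_vector(words, vocab, phi=0.2):
    # Entry is the best word similarity against the sentence,
    # kept only if it exceeds the threshold phi.
    vec = []
    for v in vocab:
        best = max(word_sim(v, w) for w in words)
        vec.append(best if best > phi else 0.0)
    return vec

def order_vector(words, vocab):
    # 1-based position of each vocabulary word in the sentence, 0 if absent
    # (the eta threshold of the full method is moot with exact matching).
    return [words.index(v) + 1 if v in words else 0 for v in vocab]

def sentence_similarity(t1, t2, delta=0.85):
    s1, s2 = t1.split(), t2.split()
    vocab = sorted(set(s1) | set(s2))
    # Semantic part: cosine between semantic vectors, Eq. (4).
    v1, v2 = semantic_vector(s1, vocab), semantic_vector(s2, vocab)
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    ss = dot / norm if norm else 0.0
    # Word order part: normalized difference of position vectors, Eq. (7).
    r1, r2 = order_vector(s1, vocab), order_vector(s2, vocab)
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    summ = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    sr = 1 - diff / summ if summ else 0.0
    # Weighted combination, Eq. (8).
    return delta * ss + (1 - delta) * sr
```

Identical sentences score 1, disjoint sentences score 0, and reordered sentences are penalized only through the (1 - δ) word order term.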
Given four samples with the input of different sizes and their corresponding output – Sample 1: Size = 1.28 KB Input: মা ষ ি র াণী । জগেতর অ া ািণর সিহত মা েষর অয়াথক - মা ষ িবেবক ও ি র অিধকারী । এই িবেবক, ি ও ান নাই বিলয়া আর সকল ানী মা ষ অেপ া িন । ান ও ম ে র উতকষ সাধর কিরয়া মা ষ জগেতর েক অ য়কীিত াপন কিরয়ােছ, জগেতর ক াণ সাধন কিরেতেছ, প বল ও অথবল মা ষেক বড় বা মহৎ কিরেত পাের না । মা ষ বড় হয় ান ও ম ে র িবকােশ । ান ও ম ে র ত িবকােশ জািতর জীবন উ ত । ত মা ষই জা য় জীবেনর িত া ও উ য়ন আনয়েন স ম । Term Frequency Output: ান ও ম ে র উতকষ সাধর কিরয়া মা ষ জগেতর েক অ য়কীিত াপন কিরয়ােছ , জগেতর ক াণ সাধন কিরেতেছ , প বল ও অথবল মা ষেক বড় বা মহৎ কিরেত পাের না । Semantic Sentence Similarity Output: মা ষ বড় হয় ান ও ম ে র িবকােশ । A Random User Output: ি র ানী িহেসেব ান ও ম ে র েন মা ষ জগেত য অমরকীিত গেড় েলেছ প বল ও অথবল িদেয় তা কখেনা স ব নয় । Using the input as 1.28 KB, the term frequency takes 0.058 seconds to produce the output while graph theoretic needs 10 seconds. Sample 2: Size = 1.81 KB Input: ভাত বাঙািলর ব কােলর ি য় খা । স সাদা চােলর গরম ভােতর কদর সবচাইেত বিশ িছল বেল মেন হয় । েরােনা সািহেত</s>
<s>ভােলা খাবােরর ন না িহেসেব য-তািলকা দওয়া হেয়েছ, তা হেলা কলার পাতায় গরম ভাত, গাওয়া িঘ, নািলতা শাক, মৗরালা মাছ আর খািনকটা ধ । লাউ, ব ন ইত ািদ তরকাির র খত সকােলর বাঙািলরা, িক ডাল তখেনা বাধহয় খেত কেরিন । মাছ তা ি য় ব ই িছল । িবেশষ কের ইিলশ মাছ । ঁ টিকর চল সকােলও িছল িবেশষ কের দি ণা েল । ছাগেলর মাংস সবাই খত । হিরেণর মাংস িবেয়বািড়েত বা এরকম উৎসেব দখা যত । পািখর মাংসও তা-ই । সমােজর িক লাক শা ক খত । ীর, দই, পােয়স, ছানা-এসব িছল বাঙািলর িনত ি য় । আম-কাঁঠাল, তাল-নারেকল িছল ি য় ফল । ব চল িছল না , িপেঠ িল, বাতাসা, কদমা-এসেবর। মসলা- দওয়া পান পান খেত সকেল ভালবাসত । Term Frequency Output: ব চল িছল না , িপেঠ িল, বাতাসা, কদমা-এসেবর। মসলা- দওয়া পান পান খেত সকেল ভালবাসত । মাছ তা ি য় ব ই িছল । ভাত বাঙািলর ব কােলর ি য় খা । েরােনা সািহেত ভােলা খাবােরর ন না িহেসেব য-তািলকা দওয়া হেয়েছ, তা হেলা কলার পাতায় গরম ভাত, গাওয়া িঘ, নািলতা শাক, মৗরালা মাছ আর খািনকটা ধ । Semantic Sentence Similarity Output: মাছ তা ি য় ব ই িছল । আম-কাঁঠাল, তাল-নারেকল িছল ি য় ফল । িবেশষ কের ইিলশ মাছ । ঁ টিকর চল সকােলও িছল িবেশষ কের দি ণা েল । A Random User Output: বাঙািল জািতর জীবনযা ার পিরচেয়র মে খ াভ াস অ তম। াচীনকাল থেক এেদেশর মা ষ িবিচ ধরেনর সাধারণ খাবার খত। উৎসব বা িবেয়েত হিরেণর মাংস পিরেবশন করা হেতা। সমােজর সকল েরর ও অ েলর খা াভ াস ায় একই ধরেনর িছল। Using the input as 1.81 KB, the term frequency takes 0.058 seconds to produce the output while graph theoretic needs 13 seconds. 
Sample 3: Size = 3.65 KB Input: িশ া বা ান অজন হেলা সাধনার াপার তেব এই সাধনার সাধক হেত হেব িশে র িনেজেকই একজেনর সাধনা কখনও অ কউ কের িদেত পাের না যার সাধনা তােকই সাধন করেত হয় অ থায় সাধনার ফলাফল কখনই আশা প হয় না আমােদর অেনেকর মে ই এক িবেশষ বণতা ল করা যায়, তা হেলা বা িশ েকর উপর স ণ ভরসা কের বেস থাকা আমােদর এই বণতার কারেণই আমােদর িশ া শতভাগ পির ণ হয় না িকংবা িশ ক িনঃসে েহ একজন ছাে র িনকট ভরসার পা হেব এটাই াভািবক িক তার মােন এই নয় য, ই তার িশ ােক অ ের েথ দেবন অ ের েথ দওয়ার দািয় র নয় েথ নওয়ার দািয় িশে র বড়েজার পথ দিখেয় িদেত পােরন মা িশ েক বেল িদেত পােরন কান পথ তার জ উ ম, কান পেথ, িকভােব হেট গেল স তার কাি ত ব র দখা পেত পাের িক এরপেরর সব দািয় ই িশে র র দখােনা পেথ, র িনেদিশত প ায় হেট যেত হেব িশে র িনেজেকই স কভােব স পথ পািড় িদেয় কাি ত ব অজন কের আনা িশে রই দািয় আমরা ায়ই ছাে র খারাপ ফলাফেলর জ ই িশ কেকই দাষােরাপ কির িক খারাপ ফলাফেলর জ কখনও িশ ক দায়ী নয়, বরং ছা</s>
<s>রাই দায়ী িক িশ েকর দখােনা পেথ ছা যিদ হাঁটেত না পাের স অেযা তা মা ছাে র িশ পথ হেল স র ভার িশ েকই স ভার র উপর চািপেয় িদেল তা কখনও িবচার হয় নাম ল কামনা কেরন এবং ক ােণর পথই দিখেয় থােকন িকসাধেন যিদ িশে র সাধনায় থােক, তেব তা একা ই িশে র

Term Frequency Output: র দখােনা পেথ, প ায় হেট যেত হেব িশে র িনেজেকই মা িশ েকপােরন কান পথ তার জ উ ম, কান পেথ, িকভােব হেটকাি ত ব র দখা পেত পাের

Semantic Sentence Similarity Output: দিখেয় িদেত পােরন মা িশ েক বেল িদেত পােরনজ উ ম, কান পেথ, িকভােব হেট গেল স তার কাি ত পাের

A Random User Output: িব ার সাধনা িশ েককরেত হয়, উ রসাধক মা কবল উ ম পথ দশসপেথ সাধেন কের িসি লাভ করেত িশ েকই

Using the input of 3.65 KB, the term frequency approach takes 0.066 seconds to produce the output, while the graph theoretic approach needs 22 seconds.

Sample 4: Size = 10.9 KB
We have used another sample with an input size of 10.9 KB, where the frequency and graph-theoretic approaches take 0.455 and 100 seconds respectively to produce their outputs. Due to the page limitation, we cannot include it here; we have uploaded it on the Internet [26]. Figure 2 shows the runtime (measured in seconds) with respect to the text size (measured in KB).

Fig. 2. Runtime vs. text size
Applying the term frequency method to this text document results in a list of noisy words like অজন, হেট, etc. Based on the max_cut and min_cut ranges, this method removes the most frequent and the least frequent terms in order to retrieve the best possible summary. In the semantic similarity approach, unfortunately, we could not manage any valid Bangla WordNet to classify the distances perfectly; the synonyms are therefore considered different words. The distances also cannot be measured perfectly due to the lack of a WordNet. The Brown corpus does not contain any Bangla resources, and hence the calculations are incomplete. Improvement on all of these issues will certainly provide a far better result than the current one. Besides, these approaches can be improved by extracting several other features like header terms, cue</s>
<s>words, title similarity, thematic features, named entity, etc. Based on the execution time, the term frequency approach is considerably faster (shown in Figure 2), but for retrieving a better quality summary, the semantic similarity approach is much better.</s>

<s>V. CONCLUSION AND FUTURE WORKS

In this paper, to summarize a single-document Bangla text, two approaches, namely term frequency and semantic sentence similarity-based approaches, are implemented. The first is based on the calculation of the frequency of terms in the content, and the other is based on the semantic and word order similarity between sentences. Both approaches are extraction based and return a set of most relevant sentences, from which a certain number of sentences are selected to produce a summary. Similarity measurement is a great issue in the text summarization problem. So, improving the quality of similarity measurement by adding several features, e.g., enriching a WordNet for the Bangla language and contributing Bangla text resources to the Brown corpus, could certainly be a fruitful choice. From the analysis results, we see that the frequency summarizer is considerably faster, but for retrieving a better quality summary, the semantic similarity approach is much better. As future work to improve the quality of the sentence similarity approach, we would like to continue our study to develop a category-based Bangla WordNet, where words will be arranged under different categories; base words and synonyms will also be arranged within the same category.

REFERENCES
[1] Verloren, "Wikipedia Data Mining Article," 2002.
[2] Yu Zhang, Mengdong Chen, and Lianzhong Liu, "A Review on Text Mining," IEEE International Conference on Software Engineering and Service Science, Beijing, China, 2015.
[3] Juan-Manuel Torres-Moreno, "Automatic Text Summarization (Cognitive Science and Knowledge Management)," 1st Edition, Wiley-ISTE, 2014.
[4] Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, and Krys Kochut, "Text Summarization Techniques: A Brief Survey," arXiv, USA, 2017.
[5] Yogan Jaya Kumar, Ong Sing Goh, Halizah Basiron, Ngo Hea Choon, and Puspalata C Suppiah, "A Review on Automatic Text Summarization Approaches," Journal of Computer Science, vol. 12, Iss. 4, pp. 178-190, 2016.
[6] Yazan Alaya AL-Khassawneh, Naomie Salim, and Mutasem Jarrah, "Improving Triangle-Graph Based Text Summarization using Hybrid Similarity Function," Indian Journal of Science & Technology, vol. 10, Iss. 8, 2017.
[7] R. Mihalcea and P. Tarau, "TextRank: Bringing Order into Texts," Proceedings of EMNLP, Association for Computational Linguistics, Barcelona, Spain, pp. 404-411, 2004.
[8] F. Barrios, F. Lopez, L. Argerich, and R. Wachenchauzer, "Variations of the Similarity Function of TextRank for Automated Summarization," Argentine Symposium on Artificial Intelligence, pp. 65-72, 2015.
[9] J. Zhang, L. Sun, and Q. Zhou, "Cue-based Hub-Authority Approach for Multi-document Text Summarization," IEEE International Conference on Natural Language Processing and Knowledge Engineering, pp. 642-645, China, 2005.
[10] Josef Steinberger and Karel Jezek, "Evaluation Measures for Text Summarization," Computing and Informatics, vol. 28, pp. 1001-1026, 2009.
[11] Y. Ouyang, W. Li, and Q. Lu, "An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation," Proceedings of the ACL-IJCNLP, pp. 113-116, China, 2009.
[12] T. J. Siddiki and V. K. Gupta, "Multi-document Summarization using Sentence Clustering," IEEE Proceedings of International Conference on Intelligent Human Computer Interaction, India, 2012.
[13] Ramesh Nallapati, Feifei Zhai, and Bowen Zhou, "SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents," Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[14] Hal Daumé III and Daniel Marcu, "Bayesian Query-focused Summarization," Proceedings of the International Conference on Computational Linguistics, Association for Computational Linguistics, pp. 305-312, 2006.
[15] Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong, "Multi-Document Summarization Using Sentence-Based Topic Models," Proceedings of the ACL-IJCNLP, Association for Computational Linguistics, pp. 297-300.
[16] Asli Celikyilmaz and Dilek Hakkani-Tur, "A Hybrid Hierarchical Model for Multi-Document Summarization," Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, pp. 815-824, 2010.
[17] Yihong Gong and Xin Liu, "Generic Text Summarization Using Relevance Measure and Latent Semantic Analysis," Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, New Orleans, Louisiana, USA, pp. 19-25, 2001.</s>
<s>[18] Yuhua Li, David McLean, Zuhair A. Bandar, James D. O'Shea, and Keeley Crockett, "Sentence Similarity Based on Semantic Nets and Corpus Statistics," IEEE Transactions on Knowledge and Data Engineering, vol. 18, Iss. 8, pp. 1138-1150, 2006.
[19] Yogesh Kumar Meena and Dinesh Gopalani, "Evolutionary Algorithms for Extractive Automatic Text Summarization," Procedia Computer Science, International Conference on Intelligent Computing, Communication & Convergence, Interscience Institute of Management and Technology, Bhubaneswar, Odisha, India, pp. 244-249, 2015.
[20] Nedunchelian Ramanujam and Manivannan Kaliappan, "An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy," The Scientific World Journal, Hindawi Publishing Corporation, 2016.
[21] Kamal Sarkar, "Bengali Text Summarization by Sentence Extraction," Proceedings of International Conference on Business and Information Management, NIT Durgapur, pp. 233-245, 2012.
[22] Sumya Akter, Arsy Siddika Asa, Md. Palash Uddin, Md. Delowar Hossain, Shikhor Kumer Roy, and Masud Ibn Afjal, "An Extractive Text Summarization Technique for Bengali Document(s) using K-means Clustering Algorithm," IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Dhaka, Bangladesh, 2017.
[23] Manjira Sinha, Abhik Jana, Tirthankar Dasgupta, and Anupam Basu, "A New Semantic Lexicon and Similarity Measure in Bangla," Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon, Mumbai, India.
[24] M. A. Uddin, K. Z. Sultana, and M. A. Alom, "A Multi-Document Text Summarization for Bengali Text," IEEE International Forum on Strategic Technology (IFOST), Bangladesh, 2014.
[25] M. I. A. Efat, M. Ibrahim, and H. Kayesh, "Automated Bangla Text Summarization by Sentence Scoring and Ranking," IEEE International Conference on Informatics, Electronics & Vision (ICIEV), Bangladesh, 2013.
[26] https://github.com/sharifhossen/Bangla-Text-Summarization-using-Graph-Theoretic-and-Frequcny-Summarizer, 2018.</s>
<s>A Heuristic Approach of Text Summarization for Bengali Documentation
Conference Paper · July 2017
DOI: 10.1109/ICCCNT.2017.8204166
A Heuristic Approach of Text Summarization for Bengali Documentation

Sheikh Abujar, Dept. of CSE, Jahangirnagar University, Savar, Dhaka, Bangladesh, sheikhabujar@gmail.com; Mahmudul Hasan, Dept. of CSE, Comilla University, Comilla, Bangladesh, mhasanraju@gmail.com; M.S.I. Shahin, Dept. of CSE, Jahangirnagar University, Savar, Dhaka, Bangladesh, msi.shahin71@gmail.com; Syed Akhter Hossain, Dept. of CSE, Daffodil International University, Dhanmondi, Dhaka, Bangladesh, aktarhosaain@daffodilvarsity.edu.bd

Abstract—Automated text summarization is the technique of summarizing a document or text automatically; the summarized text is a concise form of the original.
In natural language processing, many text summarization techniques are available for English, but only a few for Bangla, even though Bangla is among the most widely taught and used languages in the world. Most text summarization techniques follow one of two approaches, abstractive or extractive. This paper deals with the summarization of Bangla text using the extractive method and proposes a new, efficient extractive summarization technique. Existing summarization tools for Bangla are of limited practical use, whereas the analysis models proposed here are directly applicable to Bangla text. The proposed approach combines basic extractive summarization with a new model and a set of heuristic rules for Bangla text analysis. Every Bangla sentence and word in the original text is analyzed with a Bangla sentence clustering method, and a new kind of sentence scoring process for Bangla text summarization is proposed. In evaluation, the system shows good accuracy compared with human-generated summaries and with other Bangla text summarization tools.

Keywords—Bangla text summarization, sentence scoring, sentence analysis, language processing, NLP.

1. INTRODUCTION

A massive increase of information is part of our lives today. Information on the Internet and offline is growing rapidly, far outpacing the amount of printed data. Electronic sources such as news portals, blogs, and e-books are very difficult to summarize because the volume of information is so large. Finding the essence of this information is painstaking, and it is not feasible to manually sift useful information out of such large amounts of data coming from arbitrary document sources. The practical way to summarize such data is through an automated text summarization process.
Such a process organizes all the data and presents the essential information clearly, applies a consistent summarization procedure everywhere, and, more importantly, saves both time and effort. Text summarization is the process of automatically preparing a concise statement of a given text. English text summarization has seen revolutionary research output, and current English summarizers work accurately. For Bengali, research is not yet at that level, and as a result there is no satisfactory summarizer that can be applied to Bangla text processing, despite the available content and users. A Bengali text summarizer has therefore become indispensable and in high demand; a large number of Internet, official, and personal users would benefit from one. Summarization can be done in two different ways: abstractive and extractive. The extractive approach finds the most frequently used words and then scores sentences from different perspectives. Abstractive summarization, in contrast, interprets the content and improves coherence among sentences by eliminating redundancies [1]. An extractive summary does not add any new words or sentences to the summarized text, whereas the abstractive process may add new sentences. Abstractive summarization is therefore more difficult than the extractive approach, although it can produce more precise results. Extractive methods are widely used because they are easy to implement and, with some exceptions, work well. An extractive method follows a few steps: it first represents the whole document and splits it into paragraphs, sentences, and words; it then identifies and removes the stop words; finally, it scores the sentences and completes the summarization by selecting the highest-scoring sentences [2]. This research explains how Bengali sentences should be scored after removing stop words.
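The generic extractive pipeline just described (split the text, remove stop words, score sentences, select the top-scoring ones) can be sketched as follows. This is a minimal illustration, not the paper's final model: the stop-word list, the frequency-sum scoring rule, and the summary size are assumptions made for the demo.

```python
import re
from collections import Counter

def extractive_summary(text, stop_words, n_sentences=2):
    # 1. Split into sentences on the Bengali dari "।" (a period also works for demo text)
    sentences = [s.strip() for s in re.split(r"[।.?!]", text) if s.strip()]
    # 2. Tokenize on whitespace and remove stop words
    tokens = [[w for w in s.split() if w not in stop_words] for s in sentences]
    # 3. Score each sentence by the total corpus frequency of its remaining words
    freq = Counter(w for ws in tokens for w in ws)
    scores = [sum(freq[w] for w in ws) for ws in tokens]
    # 4. Keep the highest-scoring sentences, restored to document order
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:n_sentences])
    return [sentences[i] for i in top]

demo = ("the train derailed near the city. rescue teams reached the train quickly. "
        "weather was calm. officials said the train carried extra passengers.")
summary = extractive_summary(demo, stop_words={"the", "was", "near"})
```

Sentences sharing the frequent word "train" accumulate the highest scores, so they survive into the summary while the low-scoring sentence is dropped.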
Advanced sentence scoring methods are proposed here to score sentences precisely. Through text summarization, a document can be condensed while the basic ideas of the topic are still captured and judged for relevance. Multiple news reports can be summarized and the relationships between them discovered, and trending topics can be shown through graphical representation without analyzing each item individually. Text summarization can be performed on a single file or on a set of files from different sources.

IEEE - 40222, 8th ICCCNT 2017, July 3-5, 2017, IIT Delhi, Delhi, India

The rest of the paper is organized as follows. Section II presents the literature review, summarizing previous research on Bengali text summarization. Section III describes the proposed new extractive approach [14][15] for Bangla text with quantitative assessments. Sections IV and V present the experimental results with discussion and the conclusion, respectively.

2. LITERATURE REVIEW

Extractive text summarization is generally performed in three phases: measuring the intermediate representation of the original text (the demanding part of text analysis), scoring every sentence, and finally consolidating the high-scoring sentences to produce a good extractive summary [1]. This section describes previous work related to these phases and the state of the art. Rafael et al. explained the basic requirements of extractive approaches, covering those three features: text analysis, sentence scoring, and summarizing [1]. Fifteen (15) different sentence scoring methods were explained and assessed on news, blogs, and articles. Every word was scored in six different ways; word co-occurrences were analyzed using an n-gram-based process [8], and lexical similarities were identified. The sentence scoring methods introduced new features such as sentence centrality based on a sentence similarity algorithm [9]. Aggregate similarities between sentences were used widely in evaluating performance, possible ways of improving sentence-score results were described, and possible causes of polysemy were mentioned. Harsha et al. introduced a hybrid summarizing technique for multi-document text covering the same basic requirements. The authors proposed generating a word graph and using WordNet [4]: important nodes can be identified through the word graph, which is built with heuristic rules, while WordNet is a lexical dictionary that provides word meanings and models. Different semantic meanings help to identify word categories, and vocabulary and synsets are provided through domain ontology [10]. Iftekharul et al. summarized Bangla text by sentence scoring and ranking [5]. Necessary text preprocessing steps for Bangla were discussed separately, and a lightweight stemmer was introduced to identify the canonical forms of words. Identifying cue words in Bangla text and capturing the skeleton of the document from its title and headers improves summary quality from the perspective of Bangla text summarization; however, the performance of that summarizer [11] is not up to the mark.
Bangla text cannot be analyzed like other languages, because its grammatical rules and sentence patterns are very different; this is the reason a modified extractive approach was chosen for summarizing Bangla text. Several techniques applied here had already been implemented in other extractive text summarization models, such as sentence scoring, word frequency, and sentence position. A few new methods are introduced as well: repeated word distance; absolute deviation of sentences; frequent words percentile, calculated from the rate of frequent words carried by each sentence and their use in other sentences; and prime sentence identification, which finds the most important sentences based on all of the above analysis. This began as a hypothesis and produced very good results after implementation. Other research has stated several methods for scoring sentences and words, but an individual sentence may derive value from related words used elsewhere in the document, or from being referenced by imitating words in other sentences. Often the same words are used in different sentences in different forms; such sentences are either internally linked or similar in semantic meaning. These relationships between sentences had not previously been identified for the Bangla language.
Similar sentences may not occur sequentially or in nearby positions, so the standard deviation of sentence positions helps to identify the positional difference between sentences, from which dependent and optimal sentences can be identified. Identifying cue words (leading words) has already been implemented for other languages through several techniques; here a method suitable for the Bangla language is proposed. Several known topics are handled here in a different way for Bangla, and a few new ones are introduced as needed to obtain an optimal solution: a summarizer built for English or another language will not work for Bangla, even when the topics to cover are very similar. To overcome these limitations, a model is proposed and its output factors are discussed; overall, this model provides optimal summaries for Bangla documents.

3. PROPOSED METHOD

Bengali text summarization is an application of natural language processing, responsible for summarizing any Bengali text document and intelligently proposing a summary. The most popular approach to text summarization is the extractive approach, to which many researchers have contributed, primarily for English. The central question of this research area is: how can the sentences that contain the main gist of a given text be identified? In general, there are three approaches: (i) word scoring – evaluate every word to identify the most frequent and important words in the text; (ii) sentence scoring – identify relations among sentences and find the main leading sentences, where sentence position and redundancy detection also help to generate a precise summary; and (iii) graph scoring – analyze the relationships between words and sentences [1].
In addition, the distance between similar words and sentences helps to identify a uniform representation of sentences, and unnecessary sentences can be avoided through the proposed word and sentence processing.

Fig. 1. Steps of the proposed text summarization technique.

The proposed Bangla text summarization technique follows three phases, as shown in Figure 1: (i) preprocessing with linguistic analysis – extract sentences from the text document and tokenize them through several segmentation processes; tokenization splits sentences into words, numbers, and symbols; (ii) prime sentence identification – the main leading sentences are extracted from the original documents through word analysis and sentence analysis, since words and sentences are equally important for preparing a high-quality summary; and (iii) final processing – all prime sentences are evaluated again to increase the likelihood that accurate sentences are selected for the summarized output. Through the proposed rules and models, the final processing step generates a better-quality summary from Bangla text. Each part of the proposed model is explained in the rest of this section; the overall design is given in Figure 1.

3.1 Preprocessing with Linguistic Analysis

The document must be processed at least once before summarization. In Bengali, sentences may not be associated in any particular order; before ranking sentences or words individually, it is important to prepare them so that accurate sentence scores can be obtained. Scoring is divided into two segments, sentence scoring and word scoring; both help to identify the leading sentences as well as the most important keywords of the topic, which are also used to build the word graph. Two methods, word analysis and sentence analysis, are used to score sentences more accurately. Before that, the linguistic analysis step is completed: every sentence is tokenized, stop words are removed, and the remaining words are stemmed.

3.1.1 Bangla Text Tokenization

Text summarization and document summarization are very similar: ultimately, many sentences must be summarized, and sentences are combinations of words. Sentences may appear in structured or unstructured form. To summarize any text or document, the essence of its sentences must be identified, which requires understanding every word of every sentence. Tokenization separates the words from the sentences and prepares a data set for further analysis; analyzing each word individually increases the chance of collecting a good set of prime sentences. Tokenization also segments special tokens such as numbers and symbols. A sentence is a set of tokens, and an important sentence does not always contain the most high-frequency words; it may instead carry important information such as dates, times, or specific quantities. To capture this, it is necessary to analyze these tokens individually and prepare a precise output set of prime sentences.

3.1.2 Stop Words Removal

Similar sentences can be joined by a few conjunctive words.
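The tokenization step of Section 3.1.1 can be sketched as follows. This is a minimal illustration (the paper reports using Python with NLTK but does not publish code): it splits on the Bengali sentence delimiter "।" (dari) and then on whitespace, keeping numbers and symbols as tokens.

```python
import re

def tokenize_bangla(text):
    # Split into sentences on the Bengali dari "।" plus ? and !
    sentences = [s.strip() for s in re.split(r"[।?!]", text) if s.strip()]
    # Split each sentence into word/number/symbol tokens on whitespace
    tokens = [s.split() for s in sentences]
    return sentences, tokens

sents, toks = tokenize_bangla("বাংলাদেশ আজ জিতেছে। দলটি ভালো খেলেছে।")
# Two sentences, each reduced to a list of word tokens
```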
Such words are eliminated in the classification process. In Bengali, words like এবং (and) and কিন্তু (but) are widely used to join sentences. For example: "বাংলাদেশ বনাম ভারদের কিদিট মযাচ এর উদেদশয বাংলাদেশ কিদিট টিম আজ ভারে সফদর রওনা হদেদে কিন্তু আগামীিাল ধমমঘট এর িারদন খেলা স্থকগে িরা হদেদে". Here two different sentences carrying the same category of information are joined by the word কিন্তু, because the information is interrelated; the meanings of the two sentences are not similar but dependent, and each is equally capable of becoming a prime sentence on its own. The scoring process therefore starts only after stop words have been eliminated.

3.1.3 Stemming

Bengali is a morphologically rich language [3]. The same word can appear in different lexical forms, yet its meaning changes little because the root is the same. Through these root words, relationships between sentences can easily be inferred. Word scoring is an essential part of the extractive approach, so if these forms were treated as different tokens, it would be difficult to find the similar words and assemble sets of similar tokens. Similar tokens help to measure the number of detached word forms and provide a set of similar words, which makes it possible to trace the relationships among sentences and ultimately avoid sentence redundancy. As an example of how stemming works, consider the words খেলা, খেকল, খেদলকে, খেলব, খেলদে, খেকলোকে, খেলা ধুলা: all are different forms of the same word, খেলা (play). Stemming distinguishes three parts: root, surface form, and suffixes [3]. Table 1 shows examples of these stemming segments.

Table 1: Stemming segments
Root | Surface forms | Suffixes
খেলা | খেকল, খেদলা, খেলদবা | ক, খ া, খবা
বাংলাদেশ | বাংলাদেদশর, বাংলাদেশদি | খ র, খি
গাকি | গাকিদে, গাকির, গাকিটি | খে, র, টি

Suffixes are the parts added to the root form. All surface forms of a root are grouped into a stemming cluster, which is passed on for word-frequency scoring. Sentences containing the same surface forms may be counted as similar sentences, and for final processing this yields a set of similar-sentence tokens from which prime sentences are picked for further analysis.

3.1.4 Word Analysis

Sentences consist of sets of words, and the words carry the information of the entire sentence; together, the words represent a piece of information. Word analysis is essential for building the word graph, computing word frequency, and several other steps described in the rest of this section. (a) Word frequency – words can be used repeatedly, and topic-related words tend to appear in many sentences. The number of times a word is used in the whole text is its word frequency. Tokens in the same stemming cluster are treated as one rooted word, and their frequencies are added to the same root token. For example, Table 2 shows the word frequencies (after stemming) of a given text.
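The stemming-cluster frequency counting just described can be sketched as follows. The suffix list and the longest-suffix-strip rule here are illustrative assumptions, since the paper does not publish its stemmer; only the idea (all surface forms contribute to one root token's count) is taken from the text.

```python
from collections import Counter

# Hypothetical Bengali suffix list for a toy longest-suffix-strip stemmer
SUFFIXES = ["গুলো", "দের", "টি", "রা", "ের", "ে", "র"]

def stem(word):
    # Strip the longest matching suffix, keeping at least two characters of root
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= 2:
            return word[: -len(suf)]
    return word

def cluster_frequency(tokens):
    # Frequencies of all surface forms are added to the same root token
    return Counter(stem(w) for w in tokens)

freq = cluster_frequency(["গাড়ি", "গাড়িটি", "গাড়ির", "ট্রেন", "ট্রেনে"])
# Three surface forms of "গাড়ি" collapse into one count
```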
Based on these word frequency scores alone, is it possible to determine the topic, subject, or context of the given text? The word frequency scores are generated after stemming, and the most frequent words are listed in Table 2; details of the input text are given in Table 3.

Table 2: Word frequency after stemming
Word | Frequency
ক্যামেরুে | 4
ট্রেে | 5
ট্রেশ | 3
নেহত | 2
যাত্রী | 3
৩০০ | 2
লাইেচ্য যত | 2

Table 3: Input text details
Source: http://www.prothom-alo.com/international/article/1005109
Category of text: general news report
Title of text: ক্যামেরুমে ট্রেে লাইেচ্য যত হমে নেহত ৫৩
Total sentences: 8
Total words: 106

The input text is a very small news article of 8 sentences and 106 words. The most frequent words are ক্যামেরুে, ট্রেে, যাত্রী, etc.; these are the leading words of the news text, so the sentences containing them are also leading sentences, and the topic identification process for cross-matching the leading sentences becomes easier and more accurate. (b) Numeric value identification – numeric values are always important: they hold significant information about the given text and may denote dates, years, or amounts in some unit. Numeric values occur in almost every text and carry precise information, so sentences containing numeric values are treated as high-priority tokens in the preliminary prime-sentence clustering step. (c) Repeated word distance – a word is rarely used twice in the same sentence, but in a passage similar or related sentences are often found in contiguous tokens, and from the sentence analogy matrix many redundant sentences can simply be omitted. This does not apply when the distance between occurrences of the word is very large; the calculation is based on the total number of paragraphs in the text or on the total length of the text. Repeated word distance analysis yields values for the effect rate of important words. The effect rate is defined in Equation (1):

Er = (Pi / Pj + Si / Sj) + Rk,   where 1 < Er < 5.    (1)

The effect rate is calculated individually for every frequent word and recorded in the word data set. An example is given in Table 4, where the repeating nature Rk is encoded as low = 1, average = 2, high = 3.

Table 4: Effect rate of a word considering repeated distance
Variable | Value
Word | ট্রেে
Word frequency | 5
Number of paragraphs (Pj) | 5
Paragraphs containing the word (Pi) | 4
Number of sentences (Sj) | 8
Sentences containing the word (Si) | 5
Repeating nature (Rk) | high (3)
Most used in paragraph no. | 2
Effect rate (Er) | 4.43

After calculation, the effect rate of the word ট্রেে is 4.43. (d) Cue words – in Bengali, a piece of information can be expressed using more than one sentence, and semantic relations can be found between such linked sentences. A few semantic relations emphasize the summary, or gist, of a group of sentences.
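The effect-rate formula (1) can be checked against the Table 4 example in a few lines (parameter names are mine; Rk is the encoded repeating nature):

```python
def effect_rate(p_i, p_j, s_i, s_j, r_k):
    # Er = (Pi/Pj + Si/Sj) + Rk, per Equation (1)
    return p_i / p_j + s_i / s_j + r_k

# Table 4 values for the word "ট্রেে": Pi=4, Pj=5, Si=5, Sj=8, Rk=high=3
er = effect_rate(4, 5, 5, 8, 3)  # 4/5 + 5/8 + 3 = 4.425 ≈ 4.43 as in Table 4
```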
Such sentences can be accepted as prime sentences and as representatives of their groups. Words like খেদহেু (since), খমাটিথা (in a word), এোিাও (also), জরুরী (emergency), পকরদশদে (afterward), অেঃপর (hence), and ইকেমদধয (already) are cue words in Bengali; they redirect toward the main leading sentence.

3.1.5 Sentence Analysis

Sentence scoring is one of the best approaches for determining leading sentences and representative content. Sentences are evaluated in many forms, and the word-analysis factors also help to identify the target sentences. The steps of sentence analysis are discussed below. (a) Summation of frequent words – the frequent words in a sentence are grouped together, and the total weighted value of those words is calculated sentence by sentence; the sentences with maximum weight are considered representative sentences. (b) Sentence length – the number of words in a sentence is its length. Many short sentences may achieve maximum weight in both word and sentence analysis, yet not carry enough information to represent the summary of a scenario. Calculating the mean sentence length therefore gives a minimum acceptance threshold for prime sentences and also helps in determining sentence position. (c) Sentence position – the position of a sentence has significant influence over the content of the document. Different paragraphs of the input document may contain different types of information, and the first and last sentences of the document, as well as of each paragraph, carry meaningful and significant information. These two sentences are therefore treated as prime sentences in the extractive approach even when their frequency scores are low. The remaining sentences are evaluated using a Gaussian-style weighting: sentences are ordered by length, and an absolute deviation factor is calculated sequentially, which helps to omit low-weight sentences. The absolute deviation factor (AD) is defined in Equation (2):

AD = | |x − μ| − 1 |    (2)

where x is the length of the sentence being evaluated, μ is the mean of all sentence lengths, and AD is the absolute deviation factor. Table 5 lists a population of sentence lengths and the AD values computed from Equation (2); the sentence-length data are taken from a dummy data set.

Table 5: Absolute Deviation Factor (AD)
Sentence no. | Sentence length | AD
1 | 5 | 0.39
2 | 8 | 0.62
3 | 9 | 0.7
4 | 11 | 0.85
5 | 13 | 1
6 | 16 | 0.77
7 | 18 | 0.62
8 | 18 | 0.62
9 | 19 | 0.54
10 | 27 | 0.077

The output follows a Gaussian distribution: the mean value peaks, and other values rise the closer they are to the mean. Figure 2 plots the sentences after calculating AD.

Fig. 2: Absolute deviation factor (AD) of sentences, plotted against sentence length (0–30).

The mean deviation of sentence positions can also be calculated; it is useful for identifying the average distance between sentence positions, from which a standard minimum or maximum position threshold can be defined. (d) Uniform sentences – uniform sentences can be considered identical: they restate information that has already been discussed and are linked into a chain of information. In Bengali, such linked sentences can be identified when certain word sets match, e.g. কেকনবদলন, োরাবদলদেন, এজনযই, সুেরাং, এরফদল, etc.; such a set of information can be considered one topic. (e) Imitating sentences – sentences that are referenced by many other sentences form a set of imitating sentences. In a document, a topic may be discussed in various passages in many forms; such semantically similar data should be grouped together, which helps to avoid redundancy and removes the chance of repeating similar data in the summarized text. One of the major problems of the extractive approach is determining whether the selected summary sentences are properly linked on the basis of morphological, linguistic, and semantic rules; identifying the linking words also helps to extract related sentences.

Fig. 3: Example of imitating-sentence detection.

Imitating-sentence detection is implemented here to reconstruct sentences when necessary. In Figure 3 there are two linked sentences. Suppose the second sentence is selected for the final summary with a high score: the problem is that it extends information from the first sentence through the phrase "কেকন বদলন" (he says), which directly indicates that the subject is stated in the previous sentence. In abstractive summarization the subject would be identified and substituted for "কেকন বদলন", but in the extractive approach it is difficult to resolve what the phrase refers to. Here, uniform sentences are therefore prepared by eliminating linking words such as কেকন বদলন, সুেরাং, োহদল, এজনয, and এই িারদন. (f) Skeleton of the document – the document title and headers contain the main idea of the given text, and their words can affect the word weights. Extracting this skeleton data and comparing it with the existing weighted data is therefore important; it sometimes helps to identify the sentences that truly carry the leading information. (g) Frequent word percentile – every sentence carries information and is treated as an individually important factor. The frequent words in a sentence affect the entire sentence: the more frequent words a sentence contains, the higher its chance of being selected as a prime sentence. Carrying the most frequent words is not the only criterion, however, since sentence length is a factor too; otherwise a few short sentences would sometimes be prioritized even though they are not appropriate.
For this reason, this module is applied only to sentences that meet the minimum sentence length. A percentile value is added to every sentence from the combination of its frequent-word weight and its sentence length. The frequent word percentile and effect rate are defined in Equation (3):

Ew = (Fw + Tw) / 100,   with Fw = Wk / WN    (3)

where Ew is the effect rate, Tw is the total weight of the frequent words, Fw is the frequent-word percentile, Wk is the number of frequent words used in the sentence, and WN is the total number of words in the sentence.

Table 6: Effect rate of frequent words
Sentence no. | WN | Wk | Tw | Ew
1 | 18 | 10 | 25 | 0.25
2 | 26 | 4 | 10 | 0.10
3 | 17 | 1 | 3 | 0.03
4 | 9 | 5 | 12 | 0.12
5 | 16 | 5 | 16 | 0.16
6 | 6 | 2 | 4 | 0.04
7 | 5 | 3 | 7 | 0.07
8 | 6 | 2 | 4 | 0.04

In Table 6, sentences with the maximum weighted value relative to their total number of words produce the maximum effect rate: sentences 1, 4, and 5 have the highest effect rates. Sentences 6 and 8 have identical WN values, and sentence 3 has a lower weight than the other sentences; the effect rate is highest for sentence 1 because its ratio of Wk to Tw is also higher than that of the other sentences.

3.1.6 Final Processing

In the proposed method, words and sentences are clustered in three different ways. Through deep analysis of the input document by word and sentence analysis, prime sentences are categorized for producing a quality summarized output; the prime sentences are grouped by word and sentence analysis, and the grouped sentences are then sent for final processing, where they are evaluated again. (h) Prime sentences – prime sentences are the sentences that score highest under the sentence scoring method defined in Equation (4):

Sk = Ew + Tw + AD + Su + Si + Dk + Wn + Er + Wc    (4)

Here Sk is the score of an individual sentence; Ew is the effect rate of frequent words in the sentence; Tw is the sum of the weights of all weighted words used in the sentence; AD is the absolute deviation factor of the sentence from the mean sentence length; Su, Si, and Dk represent the values for uniform sentences, imitating sentences, and the skeleton of the document, respectively; and Wn, Er, and Wc are the values from numeric value identification, repeated word distance, and cue words. After every sentence is scored, the highest-scoring sentences are ready for final processing; these are the prime sentences. (i) Aggregate similarities – the final set of prime sentences may contain similar sentences, which should be omitted for a better summary; such similar sentences are identified and grouped together. (j) Final gist analysis – sentences from the aggregate-similarity step are treated as similar tokens; the main leading sentence is extracted from each group to prepare the gist of the final information.
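Equations (2)-(4) can be combined into a small scoring sketch. This is an illustrative reading of the formulas, not the authors' code; in the real pipeline the feature values fed to Sk come from the word- and sentence-analysis steps described above.

```python
def absolute_deviation(x, mu):
    # Equation (2): AD = ||x - mu| - 1|
    return abs(abs(x - mu) - 1)

def frequent_word_percentile(w_k, w_n, t_w):
    # Equation (3): Ew = (Fw + Tw)/100 with Fw = Wk/WN
    return (w_k / w_n + t_w) / 100

def sentence_score(e_w, t_w, ad, s_u, s_i, d_k, w_n, e_r, w_c):
    # Equation (4): additive combination of all per-sentence features
    return e_w + t_w + ad + s_u + s_i + d_k + w_n + e_r + w_c

# Sentence 1 of Table 6: WN=18, Wk=10, Tw=25
ew = frequent_word_percentile(10, 18, 25)  # ≈ 0.256 (Table 6 reports 0.25, apparently truncated)
```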
Lastly, the Gist tokens will IEEE - 402228th ICCCNT 2017 July 3 -5, 2017, IIT Delhi, Delhi, India be crosschecked with the topic words. Finally, the sentences will be released for publishing. (j) Sentence Ranking – Sentence will be ranked based on the sentence position. As those sentences were found in an order in the given input document. Then, the sentences will be demonstrated as final summary. 4. EXPERIMENTAL RESULTS AND DISCUSSION The aim of text summarization is to generate extractive summary. Here different topic based Bangla text document were tested. For experimental purpose, 3 different Bangla Text were tested by this system. In addition, those documents were tested by different human users too. The experimental result were stated in figure 4. Fig. 4: Output statistics by system vs. human user The experimental results were calculated out of 5. Different types of news and articles were tested through this system. Python and Natural Language Tool kit (NLTK version - 3) Library file was used to develop the system. Accuracy of output was evaluated by comparing the system generated summary with human generated summary. Example of</s>
a news article summarization is given below.

Original Text – ক্যামেরুমে ট্রেে লাইেচ্য যত হমে নেহত ৫৩ পনিে আনিক্ার ট্রেশ ক্যামেরুমে অনতনরক্ত যাত্রীবাহী এক্টি ট্রেে লাইেচ্য যত হমে ক্েপমে ৫৩ জে নেহত ও ৩০০ জে আহত হমেমে । গত ক্াল শুক্রবার ট্রেশটির রাজধােী ইোউমে ও অর্থনেনতক্ ট্রক্ন্দ্রস্থল দুমোলা শহমরর েমধয যাতাোত ক্রার সেে ট্রোট শহর ইমসক্ার ক্ামে লাইেচ্য যত হমে ট্রেেটির বনগ লেমলা উমগ ট্রগমল হতাহমতর এ ঘটো ঘমট । নবনবনসর প্রনতমবেমে বলা হমেমে, সম্প্রনত ভারী বৃনিপামতর ফমল ভূনেধমসর সৃনি হওোে ট্রেশটি জয মে সেক্ ট্রযাগমযাগ নবপযথস্ত হমে পমে । এমত ক্মর স্বাভানবক্ অবস্থার ট্রচ্মে ট্রেমে যাত্রীমের চ্াপ অমেক্ ট্রবমে যাে । বাতথ াসংস্থা এনপ জানেমেমে, স্বাভানবক্ অবস্থাে গমে ৬০০ যাত্রী চ্লাচ্ল ক্রমলও ট্রেেটি ১৩০০ যাত্রী বহে ক্রনেল । ক্যামেরুমের পনরবহেেন্ত্রী এডগাডথ অযালাইেমেনব জানেমেমেে এ দুঘথটোে ৩০০ জে আহত হমেমে । নেহত বযনক্তর সংখ্যা আরও বােমত পামর ।

System Generated Output – পনিে আনিক্ার ট্রেশ ক্যামেরুমে অনতনরক্ত যাত্রীবাহী এক্টি ট্রেে লাইেচ্য যত হমে ক্েপমে ৫৩ জে নেহত ও ৩০০ জে আহত হমেমে । স্বাভানবক্ অবস্থাে গমে ৬০০ যাত্রী চ্লাচ্ল ক্রমলও ট্রেেটি ১৩০০ যাত্রী বহে ক্রনেল । নেহত বযনক্তর সংখ্যা আরও বােমত পামর ।

Human Generated Output – ক্যামেরুমে অনতনরক্ত যাত্রীবাহী এক্টি ট্রেে লাইেচ্য যত হমে ক্েপমে ৫৩ জে নেহত ও ৩০০ জে আহত হমেমে । ভূনেধমসর ফমল ট্রেশটি জয মে সেক্ ট্রযাগমযাগ নবপযথস্ত হমে পমে । স্বাভানবক্ অবস্থাে গমে ৬০০ যাত্রী চ্লাচ্ল ক্রমলও ট্রেেটি ১৩০০ যাত্রী বহে ক্রনেল । নেহত বযনক্তর সংখ্যা আরও বােমত পামর ।

5. CONCLUSION AND FUTURE WORK

This work introduced and explained different extractive text-summarization approaches in relation to efficient Bangla text processing. The proposed technique implements an efficient Bengali extractive summarization process, giving a better summarizer for the Bengali language. In extractive approaches, scoring methods are the most common; here, the scoring methods were implemented based on different association and linguistic rules, and relations between words and sentences were extracted.
Here, a two-step summarization technique was implemented. Selection of prime sentences is one of the most significant tasks, and the topic of the document can be extracted from word and sentence frequencies. Nonetheless, many improvements are still required for an enhanced summarizer: development for English is so far well ahead of that for other languages. Abstractive summarization certainly provides a better summary than extractive summarization, but implementing an abstractive method requires many development phases. A hybrid approach is therefore the future of summarization in the Bengali language. Furthermore, we wish to validate the proposed approach for producing new sentences, since taking into account the words with the highest trade-offs could be suitable for more sophisticated summarization.

6. ACKNOWLEDGMENT

We would like to thank the Departments of Science and Engineering of four universities (Daffodil International University, Jahangirnagar University, Britannia University and Comilla University, Bangladesh) for facilitating this joint research.

REFERENCES

[1] Rafael Ferreira et al., "Assessing Sentence Scoring Techniques for Extractive Text Summarization," Expert Systems with Applications, Elsevier, vol. 40, pp. 5755-5764, 2013.
[2] Ani Nenkova and Kathleen McKeown, "A Survey of Text Summarization Techniques," Springer Science+Business Media, LLC, 2012.
[3] Amitava Das and Sivaji Bandyopadhyay, "Morphological Stemming Cluster
Identification for Bangla," Jadavpur University, Kolkata 700032, India, 2011.
[4] Harsha Dave and Shree Jaswal, "Multiple Text Document Summarization System using Hybrid Summarization Technique," NGCT-2015, Dehradun, India, 4-5 September 2015.
[5] Md. Iftekharul Alam Efat, Mohammad Ibrahim, and Humayun Kayesh, "Automated Bangla Text Summarization by Sentence Scoring and Ranking," International Conference on Informatics, Electronics & Vision (ICIEV), IEEE, pp. 1-5, 2013.
[6] Anusha Bagalkotkar, Ashesh Kandelwal, Shivam Pandey, and S. Sowmya Kamath, "A Novel Technique for Efficient Text Document Summarization as a Service," ICACC-2013, Kochi, Kerala, India, 29-31 August 2013.
[7] E. Lloret and M. Palomar, "Analyzing the Use of Word Graphs for Abstractive Text Summarization," IMMM 2011, Barcelona, Spain, October 23-29, 2011.
[8] Mariño, José B., Banchs, Rafael E., Crego, Josep M., Gispert, Adrià, Lambert, Patrik, Fonollosa, José A. R., et al., "N-gram-based machine translation," Computational Linguistics, 32(4), 527-549, 2006.
[9] Haque, Rejwanul, Naskar, Sudip Kumar, Way, Andy, Costa-jussà, Marta R., and Banchs, Rafael E., "Sentence similarity-based source context modelling in PBSMT," in Proceedings of the 2010 International Conference on Asian Language Processing, pp. 257-260, IEEE Computer Society, 2010.
[10] Lin, F. and Sandkuhl, K., "A Survey of Exploiting WordNet in Ontology Matching," in IFIP International Federation for Information Processing, Volume 276: Artificial Intelligence in Theory and Practice II, Max Bramer (ed.), Boston: Springer, 2008, pp. 341-350.
[11] K. Sarkar, "Bengali text summarization by sentence extraction," in Proceedings of the International Conference on Business and Information Management (ICBIM-2012), NIT Durgapur, pp. 233-245, 2012.
[12] El-Shishtawy, Tarek, and Fatma El-Ghannam, "Keyphrase based Arabic summarizer (KPAS)," in 8th International Conference on Informatics and Systems (INFOS), pp.
NLP-7, IEEE, 2012.
[13] M. Kutlu, C. Cigir, and I. Cicekli, "Generic text summarization for Turkish," The Computer Journal, vol. 53, no. 8, pp. 1315-1323, 2010.
[14] Atif Khan and Naomie Salim, "A Review on Abstractive Summarization Methods," Journal of Theoretical and Applied Information Technology, vol. 59, no. 1, January 2014.
[15] F. Liu, J. Flanigan, S. Thomson, N. Sadeh, and N. A. Smith, "Toward Abstractive Summarization Using Semantic Representations," 2015.
[16] N. Kumar, K. Srinathan, and V. Varma, "A Knowledge Induced Graph-Theoretical Model for Extract and Abstract Single Document Summarization," in Computational Linguistics and Intelligent Text Processing, Springer Berlin Heidelberg, pp. 408-423, 2013.
[17] Abujar, Sheikh, and Mahmudul Hasan, "A comprehensive text analysis for Bengali TTS using Unicode," 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), IEEE, 2016.
[18] Mihalcea, Rada, Courtney Corley, and Carlo Strapparava, "Corpus-based and knowledge-based measures of text semantic similarity," AAAI, vol. 6, 2006.
[19] Islam, Aminul, and Diana Inkpen, "Semantic text similarity using corpus-based word similarity and string similarity," ACM Transactions on Knowledge Discovery from Data (TKDD) 2.2 (2008): 10.
[20] Mohler, Michael, and Rada Mihalcea, "Text-to-text semantic similarity for automatic short answer grading," Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, 2009.
[21] Gomaa, Wael H., and Aly A. Fahmy, "A survey of text similarity
approaches," International Journal of Computer Applications 68.13 (2013).
[22] Bär, Daniel, Torsten Zesch, and Iryna Gurevych, "DKPro Similarity: An Open Source Framework for Text Similarity," ACL (Conference System Demonstrations), 2013.
[23] Huang, Anna, "Similarity measures for text document clustering," Proceedings of the Sixth New Zealand Computer Science Research Student Conference (NZCSRSC 2008), Christchurch, New Zealand, 2008.
[24] Bilenko, Mikhail, and Raymond J. Mooney, "Adaptive duplicate detection using learnable string similarity measures," Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2003.
[25] Pedersen, Ted, Siddharth Patwardhan, and Jason Michelizzi, "WordNet::Similarity: measuring the relatedness of concepts," Demonstration Papers at HLT-NAACL 2004, Association for Computational Linguistics, 2004.
Enhancement of Keyphrase-Based Approach of Automatic Bangla Text Summarization

Md. Majharul Haque, Suraiya Pervin
Department of Computer Science & Engineering, University of Dhaka, Dhaka-1000, Bangladesh
Email: mazharul_13@yahoo.com, suraiya@du.ac.bd

Abstract: An approach to automatic Bangla text summarization is presented here by enhancing an existing keyphrase-based method. The enhancement is accomplished in three steps as follows: (i) modifying the keyphrase selection process, (ii) including the first sentence in the summary if it contains any title word, and (iii) counting numerical figures presented in digits and words for sentence scoring. Step-by-step performance analysis of the proposed approach is portrayed for two datasets. Performance is measured with the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) automatic evaluation package. The results, based on ROUGE-1 and ROUGE-2 scores, show that the proposed enhancement has significant influence on Bangla text summarization over the existing keyphrase-based method.

Keywords: Bangla text summarization; keyphrase; first sentence; numerical figure; ROUGE

I. INTRODUCTION

As Internet users increase, electronic contents (e-contents) grow proportionally irrespective of language. The estimated size of the websites holding e-contents was around 4.69 billion pages on 27 May 2016 [1], and it is increasing exponentially every second. Users are encumbered with a huge volume of e-contents or texts, whereas they expect concise information or knowledge within the shortest time. In such a situation, a text summarization technique is an indispensable solution because it generates a quick overview of an entire document within the time users expect [2]. The state-of-the-art works in this field [2, 3, 4, 5, 6] have focused on automatic text summarization in different languages, starting with English.
Automatic English text summarization was first proposed by Luhn [3] on the basis of term frequency, around five decades ago. With the increasing amount of text, a notable development in English text summarization was proposed by Edmundson [4], considering text title, cue words and the location of sentences. The trend continues not only for English but also for Bangla text summarization [7, 8, 9]. As Bangla is the 7th most spoken language in the world [10], e-contents in Bangla are dramatically increasing throughout the cyber world. Therefore, an efficient Bangla text summarization technique is essential for researchers, international news agencies and individuals.

978-1-5090-2597-8/16/$31.00 (c) 2016 IEEE

Zerina Begum, Institute of Information Technology, University of Dhaka, Dhaka-1000, Bangladesh. Email: zerin@iit.du.ac.bd

Unlike English, which has seen a large number of systems developed to cater to it, other languages are less fortunate [11]. So far, few attempts have been made at Bangla text summarization [7, 9]. In 2004, Islam and Masum [12] proposed the first technique for automatic Bangla text summarization, in which query terms were used for document indexing and information retrieval. A few years later, some methods from the survey work on English text summarization systems were implemented to summarize Bangla text by Uddin and Khan [13]. They presented Bangla text summarization based on (i) the location method, (ii) the cue method, (iii) text title, (iv) term frequency and (v) numerical data. Furthermore, other prominent
researchers [7, 8] worked on some features (already implemented for English [4]) for Bangla text summarization, fine-tuned for better performance in [9]. Apart from the previous approaches, the keyphrase-based summarization method proposed by Sarkar in 2014 [2] outperforms the others for both Bangla and English text. However, there are some limitations in sentence selection based on the frequency of keyphrases, term frequency and the position of sentences. Besides, this method [2] does not set a minimum length for keyphrases, so single-word keyphrases may mislead the Bangla text summarization result, as they always get a higher rank in the analysis. Additionally, the general positional score of sentences cannot differentiate the importance of the first sentence, which can be very significant for Bangla news documents. Moreover, some other proficient features can also be utilized in sentence ranking for better performance. Details of the keyphrase-based method [2] and its limitations are discussed in Section II. In this paper, we propose some enhancements to the keyphrase-based method [2] for Bangla text summarization, and the results outperform the existing one. The enhancements include: (i) setting a minimum length for keyphrases, (ii) considering the first sentence specially and (iii) counting numerical figures from words and digits for sentence scoring. The rest of this paper is organized as follows: Section II describes the keyphrase-based method. Section III presents the proposed enhancements. Evaluation and results are depicted in Section IV. Finally, the paper is concluded in Section V with future works.

II. KEYPHRASE BASED METHOD

Our enhancements rest on the foundation of the keyphrase-based text summarization method developed by Sarkar in 2014 [2]. In this method [2], keyphrases are extracted from a sentence as sequences of words containing no punctuation marks or stop words. If a keyphrase consists of more than 5 words, it is not considered.
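One way to realize this extraction step is sketched below, including the segmentation of multi-word keyphrases into all shorter contiguous sub-phrases that the method also performs. The stop-word list here is a tiny English placeholder for illustration only; the actual method works on Bengali text with a Bengali stop-word list [24].

```python
import re
from collections import Counter

# Illustrative sketch of the keyphrase extraction in [2]: maximal word
# runs bounded by punctuation marks and stop words (runs longer than
# 5 words are discarded), plus all contiguous sub-phrases of each run.
STOP_WORDS = {"the", "a", "an", "of", "and", "is", "in"}

def extract_keyphrases(sentence, max_len=5):
    runs, current = [], []
    for tok in re.findall(r"\w+|[^\w\s]", sentence.lower()):
        if tok in STOP_WORDS or not tok.isalnum():
            if current:                  # stop words and punctuation end a run
                runs.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        runs.append(current)
    phrases = []
    for run in runs:
        if len(run) > max_len:           # discard runs of more than 5 words
            continue
        for i in range(len(run)):        # every contiguous sub-phrase
            for j in range(i + 1, len(run) + 1):
                phrases.append(" ".join(run[i:j]))
    return phrases

phrases = extract_keyphrases("heavy rain caused a landslide in the region")
counts = Counter(phrases)  # phrase frequencies, the PF used in ranking
```

Here "heavy rain caused" yields the sub-phrases "heavy", "heavy rain", "heavy rain caused", "rain", "rain caused" and "caused", which is exactly why single-word keyphrases accumulate high frequencies in the original method.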
Keyphrases with multiple words are segmented into single words, double words and so on to obtain more keyphrases. All keyphrases are ranked using phrase frequency-inverse document frequency (PF-IDF), and sentences are scored based on their position and term frequency. There are two phases of summary generation. In the first phase (phase-1), candidate summary sentences are selected that contain top-ranked keyphrases. Phase-1 considers for selection the sentences that appear early (within position 5) in the document, as per the author's experiment. From these candidate sentences, the top-scored sentences are selected as final summary sentences. If phase-1 fails to generate a summary of the user's desired length, phase-2 is activated: more summary sentences are selected from the remaining sentences based on their scores. It should be mentioned that the applicable fields of keyphrases include indexing [14], searching [15] and summarizing [16, 17]. For text summarization, multi-word keyphrases are used in [16], where redundancy is less emphasized [2]. Again, noun phrases are taken as keyphrases for Arabic text summarization in [17]. The keyphrase-based method described in [2] is different from the
others [16, 17] because sequences of words are extracted as keyphrases and there is a specific way of eliminating redundancy. In this method [2], when a sentence is selected for the summary, it may contain one or more keyphrases. Keyphrases that exist in previously selected sentence(s) are then kept apart so that they take no part in further sentence selection, which eliminates redundancy. It was stated that the keyphrase-based methodology [2] outperforms the LEAD baseline method (where the first n words are selected as the summary) and the methods in [7, 9]; the F-measure score reported for Bangla text summarization was 0.4242 [2]. However, the keyphrase-based method [2] has some limitations. Keyphrases with multiple words are broken down into single words, double words, etc., so that there are more keyphrases. A single-word keyphrase (obtained from the breakdown of a multi-word keyphrase) can appear again both on its own and in the breakdown of other keyphrases, so single-word keyphrases are more likely to be highly frequent. As keyphrases are ranked by frequency, keyphrases of length 1 (single words) get higher ranks. Based on our investigation with 200 test documents, keyphrases of length 1 may not reflect any concept, whereas reflecting a concept is the principal reason for using keyphrases. In this regard, a minimum length for keyphrases should be proposed as an enhancement for better Bangla text summarization performance. Again, sentences are scored based on their position and term frequency only [2], but these features cannot differentiate the first sentence of a document, which can be very significant for a Bangla news document (discussed in the next section). So, another enhancement has been proposed here to treat the first sentence specially. III.
PROPOSED ENHANCEMENTS

The keyphrase-based method [2] has been enhanced in this paper and earns better performance as follows: (i) modifying the keyphrase selection process, (ii) considering the first sentence specially and (iii) counting numerical figures presented in words and digits for sentence scoring. Details of the proposed enhancements are given below.

A. Modifying the Keyphrase Selection Process

In the existing method [2], keyphrases are selected as sequences of words from a sentence containing no punctuation marks or stop words, where the length of a keyphrase can be from 1 to 5. Keyphrases are considered there on the claim that they contain the key concepts. But it has been found in our observation of 200 test documents that single-word keyphrases may not reflect any concept. Consider the following sentence: "দ্বিতীয় বিশ্বযুদ্ধে অনেক মানুষ মারা গেছে" (ditiyo bishwjuddhe onek manush mara geche - many people died in the Second World War). If a keyphrase is selected as "দ্বিতীয়" (ditiyo - second), it is unable to reflect anything meaningful. But if a keyphrase is selected as "দ্বিতীয় বিশ্বযুদ্ধে" (ditiyo bishwjuddhe - Second World War), it may contain a concept. In this regard, single-word keyphrases are ignored in the
proposed enhancement. An experiment has been done with 200 news documents and 600 model summaries (three summaries for each document) by removing keyphrases of length 1 (single word). In the experiment, the F-measure score increased from 0.4513 (minimum keyphrase length 1) to 0.4625 (minimum keyphrase length 2). Here, F-measure has been calculated using (6) with our training dataset (discussed later in Section IV). Again, it has been found that performance decreases if we set the minimum length of keyphrases to 3 or 4. In the existing keyphrase-based method [2], the score of a keyphrase is computed as the product of phrase frequency (PF) and inverse document frequency (IDF) when the length of the keyphrase is 1; otherwise, the score is computed as the product of phrase frequency (PF) and the logarithm of the total number of documents in a given corpus. But after the proposed modification of the keyphrase selection process, the score of each keyphrase is computed using (1):

SCORE = 0, if PLen = 1;  SCORE = PF * log(N), if PLen > 1    (1)

where PLen is the length of the phrase in words, PF is the frequency of the phrase, and N is the total number of documents in the corpus (a collection of documents in the domain under consideration). In (1), the score of a keyphrase is set to 0 if PLen is equal to 1, so that single-word keyphrases are ignored in the ranking. In the keyphrase-based method [2], sentences are primarily selected based on top-ranked keyphrases (ranked using (1)) and finally selected based on sentence score (calculated using term frequency and the position feature). But as per our observation, the way of sentence selection and scoring can be optimized for better summary preparation, which is discussed in the next two points, B and C.

2016 IEEE Region 10 Conference (TENCON) - Proceedings of the International Conference, p. 43

B.
Considering the First Sentence Specially

In the existing keyphrase-based method [2], the sentence score depends on position and term frequency. The positional score is highest for the first sentence and lowest for the last, decreasing gradually from the first sentence. But in news documents, most of the time, the first sentence is more important than any other sentence, as per our experiment explained at the end of this subsection. So the general positional score (which decreases gradually) is not applicable to the first sentence of a news document. Again, some existing summarization methods emphasize sentences that contain a title word [4, 8], and in news documents the first sentence often contains the full title. So, extra care is proposed here for the first sentence of the input document. In the experiment with our training dataset (200 documents and 600 model summaries), it has been found that the first sentence appears in the summary 78% of the time. So, if the first sentence is always kept in the summary, there
will be a wrong selection 22% (100 - 78) of the time. But after scrutinizing one step further, it has been found that if the first sentence contains a title word, it appears in the summary 88% of the time, an error rate of 12% (100 - 88). So it is proposed here that the first sentence is selected for the summary if it contains any title word.

C. Counting Numerical Figures Presented in Words and Digits for Sentence Scoring

A new feature (counting numerical figures presented in words and digits) is recommended here for sentence scoring as an enhancement. In [13], numerical figures (in digits) were counted, and it was shown that a sentence can be significant for containing a numerical figure. But a numerical figure can also be presented in words, which cannot be identified as easily as digits. Consider the following two sentences: "করিমের জন্ম সাল ২০০৬। তাহার বয়স দশ বছর।" (korimer jonmo shal 2006. tahar boyosh dosh bochhor - Karim's birth year is 2006. He is ten years old.). The existing procedure [13] can find one numerical figure in the first sentence but is unable to find any in the second, as the numerical figure "দশ" (dosh - ten) is presented in words. So a technique is introduced here to recognize numerical figures in both words and digits by checking the following conditions: (a) the first part of the word is constituted of the digits ০ (0), ১ (1), ২ (2), ৩ (3), ৪ (4), ৫ (5), ৬ (6), ৭ (7), ৮ (8), ৯ (9), or of number words from "এক" (ek - one), "দুই" (dui - two), "তিন" (tin - three), ... up to "আটানব্বই" (atanobboi - ninety-eight), "নিরানব্বই" (niranobboi - ninety-nine); while checking numerical figures in digits, the decimal point (.) is also considered; (b) the second part (if any) contains a magnitude word such as "শত" (shoto - hundred), "হাজার" (hazar - thousand), "লক্ষ" (lokkho - hundred thousand), etc.; (c) the third part (if any) is a suffix like "খানা" (khana), "খানি" (khani), "টি" (ti), "টা" (ta), etc.
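Conditions (a)-(c) can be sketched as a small recognizer. This is a hedged illustration, not the authors' code: the word lists below are abbreviated placeholders (the paper enumerates the full set of number words from one to ninety-nine), and the exact matching rules are assumptions.

```python
# Abbreviated placeholder lists; the real method uses the full sets.
BN_DIGITS = set("০১২৩৪৫৬৭৮৯0123456789.")
NUMBER_WORDS = {"এক", "দুই", "তিন", "দশ"}      # ek, dui, tin, dosh, ...
MAGNITUDES = {"শত", "হাজার", "লক্ষ"}           # shoto, hazar, lokkho
SUFFIXES = ("খানা", "খানি", "টি", "টা")        # khana, khani, ti, ta

def is_numerical(word):
    stripped = word
    for suf in SUFFIXES:                       # condition (c): optional suffix
        if stripped.endswith(suf) and len(stripped) > len(suf):
            stripped = stripped[: -len(suf)]
            break
    if stripped and all(ch in BN_DIGITS for ch in stripped):
        return True                            # condition (a): digits (decimal point allowed)
    # conditions (a)/(b): number word or magnitude word
    return stripped in NUMBER_WORDS or stripped in MAGNITUDES

def numerical_score(words):
    """Count of numerical figures (in digits or words) in a sentence."""
    return sum(1 for w in words if is_numerical(w))
```

On the example above, both "২০০৬" (digits) and "দশ" (a number word) are tagged as numerical figures, while ordinary words are not.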
If a word meets these conditions, it is tagged as a numerical figure. We have experimented on 200 test documents with the proposed technique and found that numerical figures presented in digits and in words can be identified 100% and 92% of the time, respectively. The score for the existence of numerical figures is counted using the following equation:

SNf = Ndigits + Nwords    (2)

where SNf is the score of a sentence for the existence of numerical figures in digits (Ndigits) and in words (Nwords). After considering all the proposed enhancements, the sentence score is finally computed as follows:

Wfinal(i) = Wi + SNf(i)    (3)

where Wfinal(i) is the final score of the i-th sentence, Wi is the score of the i-th sentence computed according to the existing keyphrase-based method using term frequency and the position feature [2], and SNf(i) is the score for the existence of numerical figures based on (2). It is noticeable that incorporating the score for numerical figures has upgraded the performance significantly (shown in Section
IV). The remarkable point is that the position of sentences and the text title have been considered in several existing methods [4, 12], and numerical figures (in digits) have already been counted in [13]. But, to the best of our knowledge, (i) selecting the first sentence if it contains any title word and (ii) counting numerical figures from words for sentence scoring have not been proposed in any existing method. So, it can be said that the proposed enhancements bring something different.

IV. EVALUATION AND RESULTS

A. Dataset

From Bangla daily newspapers, 400 news documents (each of 18 to 25 lines of Unicode text) have been collected as a test corpus. These documents contain a variety of news covering a wide range of topics like politics, sports, crime, economy, environment, etc. Three human judges generated summaries for each document; these human-generated summaries are considered reference/model summaries. The 400 documents and summaries are divided into two datasets: (i) 200 randomly selected documents with corresponding model summaries are taken as the training set, and (ii) the other 200 documents with corresponding model summaries are treated as the performance evaluation set. The evaluation set has been uploaded to the Internet so that other researchers may use it [18]. We have used human-generated model summaries as there is no benchmark dataset for evaluating Bangla text summarization. Again, the dataset of 400 test documents is around ten times larger than the evaluation datasets of some existing methods [2, 7, 9]. Moreover, some existing methods [2, 7, 9] were evaluated against one model summary only, whereas the proposed method is evaluated here against three model summaries for each test document. Here, someone may question the use of human-generated model summaries.
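The random 200/200 partition of the corpus can be sketched as follows; the document identifiers and the fixed seed are hypothetical, since the paper does not state how the random selection was performed.

```python
import random

def split_corpus(doc_ids, train_size=200, seed=0):
    """Randomly partition document IDs into a training set and an
    evaluation set, as in the 200/200 split described above."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    shuffled = doc_ids[:]
    rng.shuffle(shuffled)
    return shuffled[:train_size], shuffled[train_size:]

train, evaluation = split_corpus([f"doc{i:03d}" for i in range(400)])
# 200 training documents and 200 disjoint evaluation documents
```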
But the remarkable point is that human-generated model summaries have also been used for English text summarization methods despite the existence of benchmark datasets [19, 20], and for other languages where there was no benchmark dataset [7, 9, 11].

B. Evaluation

Evaluating the quality of a summary is a difficult problem, principally because there is no ideal summary [21]. For relatively straightforward news documents, human summarizers tend to agree on only approximately 60% of content overlap [21]. In our proposed method, Precision, Recall and F-measure are brought into play, as these have long served as important evaluation metrics in the information retrieval field [22]. If A denotes the set of sentences retrieved by the summarizer and B denotes the set of sentences that are relevant with respect to the target set, Precision, Recall and F-measure are computed as follows:

Precision (P) = |A ∩ B| / |A|    (4)

Recall (R) = |A ∩ B| / |B|    (5)

F-measure = (2 x P x R) / (P + R)    (6)

C. Experiments and Results

We have implemented the keyphrase-based method [2] and incorporated the proposed enhancements with a server-side scripting language, PHP (Hypertext Preprocessor). In the existing method [2], the summary length is specified by the
user, but in our implementation one third of the sentences are selected as the final summary. Performance has been measured after incorporating each proposed feature, and the step-by-step progress is shown in Fig. 1. Based on the results in Fig. 1, it is apparent that every incorporated feature is important for better performance. Here, Precision, Recall and F-measure have been calculated using (4), (5) and (6) respectively on the training dataset (discussed at the beginning of this section).

Fig. 1. Step-by-step improvement of performance for the enhancements (Recall, Precision and F-measure for: the existing keyphrase-based method; ignoring single-word keyphrases; keeping the first sentence in the summary if it contains a title word; counting numerical figures in digits; and counting numerical figures in words and digits)

TABLE I. COMPARISON ON THE BASIS OF ROUGE-1 SCORES FOR 200 DOCUMENTS WITH 95% CONFIDENCE INTERVAL

                              Avg. Recall   Avg. Precision   Avg. F-measure
Proposed method               0.6819        0.5757           0.6166
Keyphrase-based method [2]    0.5515        0.5603           0.5496

TABLE II. COMPARISON ON THE BASIS OF ROUGE-2 SCORES FOR 200 DOCUMENTS WITH 95% CONFIDENCE INTERVAL

                              Avg. Recall   Avg. Precision   Avg. F-measure
Proposed method               0.6433        0.5459           0.5830
Keyphrase-based method [2]    0.5075        0.5165           0.5060

In Fig. 1, the features utilized in each step of improvement include all the features of the previous step(s), and the best performance is obtained by combining all the proposed enhancements. Again, a comparison has been made between the existing method and its enhanced version using the evaluation dataset of 200 Bangla Unicode text documents [18]. The ROUGE automatic evaluation package has been utilized here as it can be applied to Unicode text [23]. Average Recall, Precision and F-measure have been calculated based on ROUGE-1 and ROUGE-2 scores and are displayed in Table I and Table II respectively.
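The set-overlap metrics of Eqs. (4)-(6), used for the step-by-step analysis above, can be computed directly over sentence sets. A minimal sketch, with hypothetical sentence IDs:

```python
def summary_metrics(system_sentences, model_sentences):
    """Eqs. (4)-(6), where A is the system summary and B the reference."""
    a, b = set(system_sentences), set(model_sentences)
    overlap = len(a & b)
    p = overlap / len(a) if a else 0.0         # Precision, Eq. (4)
    r = overlap / len(b) if b else 0.0         # Recall, Eq. (5)
    f = 2 * p * r / (p + r) if p + r else 0.0  # F-measure, Eq. (6)
    return p, r, f

p, r, f = summary_metrics({"s1", "s2", "s3"}, {"s1", "s2", "s4", "s5"})
# overlap is 2, so P = 2/3 and R = 2/4 = 0.5
```

Note that ROUGE-1 and ROUGE-2, used for Tables I and II, apply the same Precision/Recall/F-measure scheme but over unigram and bigram overlap rather than whole-sentence overlap.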
In the evaluation, it was found that the result of the keyphrase-based method varies from the result claimed by its author [2], as it is evaluated with a different set of data. For the simulation, the proposed method and the existing method [2] were implemented with a server-side scripting language, and the same list of stop words [24] was used for both methods. Based on the evaluation results in Table I and Table II, it can be said that the method shows better performance after incorporating the proposed enhancements: (a) ignoring single-word keyphrases, (b) considering the first sentence specially and (c) counting numerical figures in words and digits for sentence scoring.

V. CONCLUSION AND FUTURE WORKS

A keyphrase-based Bangla text summarization method has been investigated here in depth, and three enhancements have been proposed. An explanation has been given for each proposed enhancement to clarify its importance. The step-by-step progress of performance has been demonstrated in the evaluation section. Moreover, an overview of several automatic Bangla text summarization methods has been given in the
<s>introduction part of this paper. In the overview, it has been indicated with reference that most of the incorporated features in various existing methods of Bangla text summarization were collected from the methods of English text. In this regard, the two introduced features for enhancements (i. considering the first sentence if it contains any title word and ii. counting numerical figure from words) have brought something different. Finally, it has been shown on the basis of ROUGE-I and ROUGE-2 evaluation scores that the system after enhancements is performing better than the existing system. Here, the enhancements have been proposed only for Bangla text summarization. In future, we hope to introduce more features for important sentence identification and adapt 2016 IEEE Region 10 Conference (TENCON) - Proceedings ofthe International Conference 45 the proposed enhancements for both Bangla and English text summarization. Acknowledgments This research work is funded by a Fellowship Scholarship from Information and Communication Technology Division, Government of the People's Republic of Bangladesh. There is also a valuable support from the Central Bank ofBangladesh. References [I] Kunder, M., "The size of the world wide web," online avai lab le at: www.worldwidewebsize.coml? (last accessed May-2016). [2] Kamal Sarkar, " A Keyphrase-Based Approach to Text Summarization for English and Bengali Documents", International Journal of Technology Diffusion (IJTD), vol. 5, issue 2, pp. 28-38, Apri l 2014. [3] Hans P. Luhn, "The Automatie Creation of Literature Abstracts," IBM Journal ofResearch and Development, vol. 2, no. 2, pp. 159-165, 1958. [4] H. P. Edmundson, "New Methods in Automatie Extracting," Journal of the Association for Computing Machinery, vol. 16, no. 2, pp. 264-285, April 1969. [5] Md. 
Majharul Haque, Suraiya Pervin, and Zerina Begum, "Literature Review of Automatic Multiple Documents Text Summarization," International Journal of Innovation and Applied Studies, vol. 3, no. 1, pp. 121-129, May 2013.
[6] Md. Majharul Haque, Suraiya Pervin, and Zerina Begum, "Literature Review of Automatic Single Document Text Summarization Using NLP," International Journal of Innovation and Applied Studies, vol. 3, no. 3, pp. 857-865, July 2013.
[7] K. Sarkar, "Bengali text summarization by sentence extraction," Proceedings of the International Conference on Business and Information Management (ICBIM-2012), NIT Durgapur, pp. 233-245, 2012.
[8] Md. Iftekharul Alam Efat, Mohammad Ibrahim, and Humayun Kayesh, "Automated Bangla Text Summarization by Sentence Scoring and Ranking," International Conference on Informatics, Electronics & Vision (ICIEV), IEEE, pp. 1-5, 2013.
[9] K. Sarkar, "An approach to summarizing Bengali news documents," in Proceedings of the International Conference on Advances in Computing, Communications and Informatics, ACM, pp. 857-862, 2012.
[10] Banglapedia, the National Encyclopedia of Bangladesh, Asiatic Society of Bangladesh, Dhaka, 2003.
[11] Aqil M. Azmi and Suha Al-Thanyyan, "A text summarizer for Arabic," Journal of Computer Speech & Language, Elsevier, vol. 26, issue 4, pp. 260-273, 2012.
[12] Md Tawhidul Islam and Shaikh Mostafa Al Masum, "Bhasa: A Corpus-Based Information Retrieval and Summariser for Bengali Text," in Proceedings of the 7th International Conference on Computer and Information Technology, 2004.
[13] Md. Nizam Uddin and Shakil Akter
<s>Khan, "A Study on Text Summarization Techniques and Implement Few of Them for Bangla Language," I Oth International conference on Computer and Information technology, IEEE, pp. 1-4, 2007. [14] Turney, P. D., " Learning algorithms for keyphrase extraction," Information Retrieval , vol. 2, no. 4, pp. 303- 336, 2000. [15] Wu, Y. F. B. and Li , Q. , "Document keyphrases as subject metadata: Incorporating document key concepts in search results," Information Retrieval , vol. 11 , no. 3, pp. 229- 249, 2008. [16] D ' Avanzo, E. and Magnini , B. , " A keyphrase-based approach to summarization: The LAKE system at DUC-2005," Ln Proceedings of DUC, 2005. [17] Hamzah Noori Fejer and Nazlia Omar, "Automatie Arabic Text Summarization Using C lustering and Keyphrase Extraction," International Conference on Lnformation Technology and Multimedia (LCLMU), Putrajaya, Malaysia, pp. 293-298, November 18 - 20, 2014. [18] Bangla Natural Language Processing Community, "Dataset for Evluating Bangla Text Summarization System," online available at: http://bnlpc.org/research.php (last accessed September-20 16). [19] Rafael Ferreira, Frederico Freitas, Luciano de Souza Cabral , Rafael Dueire Lins, and Rinaldo Lima, "A Four Dimension Graph Model for Automatie Text Summarization," IEEE/WLC/ACM International Conferences on Web Lntelligence (Wl) and Intelligent Agent Technology (lAT), pp. 389-396, 2013. [20] Jingqiang Chen and Hai Zhuge, "Summarization ofscientific documents by detecting common facts in citations," Future Generation Computer Systems, Elsevier, vol. 32, pp. 246- 252, 2014. [21] Dragomir R. Radev, Eduard Hovy, and Kathleen McKeown, " Introduction to the special issue on summarization," Journal of Computational Linguistics, MIT Press, vol. 28, no. 4, pp. 399-408, December 2002. 
[22] Shanmugasundaram Hariharan, Thirunavukarasu Ramkumar, and Rengaramanujam Srinivasan, "Enhanced Graph Based Approach for Multi Document Summarization," The International Arab Joumal of In formation Technology, vol. 10, no. 4, July 2013. [23] ROUGE 2.0 - Java Package for Evaluation ofSummarization Tasks with Updated ROUGE Measures, online available at: http://kavita­ganesan.com/contentlrouge-2.0 (last accessed May-2016). [24] Indian Statistical Institute, " List of stop words for Bengali language," online available at: http://www.isical.ac.in/- fire/data/stopwords_ list_ben.txt (last accessed May-2016). 46 2016 IEEE Region 10 Conference (TENCON) - Proceedings ofthe International Conference</s>
<s>DOI: 10.4018/IJTD.20201001.oa
International Journal of Technology Diffusion, Volume 11, Issue 4, October-December 2020
This article is published as an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and production in any medium, provided the author of the original work and the original publication source are properly credited.

Approaches and Trends of Automatic Bangla Text Summarization: Challenges and Opportunities

Md. Majharul Haque, Bangladesh Bank, Bangladesh
Suraiya Pervin, University of Dhaka, Bangladesh
Anowar Hossain, Brain Station 23, Bangladesh
Zerina Begum, University of Dhaka, Bangladesh

ABSTRACT

As the number of internet users grows, online electronic content is growing proportionally, irrespective of language. A great deal of research on English text summarization has emerged to deal with this gigantic body of online text. Unfortunately, only a few works have been accomplished for Bangla, even though a huge number of people use this language. This article explores the trend of research work on Bangla text summarization. Fourteen approaches are briefly expounded here, addressing their pros and cons along with some scope for improvement. A comparison has also been drawn based on their incorporated features and evaluation results. It is expected that this article will draw the attention of more researchers to the area of Bangla text summarization and give a crystal-clear message about the opportunities to the next generation. An integrated account of all the existing methods is presented here to reveal the importance of Bangla text summarization. 
To the best of the authors' knowledge, this is the first review study on this ground.

KEYWORDS

Bangla, Electronic Content, Internet User, Online Text, Text Summarization

INTRODUCTION

The quantity of information available online increases rapidly with the development of the World Wide Web (Ai, Zheng, & Zhang, 2010), and the problem of information overload is rising proportionally. People are encumbered with an enormous body of electronic content, whereas they expect brief information within the shortest time. Automatic text summarization is therefore needed to process large documents efficiently and extract useful information from them (Ferreira & Souza, 2014). The goal of automatic text summarization is to condense the source text into a shorter version while preserving its information content and overall meaning (Kumar & Salim, 2012; Gupta & Lehal, 2010; Hovy, 2005).

The two main categories of text summarization algorithms are extractive and abstractive (Mani, Klein, House, & Hirschman, 2002). Extraction techniques simply copy significant sentences, whereas abstraction requires deep natural language processing, which has yet to reach a mature stage even for the English language (Ye, Chua, Kan, & Qiu, 2007). The summarization task can also be classified into single-document and multiple-document text summarization (Nenkova & McKeown, 2012). Research first started naively on single documents, but today information on any single topic comes from various sources, for which multiple-document summarization is in demand (Haque, Pervin, & Begum, 2013a).

The state-of-the-art works (Kumar & Salim, 2012; Gupta & Lehal, 2010) focused on text summarization in various languages, which</s>
<s>were started with English text. Automatic English text summarization began more than six decades ago with Luhn (1958), based on term frequency. It was extended by Baxendale (1958), who incorporated sentence position and cue phrases for sentence ranking. Edmundson (1969) included three additional features, namely (1) cue words, (2) title or heading words, and (3) location of sentences, along with term frequency. Various research works are available in the arena of English text summarization (Haque et al., 2013a, 2013b), and it has witnessed the continuous involvement of many proficient researchers. To this day, however, only a few works have been presented for Bangla text summarization (Sarkar, 2012a), and most of their features have been adopted from papers on English text. There is also a significant number of review papers for English text summarization that discuss the various research works (Haque et al., 2013a, 2013b), from which people can understand where they should focus. In these circumstances, a review study on Bangla text summarization is needed so that researchers in this ground can focus on the specific points to improve.

The contribution of this paper is as follows:

1. Draw a survey with a comparative study of the fourteen approaches of Bangla text summarization, with pros and cons as well as the opportunities for improvement;
2. To the best of our knowledge, all the papers on Bangla text summarization, from the beginning of research work on this ground until now, have been included here. It is expected that this survey will attract more researchers to this arena and give them a clear direction about the scope of improvement;
3. The pros and cons of each paper are explored with explicit discussion;
4. 
Ultimately, an analysis has been drawn for some distinguished features (used in several existing methods) to show the performance improvement from each.

Though there are some existing review papers for the Indian languages and for English, to the best of our knowledge this is the first attempt to present a survey specifically for Bangla text summarization.

The rest of the paper is organized as follows: The next section presents the motivation behind Bangla text summarization, and then the challenges are pointed out in brief. Later on, various approaches to Bangla text summarization, along with their prospects, limitations, and scope of improvement, are described. Experimental results for each feature and a comparison of these approaches are also depicted. Finally, the conclusion is drawn at the end.

MOTIVATION FOR BANGLA TEXT SUMMARIZATION

Bangla is the 7th most spoken language in the world among more than 3,500 languages, and it is the native language of 250 million people (Chowdhury, Khalil, & Chowdhury, 2000). It is the mother language of Bangladesh and the second most spoken language in India. Today, much computerized content, such as web sites and word documents, is being developed in Bangla because of the large community of Bangla-speaking people. Moreover, there are several online Bangla newspapers, and more</s>
<s>of them are coming onto the scene. So, e-content in Bangla is dramatically increasing throughout the cyber world. In these circumstances, to cope with this large volume of text, automatic Bangla text summarization would be an invaluable solution.

We believe the following scenario reveals the utmost importance of Bangla text summarization: if 1 of every 10 people reads a Bangla newspaper regularly, then 25 million people out of the 250 million who speak Bangla are doing so. While reading the newspaper, a Bangla text summarization system can have a valuable impact if it condenses all the news to one-third of the total content. We may reasonably assume that each of us spends at least 30 minutes a day reading the newspaper. So, if there is a summary consisting of one-third of the content, it will save at least 10 minutes (one-third of 30 minutes) per day for each person. Under this assumption, for the 25 million people who read a Bangla newspaper, the system will save in total 10 × 25 million = 250 million minutes per day, which is around 475 years. Indeed, nothing more needs to be said about the impact of automatic Bangla text summarization.

It is well known that the structure of a Bangla sentence is much different from English (Chowdhury, Khalil, & Chowdhury, 2000). So, the existing methods of English text summarization can't be applied directly to Bangla.
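The back-of-the-envelope estimate above can be checked in a few lines. All figures here are the article's own assumptions (speaker population, readership ratio, reading time), not measurements:

```python
# Figures from the scenario above (assumptions, not measurements).
speakers = 250_000_000                  # Bangla speakers
readers = speakers // 10                # 1 in 10 reads a newspaper regularly
minutes_saved_per_reader = 30 // 3      # the article counts one-third of 30 minutes as saved

total_minutes_per_day = readers * minutes_saved_per_reader
years_saved_per_day = total_minutes_per_day / (60 * 24 * 365)

print(f"{total_minutes_per_day:,} minutes/day")     # 250,000,000 minutes/day
print(f"about {int(years_saved_per_day)} years")    # about 475 years
```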
Therefore, an efficient Bangla text summarization technique is essential for researchers, international news agencies, and individuals.

CHALLENGES IN RESEARCH WORK FOR BANGLA TEXT

Challenges in research work on the ground of Bangla text are as follows:

• Automatic computerized services for Bangla that would facilitate research work are hardly available;
• A lexical database like WordNet in English (Miller, 1995) does not exist for Bangla;
• There is no database of ontological meanings for Bangla words that can be used programmatically;
• Since only a few research works exist for the Bangla language, there is little direction regarding any problem in this field.

Some other problems with research work for Bangla have also been discussed in (Karim, Kaykobad, & Murshed, 2013; Zaman, 2015). Further, the scope of knowledge sharing is also limited, as there are few researchers on this ground. Despite these difficulties, some approaches have been proposed for Bangla text summarization. These approaches are discussed in the next section.

APPROACHES OF BANGLA TEXT SUMMARIZATION

In this section, attempts at Bangla text summarization are depicted with their strengths and weaknesses. The scope of improvement has also been explored, as follows.

In 2004, Islam and Masum (2014) presented 'Bhasa', a corpus-oriented search engine and summarizer. It performs document indexing and information retrieval based on keywords using the vector space retrieval model ("Vector space retrieval model", 2016) for Unicode Bangla text. Corpus files can be ranked and documents summarized by this method based on the frequent appearance of query terms. The document is treated as one vector and the query terms are treated as different vectors to obtain the similarity between them. A tokenizer has been used here that</s>
<s>can determine different terms, abbreviations, tags, sentence boundaries, headings, and titles. This method has the following modules: 1) a TF-IDF (term frequency-inverse document frequency) calculation module, 2) a keyword search module, and 3) a summary generation module. It has utilized lists of useful, unimportant, and important words while ranking sentences.

Discussion: Based on our observation, this (Islam & Masum, 2014) is the first approach to Bangla text summarization, presented along with a search engine. It has addressed the problem of dangling pronouns and attempted to solve it in the extracted summary sentences, but the solution is claimed without any explanation. It is not even specified which modules or sub-modules of this method are for text summarization and which are for the search engine. Given the TF-IDF calculation and the similarity measurement of each sentence against a given query, it appears that the method is effective as a search engine but not for summarization. Finally, no evaluation has been given to show the application of this method in real life.

A few years later, some techniques from the investigation of English text summarization systems were applied to summarize Bangla text by Uddin and Khan (2007). They proposed a method incorporating some existing methods from English as follows: 1) the location method, 2) the cue method, 3) the title method, 4) term frequency, and 5) numerical data. They took the 40% highest-ranked sentences from the input document as a summary. It was found that the 40% extract produced by this system received a score of 8.4 from a human professional on a scale of 0 to 10.

Discussion: The remarkable point of this paper (Uddin & Khan, 2007) is to show that some features of English text summarization can also be applicable to Bangla. However, this method didn't specify the exact contribution of each feature to sentence ranking.
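The vector-space ranking that Bhasa reportedly uses (document and query as TF-IDF vectors compared by cosine similarity) can be sketched roughly as follows. The tokenization, the smoothed IDF formula, and the English placeholder sentences are our own simplifications for illustration, not details taken from the paper:

```python
import math
from collections import Counter

def rank_by_query(sentences, query):
    """Vector-space retrieval sketch: each sentence and the query become
    TF-IDF vectors, and sentences are ordered by cosine similarity to the query."""
    n = len(sentences)
    docs = [s.lower().split() for s in sentences]
    df = Counter(t for d in docs for t in set(d))
    # Smoothed IDF (our choice, to avoid division by zero for unseen terms).
    idf = {t: math.log((n + 1) / (df[t] + 1)) + 1 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cos(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(query.lower().split())
    return sorted(sentences, key=lambda s: cos(vec(s.lower().split()), q), reverse=True)

sentences = ["the flood damaged many homes",
             "the festival was joyful",
             "flood waters receded slowly"]
print(rank_by_query(sentences, "flood damaged")[0])
# the flood damaged many homes
```

A real system would tokenize Bangla Unicode text and apply stop-word filtering, but the ranking mechanics are the same.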
Moreover, numerical data were considered for sentence scoring, but numerical data can also be presented in words instead of digits, which can be considered for improvement. While evaluating this method, a score for each system-generated summary was calculated, but no comparison with a human-generated summary or any model summary was shown.

Extraction-based Bangla text summarization was again presented by Sarkar (2012a). This is an easy-to-implement approach like the method of Edmundson (1969), with three major steps: (1) preprocessing, (2) sentence ranking, and (3) summary generation. The impact of the thematic term was investigated, and features such as word frequency, sentence length, and sentence position were utilized for sentence ranking. It was claimed that the system performs better than the LEAD baseline method (in the LEAD baseline method, the first n words of an input article are taken as the summary). The average unigram-based recall score was found to be 0.4122.

Discussion: This method (Sarkar, 2012a) is fully based on an almost four-decade-old English text summarization method (Edmundson, 1969), which could be upgraded by incorporating modern natural language processing techniques such as sentence clustering and redundancy removal. Moreover, in the evaluation, only
<s>one model summary has been used for each of the test documents, but more model summaries could be developed for more sophisticated evaluation results (Haque, Pervin, & Begum, 2016).

In 2012, Sarkar (2012b) proposed another method by tuning each feature of his previous method (Sarkar, 2012a) for better summarization performance. This approach has four major steps: (1) preprocessing, (2) extraction of candidate summary sentences, (3) ranking the candidate summary sentences, and (4) summary generation. It is also based on word frequency, sentence position, and sentence length, similar to (Sarkar, 2012a). In this approach, some threshold points have been adjusted for the position of sentences, the TF-IDF values, and the minimum length of sentences. The impact of each feature on sentence ranking has been specified with experiments.

Discussion: This method (Sarkar, 2012b) has surpassed the LEAD baseline method, a baseline that uses term frequency with sentence location, and the method described in (Sarkar, 2012a). All the features have been tuned here for better performance. However, this method is also based on an old English text summarization procedure (Edmundson, 1969). Moreover, the evaluation was conducted against only one model summary, where more than one model summary could be used to obtain a more sophisticated evaluation result (Haque, Pervin, & Begum, 2016). This system could also be upgraded by incorporating modern natural language processing techniques, as discussed for the previous method.

In 2013, Efat, Ibrahim, and Kayesh (2013) introduced a method for Bangla text summarization by sentence scoring and ranking. Their system is divided into three segments: (1) preprocessing the test document, (2) sentence scoring, and (3) generating a summary. Sentence scoring depends on term frequency, position, cue words, and the skeleton of the document.
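Feature-combining sentence scorers of the kind used in the methods above (word frequency plus position plus a minimum-length threshold) can be sketched as below. The stop-word list, weights, and thresholds are illustrative placeholders, not the papers' tuned values, and a real system would use a Bangla stop-word list and sentence splitter:

```python
from collections import Counter

STOP = {"the", "a", "is", "of", "and", "in"}   # placeholder for a Bangla stop-word list

def summarize(text, ratio=0.3):
    """Extractive summary sketch: score sentences by normalized word
    frequency, reward early position, skip very short sentences, and
    emit the top-ranked ones in document order."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = [w for s in sentences for w in s.lower().split() if w not in STOP]
    freq = Counter(words)
    top = max(freq.values())

    def score(i, sent):
        tokens = [w for w in sent.lower().split() if w not in STOP]
        if len(tokens) < 3:                  # minimum-length threshold (illustrative)
            return 0.0
        tf = sum(freq[w] for w in tokens) / (top * len(tokens))  # normalized frequency
        position = 1.0 / (i + 1)             # earlier sentences score higher
        return tf + position

    k = max(1, round(len(sentences) * ratio))
    ranked = sorted(range(len(sentences)), key=lambda i: score(i, sentences[i]), reverse=True)
    return ". ".join(sentences[i] for i in sorted(ranked[:k])) + "."

text = ("Bangla summarization research is growing. Many researchers study Bangla "
        "summarization. The weather was pleasant. Summarization saves reading time.")
print(summarize(text, ratio=0.5))
```

Equal weighting of the two features here is exactly the kind of choice the surveyed papers tune experimentally.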
The skeleton of the document consists of the words in the title and headers.

Discussion: It was stated in their paper that the system performs well when the document depends entirely on a particular theme (Efat, Ibrahim, & Kayesh, 2013). So, the system could be made more user-friendly by eliminating this dependency. The average accuracy of the proposed method was claimed to be 83.57% against human-generated summarization, which is a really good sign, but they didn't give a comparison with any existing method. An experiment was also conducted to measure the contribution of each feature to sentence ranking. Nevertheless, the evaluation result is for a particular theme only, which may not be comparable with other generic text summarization methods.

An abstraction-based Bangla text summarization system was proposed for the first time in 2014 by Kallimani, Srinivasa, and Reddy (2014). They focused on a unified model with attribute-based information extraction rules and class-based templates. They claimed that this system can be adapted to four Indian languages: Kannada, Hindi, Bangla, and Telugu. The document to be summarized is subjected to preprocessing, namely Parts of Speech (POS) tagging and Named Entity Recognition (NER). A TF-IDF rule-based classifier has also been used to categorize the document, which determines the applicable classes. In this system (Kallimani, Srinivasa, & Reddy, 2014), classes are</s>
<s>blueprints, and the identified attributes are set according to these blueprints. Attributes are primary pieces of information, as follows: NAME, PLACE, DOB (date of birth), DOD (date of demise), and AWARDS. The most significant part of this system is the template-based sentence generation, where templates are generic sentence structures with gaps for crucial pieces of information. The extracted attributes are mapped onto the templates to generate summary sentences.

Discussion: It is well known that abstraction-based English text summarization is still at an immature stage (Ye, Chua, Kan & Qiu, 2007), even though research work on English text summarization began in 1958 (Luhn, 1958). In this situation, this method (Kallimani et al., 2014) has reported abstractive summarization for Bangla. The attribute extraction of this method is noticeable, as it is required for informative sentence generation. Nevertheless, the utilized templates always create the same sentence structures, which can be monotonous. It is also questionable whether templates are sufficient for all types of sentences in abstraction. So, there is scope for improvement in generating refined sentences from the identified attributes. According to their evaluation, the system achieved on average 86.24% precision, 78.93% recall, and 81.50% F-measure in an intrinsic evaluation. The evaluation conducted here seems to be for attribute extraction only, because the precision and recall values can be measured by matching the important items extracted by the system against the important items that exist in the text.

Research work on multiple-document text summarization for the Bangla language was accomplished for the first time in 2014 by Uddin, Sultana, and Alam (2014). In this paper, a primary summary is generated first by sentence scoring on the basis of term frequency.
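The attribute-to-template mapping at the heart of the abstractive system above can be illustrated in miniature. The slot names follow the attributes listed (NAME, PLACE, DOB, AWARDS), but the template sentence itself is a hypothetical example of ours, not one of the paper's templates:

```python
# A class template: a generic sentence with gaps for extracted attributes.
# Hypothetical example; the paper's actual template set is not reproduced here.
TEMPLATE = "{NAME}, born in {PLACE} on {DOB}, received {AWARDS}."

def fill_template(template, attributes):
    """Map extracted attributes into the template's slots; fail clearly
    if extraction did not supply a required attribute."""
    try:
        return template.format(**attributes)
    except KeyError as missing:
        raise ValueError(f"attribute {missing} was not extracted") from None

# Attributes as an information-extraction step might produce them.
summary = fill_template(TEMPLATE, {
    "NAME": "Rabindranath Tagore",
    "PLACE": "Calcutta",
    "DOB": "7 May 1861",
    "AWARDS": "the Nobel Prize in Literature",
})
print(summary)
# Rabindranath Tagore, born in Calcutta on 7 May 1861, received the Nobel Prize in Literature.
```

The fixed sentence shape is also what makes the output monotonous, which is exactly the criticism raised in the discussion above.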
It has been reported that words are replaced with their common synonyms before term frequency calculation so that different words with the same meaning are treated as the same word. The cosine similarity of each sentence to every other sentence of the primary summary is calculated to obtain the relevance between them. A graph-based model is then applied with the A* search algorithm (Aker, Cohn, & Gaizauskas, 2010) on the primary summary to create the final gist. It has been claimed that the selection of the starting point of a summary is effective in this method. The performance evaluation was carried out against human-generated summaries. The unigram-based recall score was found to be 56%, and the similarity between manual and system-generated summaries was shown to be 86.60%. The agreement among three human judges was also shown explicitly in their paper, revealing that a single sentence is not judged as significant or worthless equally by all judges.

Discussion: It is noticeable that this is the first work on Bangla multiple-document text summarization. This method selects the most relevant sentence as the starting point of the summary, but no theoretical or practical reason has been stated for this. Even the source of the synonyms for each word before</s>
<s>term-frequency calculation has not been mentioned. After selecting the final summary sentences, there is no direction for ordering sentences from different sources, which is very necessary to make the text lucid and understandable.

Apart from the previous approaches, a keyphrase-based summarization method that performs well for both Bangla and English text was proposed by Sarkar (2014). Keyphrases are extracted as sequences of words from a sentence that contain no punctuation marks or stop words. All the keyphrases are ranked by their frequency, and the sentences are ranked based on position and term frequency. Summary sentences are selected in two phases. In phase 1, candidate summary sentences containing top-ranked keyphrases are chosen. From the chosen sentences, top-ranked sentences are selected whose position is no later than fifth in the document. If phase 1 fails to generate a summary of the user-desired length, phase 2 is activated and selects more summary sentences, based on sentence score, from the rest of the sentences.

Discussion: This (Sarkar, 2014) is the first work on keyphrase-based sentence extraction for Bangla text summarization. It was claimed that keyphrases can reflect the concept of a document more clearly than words. This method sets an upper limit on the length of keyphrases, but no lower limit has been set. So, an experiment could be done to set a lower limit on keyphrase length for better performance. In the evaluation, this method outperforms all the existing methods of Bangla text summarization. However, the same type of method had already been introduced for English (Sarkar, 2013). For sentence scoring, only position and term frequency have been considered, features that were introduced around four decades earlier (Edmundson, 1969).
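The candidate-phrase extraction described above (runs of words broken by stop words and punctuation, ranked by frequency) might look like this in outline. The English stop-word list and the four-word cap are placeholder choices, not the paper's parameters:

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "of", "and", "in", "to", "for"}  # placeholder list

def keyphrases(text, max_len=4):
    """Runs of consecutive non-stop words become candidate keyphrases
    (punctuation also breaks a run; runs longer than max_len are truncated),
    ranked by how often each phrase recurs."""
    counts = Counter()
    run = []
    for token in re.findall(r"\w+|[^\w\s]", text.lower()):
        if token in STOP or not token.isalpha():
            if run:
                counts[" ".join(run[:max_len])] += 1
            run = []
        else:
            run.append(token)
    if run:
        counts[" ".join(run[:max_len])] += 1
    return counts.most_common()

text = ("Automatic Bangla summarization is helpful. "
        "The goal of automatic Bangla summarization is compression.")
print(keyphrases(text)[0])
# ('automatic bangla summarization', 2)
```

The missing lower-length limit noted in the discussion corresponds here to single-word runs being counted as phrases alongside longer ones.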
Today, it can be seen that many significant features have been invented by various researchers for text summarization (Haque, Pervin, & Begum, 2013a, 2013b). So, the performance of this research work could be enhanced by adding more features for sentence scoring.

Sentence clustering-based Bangla news document summarization was published for the first time in 2015 (Haque, Pervin, & Begum, 2015). The authors introduced sentence frequency along with term frequency. Sentences are ranked by taking the algebraic sum of the scores for term frequency, sentence frequency, and numerical figures. Initially, sentence frequency is set to zero (0) for each sentence, and then every sentence is matched against the others. If any sentence is found to contain 60% of the terms of another sentence, the smaller of the two sentences is removed and the frequency of the larger one is increased. Sentences are clustered according to their cosine similarity ratio, and the top-ranked one-third of sentences is selected from each cluster. It was claimed that clustering helps achieve better coverage of information in the summary.

Discussion: This method (Haque, Pervin, & Begum, 2015) introduced sentence clustering for the first time in Bangla text summarization. Sentence frequency is another contribution of this research work, which assists in redundancy elimination and sentence ranking. However, clustering</s>
<s>by cosine similarity is quite conventional, as it works by directly matching terms between two sentences (Yang, Cai, Zhang, & Shi, 2014). This clustering strategy could be updated by utilizing background knowledge from Banglapedia so that two different terms can be matched semantically and lexically. In this updated way, two sentences could fall in the same cluster even though they have low cosine similarity. Again, numerical figures are counted here for sentence ranking, but no strategy has been proposed to identify numerical figures when they appear in word form rather than digits. Moreover, the weight of each sentence-ranking feature could be set experimentally for better summarization performance. In the performance measurement against human-generated summaries, the F-measure score was found to be 0.632, where only 20 documents were considered.

The well-established keyphrase-based method (Sarkar, 2014) was enhanced by Haque et al. (Haque, Pervin, & Begum, 2016) for Bangla news document summarization. Here, the existing method (Sarkar, 2014) is scrutinized and the way to improve it is described. The enhancements incorporate: (i) modifying the keyphrase selection process, (ii) including the first sentence in the summary if it contains any title word, and (iii) counting numerical figures presented in digits and in words for sentence scoring. The evaluation considered 200 documents with 3 summaries for each (in total 3 × 200 = 600 summaries) using the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score (Lin & Hovy, 2003; "ROUGE 2.0", 2016). In the evaluation, the F-measure score was enhanced from 0.5496 to 0.6166 (ROUGE-1) and from 0.5050 to 0.5830 (ROUGE-2).

Discussion: The method (Haque, Pervin, & Begum, 2016) is an enhancement of the existing keyphrase-based method presented in (Sarkar, 2014).
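The ROUGE-N scores reported above reduce to overlapping n-gram counts (Lin & Hovy, 2003). A minimal sketch, using plain whitespace tokenization (the ROUGE 2.0 package adds options such as stemming that this sketch omits):

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """ROUGE-N sketch: recall of reference n-grams, precision of candidate
    n-grams, and their harmonic mean as the F-measure."""
    def ngrams(tokens, size):
        return Counter(tuple(tokens[i:i + size]) for i in range(len(tokens) - size + 1))

    c = ngrams(candidate.lower().split(), n)
    r = ngrams(reference.lower().split(), n)
    overlap = sum((c & r).values())          # clipped n-gram overlap
    recall = overlap / max(sum(r.values()), 1)
    precision = overlap / max(sum(c.values()), 1)
    f = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f

p, r, f = rouge_n("the cat sat on the mat", "the cat lay on the mat")
print(round(f, 3))
```

ROUGE-1 counts unigrams (n=1) and ROUGE-2 bigrams (n=2), matching the two scores quoted in the evaluation above.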
It is remarkable that the new method significantly outperforms the existing methods. It is worth mentioning that this research work utilized the ROUGE package ("ROUGE 2.0", 2016) for the first time for evaluating a Bangla text summarization system. It also counted numerical figures written as words for sentence scoring, which, based on our study, was not considered by any other method. Nevertheless, the textual form of a three-digit numerical figure contains more than one word in the Bangla language. For example, "123" is written as the equivalent of "one hundred twenty-three". In this situation, one numerical figure can be counted twice ("one hundred" and "twenty-three"). No mechanism has been included here to handle this issue. Again, background knowledge about keyphrases (from Banglapedia) could be considered during sentence ranking to upgrade performance. Another significant point that has not been covered is dangling pronoun resolution: if a sentence containing a pronoun is extracted but the sentence containing the noun that the pronoun refers to is not included in the summary, the summary will be ambiguous.

Later on, another method was presented by Haque, Pervin, and Begum (2017a), where pronoun replacement is accomplished for the first time to minimize the</s>
<s>dangling pronouns in the summary. After replacing pronouns with their corresponding nouns, sentences are ranked by considering (i) term frequency, (ii) sentence frequency, (iii) numerical figures, and (iv) title words. Dependency parsing has been introduced here for general and special tagging of unknown words based on the tags of known words. The first sentence is always included in the summary if it contains any title word. The ROUGE evaluation results show that the method outperforms the four latest existing methods (Sarkar, 2012a, 2012b, 2014; Efat et al., 2013).

Discussion: In this method (Haque, Pervin, & Begum, 2017a), pronoun replacement by the corresponding noun is utilized for the first time. Numerical figures are considered here in both digit and word form. The system is rule-based and utilizes a hidden Markov model and a Markov chain model. It is claimed that 3,000 Bangla news documents were analyzed to derive the rules. Along with parts-of-speech tagging, the authors introduced special tags, including acronyms, repeated words, occupations, names of humans and places, etc. Dependency parsing is another notable feature for boosting the tagging procedure. However, most of the rules used here for dependency parsing, pronoun replacement, and special tagging have no grammatical reference, which is the principal concern about this paper. Though they identify the full human name and recall the full name from a part of the name, there can be a high false-positive rate in accurately treating a given word as part of a name. So, it can be stated that there is significant scope for improvement of this method.

A heuristic approach to Bengali text summarization was proposed by Abujar, Hasan, Shahin, and Hossain (2017). They claimed to derive some rules for Bangla text analysis.
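As a toy illustration of the pronoun-replacement idea discussed above, the sketch below substitutes a sentence-initial pronoun with a guessed antecedent. It uses English placeholders and a crude capitalization heuristic; the actual system of Haque et al. (2017a) is a Bangla rule-based tagger with HMM support, which this sketch does not attempt to reproduce:

```python
import re

# English stand-ins for the Bangla third-person pronouns handled by the paper.
PRONOUNS = {"he", "she", "it", "they"}

def replace_dangling_pronouns(sentences):
    """Replace a sentence-initial pronoun with the most recently seen
    proper-noun-like word, so an extracted sentence remains understandable
    on its own. A deliberately naive heuristic for illustration only."""
    last_entity = None
    resolved = []
    for sent in sentences:
        first, _, rest = sent.partition(" ")
        if first.lower() in PRONOUNS and last_entity:
            sent = last_entity + " " + rest
        resolved.append(sent)
        names = re.findall(r"\b[A-Z][a-z]+\b", sent)
        if names:
            last_entity = names[0]   # crude guess: first capitalized word is the subject
    return resolved

print(replace_dangling_pronouns(["Rahim visited Dhaka.", "He enjoyed the trip."]))
# ['Rahim visited Dhaka.', 'Rahim enjoyed the trip.']
```

A heuristic like this fails whenever the antecedent is not the first capitalized word, which is why the paper's rule-based tagging and the false-positive concern above matter.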
Three phases are carried out here: (i) preprocessing with linguistic analysis, (ii) prime sentence (the main leading sentence) identification by word and sentence analysis, and (iii) final processing for the betterment of summary generation. A sentence analogy matrix is utilized, and sentence imitation is considered in order to omit redundant sentences. They calculated the effective rate of words by considering the repeat distance. The first and last sentences of each paragraph are treated as significant for sentence scoring. They claimed that, through the proposed rules and models, the final processing features can generate a better-quality summary from Bangla text. The evaluation was done against human-generated summaries on only three different texts, where the performance appears almost similar to that of humans, without stating the actual performance as a numerical figure.

Discussion: In this paper (Abujar et al., 2017), the relations between words and sentences are revealed, and the prime sentence is selected, with some other steps, to develop a better summarization system. The identification of words' effect rates is deemed significant, but no justification is provided for the range of the effective</s>
<s>rate. Cue words have been considered, though differentiating positive and negative cue words could improve the method. Sentence imitation is handled so that redundancy can be minimized, but a similar feature was already proposed as sentence frequency in (Haque, Pervin, & Begum, 2015), which has not been mentioned. Prime sentence identification can suffer from false positives and deserves a statistical analysis. Furthermore, the method could be tested with a publicly available dataset (“Dataset”, 2016) rather than human-generated summaries only. However, the paper explicitly mentions that many improvements are required for an enhanced summarizer.

Ghosh, Shahariar, and Khan (2018) proposed a rule-based extractive summarization system utilizing 12 features. According to their statement, the major contributions are: (i) applying graph-based sentence scoring features, (ii) introducing some features for the first time, such as aggregate similarity, bushy path, keywords in the sentence, and the presence of inverted commas and special symbols, and (iii) removing redundant information from the summary. The first sentence, based on position, is emphasized for summary generation, and the importance is downgraded for the second, third and so on. Cue words and title words have also been taken into account for important sentence identification.

Discussion: In this method (Ghosh, Shahariar, & Khan, 2018), 12 features have been utilized. It is appreciated that they brought some new features to Bangla text summarization and outperformed all the existing methods in evaluation. The evaluation was performed on a published dataset (Haque, Pervin, & Begum, 2015) with ROUGE evaluation tools, and the comparison covers the 5 latest existing methods. However, all the features have been weighted equally, without any analytical result for each feature individually. Haque et al.
(Haque, Pervin, & Begum, 2017a) showed that the weight of every feature should not be the same. Moreover, there is no partial implementation of the method by which the contribution of each feature could be individually distinguished. They have considered numerical figures presented in digits only and ignored figures presented in words. Furthermore, the significant issue of the dangling pronoun (Haque, Pervin, & Begum, 2017b) has not been addressed in their research work.

Sikder, Hossain, and Robi (2019) presented a method of Bangla text summarization combining mathematical and Bangla grammatical rules. They claim to introduce the first extraction method that includes a grammatical view, which is a path toward abstraction. According to the paper, the main contributions of this research work are sentence relevancy, meaning analysis, and joining and eliminating odd sentences. After preprocessing, sentence ranking is done by considering term frequency, sentence position and sentence similarity, and then the 70% top-ranked sentences are selected as the primary summary. From the primary summary, sentence joining is performed using some Bangla grammatical rules, whereby two or more sentences are transformed into a single sentence. While joining, the related nearest sentences are identified for each</s>
<s>sentence of the primary summary, and the structure of all the related sentences is distinguished. Finally, simplified sentences are generated for each sentence and placed in the appropriate position. The evaluation of this method has been accomplished against human-generated summaries for six different documents.

Discussion: It is appreciated that the method (Sikder, Hossain, & Robi, 2019) considers Bangla grammatical rules along with mathematical rules and introduces a path toward abstraction-based summarization. It defines ways of constructing new sentences from related consecutive sentences. The paper claims the first step in Bangla text abstraction, whereas abstraction-based Bangla text summarization was already presented in 2014 (Kallimani et al., 2014). Though the method has been proposed for Bangla text summarization, the authors claim that it can easily be extended to other languages. However, the claimed extension appears infeasible, because there are significant grammatical differences between Bangla and English (Haque, Pervin, & Begum, 2017a). Moreover, no analytical justification has been provided for the sentence positional score, in which the first sentence will have top importance, downgraded gradually for the following sentences. Sentence joining is an appreciated step, but an analytical review is needed to show its impact. Finally, the evaluation could be done with more documents instead of six documents only.

Some noticeable points about Bangla text summarization methodologies are as follows:

• Most of the utilized features have been taken from existing English text summarization;
• There is no common dataset publicly available for evaluating Bangla text summarization systems.
In this regard, we have generated and uploaded a dataset (“Dataset”, 2016) which can be used by any upcoming method. This dataset has already been used by some research works (“Dataset”, 2016), and we hope it will help researchers evaluate their methods in future;
• A semantic knowledge base can be implemented to help create a proficient Bangla text summarization system;
• A lexical dictionary like WordNet (Miller, 1995) in English can be developed for Bangla text.

EXPERIMENT WITH DIFFERENT FEATURES

The experiment generates a summary of a Bangla text document, taking a single Bangla news document as input. Through the experiment, an impact analysis of different features has been accomplished. In this analysis, the generated summary has been compared with three model summaries for each of 200 news documents, and the reported results are the averages of these comparisons. Precision, Recall and F-measure are used here, as they have long served as important evaluation metrics in the information retrieval field. If ‘A’ denotes the set of sentences retrieved by the summarizer and ‘B’ the set of sentences that are relevant according to the target set, Precision, Recall and F-measure are computed with the following equations:

Precision (P) = |A ∩ B| / |A| (1)

Recall (R) = |A ∩ B| / |B| (2)

F-measure = (2 × P × R) / (P + R)</s>
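For concreteness, these set-based metrics can be sketched as follows. This is an illustrative Python fragment, not the authors' PHP implementation, and it treats the retrieved (A) and relevant (B) summaries simply as sets of sentence identifiers:

```python
def precision_recall_f(retrieved, relevant):
    """Set-based Precision, Recall and F-measure as in Equations (1)-(3):
    P = |A ∩ B| / |A|,  R = |A ∩ B| / |B|,  F = 2PR / (P + R)."""
    a, b = set(retrieved), set(relevant)
    overlap = len(a & b)          # |A ∩ B|
    p = overlap / len(a) if a else 0.0
    r = overlap / len(b) if b else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```

The empty-set guards simply avoid division by zero for degenerate summaries; ROUGE-style evaluation applies the same formulas at the n-gram level rather than the whole-sentence level.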
<s>(3)

To show the impact of features on the improvement of summary generation, the following distinguished features have been selected:

1. Pronoun replacement by the corresponding noun to minimize the number of dangling pronouns;
2. Sentence ranking by:
a. Term frequency-inverse document frequency calculation;
b. Sentence frequency measurement with redundancy elimination;
c. Counting the occurrence of numerical data in digits;
d. Counting the occurrence of numerical data in words;
e. Computing the title word score;
3. Considering the first sentence specially if it contains any title word;
4. Adjustment of the coefficients of all the attributes listed in point (2) above.

The development code has been written in the PHP web-based programming language, and the 363 Bangla stop words have been taken from the Indian Statistical Institute website (“Stop Words”, 2016). The simulation has been done on a laptop with an Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz, 8.00 GB of RAM (7.60 GB usable), running the Windows 7 Professional 64-bit operating system.

The impact has been estimated as Precision, Recall, and F-measure using Equations (1), (2), and (3) respectively, on a publicly available dataset (“Dataset”, 2016) of 200 news documents with three model summaries for each document (600 summaries in total). Each system-generated summary is compared with the three model summaries of its document, and the average Precision, Recall and F-measure are computed with the ROUGE (“ROUGE 2.0”, 2016) automatic evaluation package. It should be mentioned that the features have been implemented as described in the following papers:

1. Sentence Frequency Calculation: If two or more sentences are found with 60% similarity, the longer sentence is kept and the other is deleted, as in (Haque, Pervin, & Begum, 2015);
2.
Replacing Pronoun by the Corresponding Noun: Each pronoun has been replaced by its corresponding noun so that the related noun and pronoun are treated as the same word; this also affects the word frequency calculation. This has been implemented with the rule-based pronoun replacement of (Haque, Pervin, & Begum, 2017b);
3. Counting Numerical Figures From Digits: Numerical figures presented in digits have been counted with the pattern recognition addressed in (Haque, Pervin, & Begum, 2016);
4. Counting Numerical Figures From Words and Digits: Numerical figures presented in both words and digits have been counted with the pattern recognition addressed in (Haque, Pervin, & Begum, 2016);
5. Considering Title Words: The title word score has been used for sentence ranking in several methods (Sarkar, 2012a; Haque et al., 2013a, 2013b; Islam & Masum, 2014; Efat, Ibrahim, & Kayesh, 2013);
6. Coefficient Adjustment: Sentence ranking in several Bangla text summarization methods uses parameters such as numerical figures, sentence frequency, title words and term frequency. The impact of these parameters is not the same (Sarkar, 2012b; Haque, Pervin, & Begum, 2015), so the coefficient of each parameter is adjusted to obtain better</s>
<s>performance;
7. Considering the First Sentence Specially: The first sentence of every document is considered important (Haque, Pervin, & Begum, 2015), which we also adopt.

In Figure 1, all the above features have been added one by one, and the utilized features in each step include all the features of the previous step(s).

Figure 1. Step-by-step improvement of performance from including each feature

After generating the summary using only term frequency, the F-measure score was found to be 0.4124 on the same dataset (“Dataset”, 2016). After incorporating each feature, the F-measure score rose as listed in Table 1.

Table 1. Percentage of performance improvement
SN# | Feature | Improvement
1 | Sentence frequency calculation | 6.71%
2 | Replacing pronoun by corresponding noun | 9.66%
3 | Counting numerical figures from digits | 6.15%
4 | Counting numerical figures from words and digits | 5.11%
5 | Considering title words | 4.96%
6 | Coefficient adjustment | 2.47%
7 | Considering the first sentence specially | 4.47%

COMPARISON AMONG VARIOUS APPROACHES

Fourteen approaches have been discussed in the previous sections with their pros and cons. Table 2 compares these approaches based on their incorporated features and evaluation results.

Table 2.
Comparison among fourteen approaches of Bangla text summarization

Sn# | Researcher(s), Year | Incorporated Distinguished Features | Remarks | Evaluation Result

1 | Islam and Masum (2014) | (i) Term frequency; (ii) Useful word list with important and unimportant word lists | This method has a keyword search module for summary generation, where keywords are selected on the basis of tf-idf and lists of useful, important and unimportant words. | No evaluation has been drawn.

2 | Uddin and Khan (2007) | (i) Location method; (ii) Cue method; (iii) Title method; (iv) Term frequency | This research work has shown that some features of English text summarization can be used for Bangla text. | Scored 8.4 out of 10 by a human professional with 40% extraction.

3 | Sarkar (2012a) | (i) Term frequency; (ii) Sentence length; (iii) Sentence position | The impact of thematic terms has been investigated here, and some statistical measures have been incorporated for sentence scoring. | Unigram-based recall score is 0.4122.

4 | Sarkar (2012b) | (i) Term frequency; (ii) Sentence length; (iii) Sentence position | It was claimed that the features are used here more effectively for news document summarization than in the previous method (Sarkar, 2012a). Some threshold points have been adjusted for sentence position, TF-IDF values and minimum length for selecting summary sentences. | Precision, Recall and F-measure values were claimed as 0.3659, 0.5064 and 0.4169 respectively.

5 | Efat et al.
(2013) | (i) Term frequency; (ii) Sentence position; (iii) Skeleton of document | The system is divided into three segments: preprocessing the test document, sentence scoring, and summarization based on sentence scores. | The average accuracy of this proposed method was found to be 83.57% against human-generated summaries.

6 | Kallimani et al. (2014) | (i) Parts-of-speech tagging; (ii) Named entity recognition; (iii) Utilizing sentence templates; (iv) Abstraction | The input document is categorized to apply specific classes. Some attributes are extracted from</s>
<s>the document and mapped to sentence templates for summary sentence generation. | The system achieved an average of 86.24% precision, 78.93% recall, and 81.5% F-measure in intrinsic evaluation.

7 | Uddin et al. (2014) | (i) Term frequency; (ii) Cosine similarity among sentences; (iii) A* search algorithm (Aker et al., 2010); (iv) Multi-document text summarization | This is a multi-document text summarization system. A primary summary is generated first by sentence scoring. A graph-based model is then applied with the A* search algorithm (Aker et al., 2010) on the primary summary to create the final gist. | Unigram-based recall score was claimed as 56%.

8 | Sarkar (2014) | (i) Keyphrase extraction; (ii) Sentence position; (iii) Term frequency | This is a keyphrase-based sentence extraction method for both Bangla and English documents. Keyphrases are sequences of words without any stop words or punctuation marks. A two-phase approach is applied: the first phase selects sentences on the basis of top-ranked keyphrases and sentence scores; the second phase is activated if the summary cannot be created in the first phase, and selects more sentences based on score. | It was claimed that this method outperforms existing methods (Sarkar, 2012a, 2012b) of Bangla text summarization. The F-measure score was found to be 0.4242 in the evaluation for Bangla text summarization.

9 | Haque et al. (2015) | (i) Term frequency; (ii) Sentence frequency; (iii) Counting numerical figures; (iv) Sentence clustering | In this method, sentences are ranked using term frequency, sentence frequency and numerical figure counting. Initially, the sentence frequency of each sentence is set to zero. Then, if any sentence is found containing 60% of the terms of another, the smaller sentence is removed and the frequency of the larger one is increased. Sentence clustering has been utilized here to carry diversified information into the summary.
After clustering, the top-ranked one third of sentences are selected. | Precision, Recall and F-measure values were claimed as 0.608, 0.664 and 0.632 respectively.

10 | Haque et al. (2016) | (i) Setting a minimum length for keyphrases; (ii) Considering the first sentence; (iii) Counting numerical figures from both digits and words | This method is an enhancement of an existing keyphrase-based method (Sarkar, 2014). The enhancements include: (i) setting the minimum length for keyphrases, (ii) considering the first sentence specially, and (iii) counting numerical figures from words and digits for sentence scoring. | As per the ROUGE-1 score, the F-measure was found to be 0.6166 (versus 0.5495 for the previous method (Sarkar, 2014)) in the evaluation with the same dataset mentioned in (Haque, Pervin, & Begum, 2016).

11 | Haque et al. (2017) | (i) Term frequency; (ii) Sentence frequency; (iii) Counting numerical figures in both numeric and word forms; (iv) Title words; (v) General and special tagging; (vi) Dependency parsing; (vii) Replacement of pronouns | In this method, the two significant contributions are (i) pronoun replacement to solve the issue of dangling pronouns and (ii) dependency parsing to enhance the tagging procedure. After replacing pronouns by their corresponding nouns, sentences are ranked using special tagging so that important</s>
<s>sentences rank higher. Moreover, the first sentence is always included in the summary if it contains any title word. | F-measure scores were found as 0.6003 and 0.5708 for ROUGE-1 and ROUGE-2 respectively, evaluated with a publicly available dataset (“Dataset”, 2016). Comparison with four existing methods (Sarkar, 2012a, 2012b; Efat et al., 2013; Sarkar, 2014) showed that this method outperformed the others.

12 | Abujar et al. (2017) | (i) Term frequency; (ii) Word and sentence analysis; (iii) Numerical value identification; (iv) Sentence position and length; (v) Title words; (vi) Cue words; (vii) Word effect rate; (viii) Prime sentence identification; (ix) Aggregate similarity measurement; (x) Detection of sentence imitation | In this method the authors claim to derive some rules of Bangla text analysis. After preprocessing with linguistic analysis, the prime sentence is identified by word and sentence analysis. The effect rate of words is calculated by considering repeated distance, and aggregate similarity is measured to eliminate redundancy. Moreover, sentence position, length, cue words, numerical values, term frequency and title words are considered for sentence ranking. | The evaluation has been done against human-generated summaries for only three different texts, where the performance appears almost similar to that of humans, without mentioning the actual performance in numerical figures. No comparison has been shown against any existing method.

13 | Ghosh et al. (2018) | (i) Aggregate similarity; (ii) Bushy path; (iii) Term frequency-inverse sentence frequency; (iv) Keywords; (v) Sentence position; (vi) Title words; (vii) Cue words; (viii) Numerical values; (ix) Inverted commas; (x) Special symbols; (xi) Date format; (xii) URL/Email address | This is a rule-based extractive text summarization method which utilizes 12 features for sentence ranking.
Major contributions of this method include: (i) applying graph-based sentence scoring features, (ii) introducing some features for the first time, such as aggregate similarity, bushy path, keywords in the sentence, and the presence of inverted commas and special symbols, and (iii) removing redundant information from the summary. | An F-measure score of 0.6276 was found in the ROUGE-1 evaluation with a publicly available dataset (“Dataset”, 2016). Comparison with five existing methods (Sarkar, 2012a, 2012b; Efat et al., 2013; Sarkar, 2014; Haque et al., 2017) showed that this method outperformed the others.

CONCLUSION

In this paper, fourteen approaches to Bangla text summarization have been described, of which thirteen target single documents and one targets multiple documents. Two of the methods are abstraction-based. The trend of research work in Bangla text summarization has been explored in brief, and a comparison table has been drawn to view the similarities and differences among the methods. The strengths and weaknesses of all the methods have been discussed along with the scope for improvement. It has been indicated, with references, that most of the features incorporated in the various existing methods of Bangla text summarization were collected from methods for English text, but applied from a different angle, because the structure of Bangla is</s>
<s>different from English. After all, this small number of efforts raises hope for a more sophisticated methodology soon. It is also expected that this review paper will help the next generation to understand the foundation of research work in Bangla text summarization and to get direction for future work. Our future work is an impact analysis of different features to identify their degree of usefulness for different patterns of documents in Bangla text summarization.

ACKNOWLEDGMENT

This research work is funded by a Fellowship Scholarship from the Information and Communication Technology Division, Government of the People's Republic of Bangladesh. There was also valuable support from the Central Bank of Bangladesh.

Table 2. Continued

14 | Sikder et al. (2019) | (i) Term frequency; (ii) Sentence relevancy; (iii) Position; (iv) Bangla grammatical rules; (v) Primary summary generation; (vi) Sentence simplification rules; (vii) Sentence joining and linking; (viii) Abstraction | This method has considered Bangla grammatical rules along with mathematical rules and introduced a path toward abstraction-based summarization. The main contributions of this research work are sentence relevancy, meaning analysis, and joining and eliminating odd sentences. After sentence ranking, the 70% top-ranked sentences are selected as the primary summary; these sentences are then joined, redundancy is eliminated, and sentences are simplified to generate the final summary. | The evaluation has been accomplished with human-generated summaries for six documents only. The evaluation result has been depicted in a graph without mentioning the actual numerical performance values.

REFERENCES

Abujar, S., Hasan, M., Shahin, M. S. I., & Hossain, S. A. (2017). A Heuristic Approach of Text Summarization for Bengali Documentation.
8th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 1-8. doi:10.1109/ICCCNT.2017.8204166

Ai, D., Zheng, Y., & Zhang, D. (2010). Automatic text summarization based on latent semantic indexing. Journal of Artificial Life and Robotics, 15(1), 25–29. doi:10.1007/s10015-010-0759-x

Aker, A., Cohn, T., & Gaizauskas, R. (2010). Multi-document summarization using A* search and discriminative training. Conference on Empirical Methods in Natural Language Processing, 482-491.

Banglapedia. (2003). National Encyclopedia of Bangladesh. Asiatic Society of Bangladesh.

Baxendale, P. B. (1958). Machine-made Index for Technical Literature - An Experiment. IBM Journal of Research and Development, 2(4), 354–361. doi:10.1147/rd.24.0354

Chowdhury, M., Khalil, I., & Chowdhury, M. H. (2000). Bangla Vasar Byakaran. Ideal Publication.

Dataset for evaluating Bangla text summarization system. (n.d.). Bangla Natural Language Processing Community. Retrieved January 2016 from http://bnlpc.org/research.php

Dhanya, P. M., & Jathavedan, M. (2013). Comparative Study of Text Summarization in Indian Languages. International Journal of Computer Applications, 75(6), 17-21.

Edmundson, H. P. (1969). New Methods in Automatic Extracting. Journal of the Association for Computing Machinery, 16(2), 264–285. doi:10.1145/321510.321519

Efat, M. I. A., Ibrahim, M., & Kayesh, H. (2013). Automated Bangla Text Summarization by Sentence Scoring and Ranking. In International Conference on Informatics, Electronics & Vision (ICIEV). IEEE. doi:10.1109/ICIEV.2013.6572686

Ferreira, R., & Souza, L. D. (2014). A multi-document summarization system based on statistics and linguistic treatment. Expert Systems with Applications, 41(13), 5780–5787. doi:10.1016/j.eswa.2014.03.023

Ghosh, P., Shahariar, R., & Khan, M. (2018). A Rule Based Extractive</s>
<s>Text Summarization Technique for Bangla News Documents. International Journal of Modern Education and Computer Science, 10(12), 44–53. doi:10.5815/ijmecs.2018.12.06

Gupta, V. (2013). A Survey of Text Summarizers for Indian Languages and Comparison of their Performance. Journal of Emerging Technologies in Web Intelligence, 5(4), 361–366. doi:10.4304/jetwi.5.4.361-366

Gupta, V., & Lehal, G. S. (2010). A Survey of Text Summarization Extractive Techniques. Journal of Emerging Technologies in Web Intelligence, 2(3), 258–268. doi:10.4304/jetwi.2.3.258-268

Haque, M. M., Pervin, S., & Begum, Z. (2013a). Literature Review of Automatic Multiple Documents Text Summarization. International Journal of Innovation and Applied Studies, 3(1), 121–129.

Haque, M. M., Pervin, S., & Begum, Z. (2013b). Literature Review of Automatic Single Document Text Summarization Using NLP. International Journal of Innovation and Applied Studies, 3(3), 857–865.

Haque, M. M., Pervin, S., & Begum, Z. (2015). Automatic Bengali news documents summarization by introducing sentence frequency and clustering. In 18th International Conference on Computer and Information Technology (ICCIT) (pp. 156–160). doi:10.1109/ICCITechn.2015.7488060

Haque, M. M., Pervin, S., & Begum, Z. (2016). Enhancement of Keyphrase-Based Approach of Automatic Bangla Text Summarization. TENCON Conference. doi:10.1109/TENCON.2016.7847955

Haque, M. M., Pervin, S., & Begum, Z. (2017a). An Innovative Approach of Bangla Text Summarization by Introducing Pronoun Replacement and Improved Sentence Ranking. Journal of Information Processing Systems, 13(4), 752–777. doi:10.3745/JIPS.04.0038

Haque, M. M., Pervin, S., & Begum, Z. (2017b). Rule Based Replacement of Pronoun by Corresponding Noun for Bangla News Documents. International Journal of Technology Diffusion, 8(2), 26–42.
doi:10.4018/IJTD.2017040102

Hovy, E. (2005). Automated Text Summarization. In R. Mitkov (Ed.), The Oxford Handbook of Computational Linguistics (pp. 583–598). Oxford University Press.

Islam, M. T., & Masum, S. M. A. (2004). Bhasa: A Corpus-Based Information Retrieval and Summariser for Bengali Text. Proceedings of the 7th International Conference on Computer and Information Technology.

Kallimani, J. S., Srinivasa, K. G., & Reddy, E. B. (2014). A Comprehensive Analysis of Guided Abstractive Text Summarization. International Journal of Computer Science Issues, 11(6).

Karim, M. A., Kaykobad, M., & Murshed, M. (2013). Technical Challenges and Design Issues in Bangla Language Processing. IGI Global. doi:10.4018/978-1-4666-3970-6

Kumar, Y. J., & Salim, N. (2012). Automatic Multi Document Summarization Approaches. Journal of Computational Science, 8(1), 133–140. doi:10.3844/jcssp.2012.133.140

Lin, C., & Hovy, E. (2003). Automatic Evaluation of Summaries Using N-gram Co-Occurrence Statistics. Proceedings of the Human Language Technology Conference 2003 (HLT-NAACL-2003). doi:10.3115/1073445.1073465

Luhn, H. P. (1958). The Automatic Creation of Literature Abstracts. IBM Journal of Research and Development, 2(2), 159–165.
doi:10.1147/rd.22.0159

Mani, I., Klein, G., House, D., Hirschman, L., Firmin, T., & Sundheim, B. (2002). SUMMAC: A text summarization evaluation. Natural Language Engineering, 8(1), 43–68. doi:10.1017/S1351324901002741

Miller, G. (1995). WordNet: A Lexical Database for English. Communications of the ACM, 38(11), 39–41. doi:10.1145/219717.219748

Nenkova, A., & McKeown, K. (2012). A survey of text summarization techniques. In Mining text data (pp. 43–76). Springer. doi:10.1007/978-1-4614-3223-4_3

ROUGE 2.0. (n.d.). Java Package for Evaluation of Summarization Tasks with Updated ROUGE Measures. Retrieved from http://kavita-ganesan.com/content/rouge-2.0

Saggion, H., & Poibeau, T. (2013). Automatic Text Summarization: Past, Present and Future. In Multi-source, Multilingual Information Extraction and Summarization (pp. 3–21). Berlin: Springer-Verlag. doi:10.1007/978-3-642-28569-1_1

Sarkar, K. (2012a). Bengali text summarization by sentence extraction.</s>
<s>In Proceedings of the International Conference on Business and Information Management (pp. 233-245). NIT Durgapur.

Sarkar, K. (2012b). An approach to summarizing Bengali news documents. Proceedings of the International Conference on Advances in Computing, Communications and Informatics, 857-862. doi:10.1145/2345396.2345535

Sarkar, K. (2013). Automatic Single Document Text Summarization Using Key Concepts in Documents. Journal of Information Processing Systems, 9(4), 602–620. doi:10.3745/JIPS.2013.9.4.602

Sarkar, K. (2014). A Keyphrase-Based Approach to Text Summarization for English and Bengali Documents. International Journal of Technology Diffusion, 5(2), 28–38. doi:10.4018/ijtd.2014040103

Second most spoken languages around the world. (n.d.). Retrieved August 20, 2015, from http://graduate.olivet.edu/news-events/news/second-most-spoken-languages-around-world

Sikder, R., Hossain, M. M., & Robi, R. H. (2019). Automatic Text Summarization For Bengali Language Including Grammatical Analysis. International Journal of Scientific & Technology Research, 8(6), 288–292.

Uddin, M. A., Sultana, K. Z., & Alam, M. A. (2014). A Multi-Document Text Summarization for Bengali Language. In The 9th International Forum on Strategic Technology (IFOST). Chittagong University of Engineering & Technology (CUET).

Uddin, M. N., & Khan, S. A. (2007). A Study on Text Summarization Techniques and Implement Few of Them for Bangla Language. 10th International Conference on Computer and Information Technology, 1-4. doi:10.1109/ICCITECHN.2007.4579374

Vector space retrieval model. (n.d.).
Retrieved May 10, 2016, from http://www.ccs.neu.edu/home/jaa/CSG339.06F/Lectures/vector.pdf

Md. Majharul Haque completed his BSc (Hons) in CSE and MS in IT, and achieved a doctorate from the CSE Department of the University of Dhaka. He now works as Assistant Systems Analyst (Deputy Director) in the Central Bank of Bangladesh.

Anowar Hossain is a passionate software professional with a drive to accept new challenges, always eager to learn new mechanisms to make his contributions more meaningful.

Stop Words. (2016). List of stop words for Bengali language. Indian Statistical Institute. Retrieved from http://www.isical.ac.in

Yang, L., Cai, X., Zhang, Y., & Shi, P. (2014). Enhancing sentence-level clustering with ranking-based clustering framework for theme-based summarization. Information Sciences, 37-50.

Ye, S., Chua, T., Kan, M., & Qiu, L. (2007). Document concept lattice for text understanding and summarization.
Journal of Information Process Management, Elsevier, 43(6), 1643–1662. doi:10.1016/j.ipm.2007.03.010Zaman, N. U. (2008). Big Picture Seminer Series. University of Rochester. Retrieved December 29, 2015, from https://www.cs.rochester.edu/u/naushad/survey/BigPicture-URCS-NZ-Bangla.pdfhttp://www.isical.ac.inhttp://www.isical.ac.inhttp://dx.doi.org/10.1016/j.ipm.2007.03.010https://www.cs.rochester.edu/u/naushad/survey/BigPicture-URCS-NZ-Bangla.pdf</s>
DOI: 10.4018/IJTD.20201001.oa

International Journal of Technology Diffusion
Volume 11 • Issue 4 • October-December 2020

This article is published as an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and production in any medium, provided the author of the original work and original publication source are properly credited.

Approaches and Trends of Automatic Bangla Text Summarization: Challenges and Opportunities

Md. Majharul Haque, Bangladesh Bank, Bangladesh
Suraiya Pervin, University of Dhaka, Bangladesh
Anowar Hossain, Brain Station 23, Bangladesh
Zerina Begum, University of Dhaka, Bangladesh

ABSTRACT

As the number of internet users grows, online electronic content is growing proportionally, irrespective of language. A great deal of research on English text summarization has come to light to deal with this gigantic body of online text. Unfortunately, only a few works have been accomplished for Bangla, even though a huge number of people use this language. This article explores the trend of research work on Bangla text summarization. Fourteen approaches are briefly expounded here, addressing their pros and cons along with some scope for improvement. A comparison has also been drawn based on their incorporated features and evaluation results. It is expected that this article will draw the attention of more researchers to the area of Bangla text summarization and give a crystal-clear message about the opportunities to the next generation. An integrated message about all the existing methods is presented here to reveal the importance of Bangla text summarization.
To the best of the authors' knowledge, this is the first review study in this area.

KEYWORDS

Bangla, Electronic Content, Internet User, Online Text, Text Summarization

INTRODUCTION

The quantity of information available online increases rapidly with the development of the World Wide Web (Ai, Zheng, & Zhang, 2010), and the problem of information overload is rising proportionally. People are encumbered with an enormous body of electronic content, whereas they expect brief information within the shortest time. Automatic text summarization is therefore needed to process large documents efficiently and to extract useful information from them (Ferreira & Souza, 2014). The goal of automatic text summarization is to condense the source text into a shorter version while preserving its information content and overall meaning (Kumar & Salim, 2012; Gupta & Lehal, 2010; Hovy, 2005).

The two main categories of text summarization algorithms are extractive and abstractive (Mani, Klein, House, & Hirschman, 2002). Extraction techniques simply copy significant sentences, whereas abstraction requires deep natural language processing, which has yet to reach a mature stage even for English (Ye, Chua, Kan, & Qiu, 2007). The summarization task can also be classified as single-document or multi-document text summarization (Nenkova & McKeown, 2012). Research first started naively on single documents, but today information on any single topic is found from various sources, for which multi-document summarization is in demand (Haque, Pervin, & Begum, 2013a).

The state-of-the-art works (Kumar & Salim, 2012; Gupta & Lehal, 2010) focused on text summarization in various languages, which
were started with English text. Automatic English text summarization began more than six decades ago with Luhn (1958), based on term frequency. It was extended by Baxendale (1958) by incorporating the position of sentences and cue phrases for sentence ranking. Edmundson (1969) included three additional features, namely (1) cue words, (2) title or heading words, and (3) location of sentences, along with term frequency. Various research works are available in the arena of English text summarization (Haque et al., 2013a, 2013b), and it has witnessed the continuous involvement of many proficient researchers. However, to this day, only a few works have been presented for Bangla text summarization (Sarkar, 2012a), and most of their features have been adopted from papers on English text. There is also a significant number of review papers for English text summarization discussing the various research works (Haque et al., 2013a, 2013b), from which people can understand where they should focus. In these circumstances, a review study on Bangla text summarization is needed so that researchers in this area can focus on the specific points to improve.

The contributions of this paper are as follows:

1. A survey with a comparative study of the fourteen approaches to Bangla text summarization is drawn, with pros and cons as well as opportunities for improvement;
2. To the best of our knowledge, all the papers on Bangla text summarization, from the beginning of research work in this area to now, have been included here. It is expected that this survey will attract more researchers to this arena and give them a clear direction about the scope of improvement;
3. The pros and cons of each paper are explored with explicit discussion;
4. 
Ultimately, an analysis is drawn for some distinguished features (used in several existing methods) to show the performance improvement due to each.

Though there are some existing review papers for the Indian languages and for English, to the best of our knowledge this is the first attempt at a survey specifically for Bangla text summarization.

The rest of the paper is organized as follows: The next section presents the motivation behind Bangla text summarization, and then the challenges are outlined in brief. Later on, various approaches to Bangla text summarization, along with some prospects, limitations, and scope for improvement, are described. Experimental results for each feature and a comparison of these approaches are also depicted. Finally, the conclusion is drawn at the end.

MOTIVATION ON BANGLA TEXT SUMMARIZATION

Bangla is the 7th most spoken language in the world among more than 3,500 languages, and it is the native language of 250 million people (Chowdhury, Khalil, & Chowdhury, 2000). It is the mother language of Bangladesh and the second most spoken language in India. Today, much computerized content such as web sites, word documents, etc. has been developed in Bangla because of the large community of Bangla-speaking people. Moreover, there are several online Bangla newspapers, and more
of them are coming to the scene. So, e-content in Bangla is dramatically increasing throughout the cyber world. In these circumstances, to cope with this large volume of text, automatic Bangla text summarization would be an invaluable solution.

We believe the following scenario reveals the utmost importance of Bangla text summarization: if 1 of every 10 people reads a Bangla newspaper regularly, then 25 million of the 250 million people who speak Bangla are doing so. While reading the newspaper, a Bangla text summarization system can have a valuable impact if it condenses all the news to one-third of the total content. We may easily assume that each of us spends at least 30 minutes daily reading the newspaper. So, if there were a summary consisting of one-third of the content, it would save at least 10 minutes (one-third of 30 minutes) per day for each person. Under this assumption, for the 25 million people who read a Bangla newspaper, the system would save in total 10 × 25 million = 250 million minutes per day, which is around 475 years. Indeed, nothing more needs to be said about the impact of automatic Bangla text summarization.

It is well known that the sentence structure of Bangla is much different from that of English (Chowdhury, Khalil, & Chowdhury, 2000), so the existing methods of English text summarization cannot be applied directly to Bangla.
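The back-of-the-envelope arithmetic above can be verified in a few lines of Python (the 1-in-10 readership ratio and the 30-minute reading time are the article's own assumptions):

```python
# Sanity check of the time-saving estimate in the text.
SPEAKERS = 250_000_000        # Bangla speakers (figure from the article)
READER_RATIO = 1 / 10         # assumed: 1 in 10 reads a newspaper daily
SAVED_MINUTES = 30 / 3        # one-third of a 30-minute reading session

readers = SPEAKERS * READER_RATIO                  # 25 million readers
minutes_per_day = readers * SAVED_MINUTES          # 250 million minutes
years_per_day = minutes_per_day / (60 * 24 * 365)  # minutes -> years

print(int(years_per_day))  # around 475 years of reading time saved per day
```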
Therefore, an efficient Bangla text summarization technique is essential for researchers, international news agencies, and individuals.

CHALLENGES IN RESEARCH WORK FOR BANGLA TEXT

The challenges of research work on Bangla text are as follows:

• Automatic computerized services that would facilitate research work are hardly available for Bangla;
• A lexical database like WordNet in English (Miller, 1995) does not exist for Bangla;
• There is no database of ontological meanings for Bangla words that can be used programmatically;
• Since few research works exist for the Bangla language, there is little prior direction regarding any problem in this field.

Some other problems with research work on Bangla have also been discussed in (Karim, Kaykobad, & Murshed, 2013; Zaman, 2015). Further, the scope for knowledge sharing is also limited, as there are few researchers in this area. Despite these difficulties, some approaches have been proposed for Bangla text summarization. These approaches are discussed in the next section.

APPROACHES OF BANGLA TEXT SUMMARIZATION

In this section, the attempts at Bangla text summarization are described with their strengths and weaknesses, and the scope for improvement is also explored.

In 2004, Islam and Masum (2014) presented ‘Bhasa’, a corpus-oriented search engine and summarizer. It performs document indexing and information retrieval based on keywords using the vector space retrieval model (“Vector space retrieval model”, 2016) for Unicode Bangla text. Corpus files can be ranked and documents can be summarized by this method based on the frequent appearance of query terms. The document is treated here as one vector and the query terms are treated as other vectors to compute the similarity between them. A tokenizer has been used here that
can determine different terms, abbreviations, tags, sentence boundaries, headings, and titles. This method has the following modules: 1) a TF-IDF (term frequency-inverse document frequency) calculation module, 2) a keyword search module, and 3) a summary generation module. It has utilized lists of useful, unimportant, and important words while ranking sentences.

Discussion: Based on our observation, this (Islam & Masum, 2014) is the first approach to Bangla text summarization, presented along with a search engine. It has addressed the problem of dangling pronouns and attempted to solve it in the extracted summary sentences, but the solution is claimed without any explanation. It is not even specified which modules/sub-modules of this method belong to the text summarizer and which to the search engine. Given the TF-IDF calculation and the similarity measurement of each sentence against a given query, it appears that the method is effective as a search engine but not for summarization. Finally, no evaluation has been given to show the application of this method in real life.

A few years later, some techniques from the investigation of English text summarization systems were applied to summarize Bangla text by Uddin and Khan (2007). They proposed a method incorporating the following existing methods for English: 1) the location method, 2) the cue method, 3) the title method, 4) term frequency, and 5) numerical data. They took the 40% highest-ranked sentences from the input document as a summary. It was found that the 40% extract produced by this system scored 8.4 from a human professional on a scale of 0 to 10.

Discussion: The remarkable point of this paper (Uddin & Khan, 2007) is to show that some features of English text summarization can also be applicable to Bangla. However, the method does not specify the exact contribution of each feature to sentence ranking.
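This style of feature-combination scoring can be sketched as follows; the weights and the exact scoring functions are illustrative assumptions, since the surveyed papers do not publish them:

```python
from collections import Counter

def rank_sentences(sentences, title_words, cue_words, top_ratio=0.4):
    """Score sentences by simple surface features and keep the top fraction.
    The individual feature scores and weights here are illustrative only."""
    # document-wide term frequency
    tf = Counter(w for s in sentences for w in s.lower().split())

    def score(idx, sent):
        words = sent.lower().split()
        s = sum(tf[w] for w in words) / max(len(words), 1)  # term frequency
        s += 2.0 if idx == 0 else 1.0 / (idx + 1)           # location method
        s += sum(w in title_words for w in words)           # title method
        s += sum(w in cue_words for w in words)             # cue method
        return s

    ranked = sorted(range(len(sentences)),
                    key=lambda i: -score(i, sentences[i]))
    keep = sorted(ranked[: max(1, int(len(sentences) * top_ratio))])
    return [sentences[i] for i in keep]  # original order preserved
```

The 40% cut-off mirrors the extract size reported above; a numerical-data feature could be added to the score in the same additive way.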
Moreover, numerical data has been considered for sentence scoring, but numerical data can also be presented in words instead of digits, which could be considered for improvement. While evaluating this method, a score was calculated for each system-generated summary, but no comparison with a human-generated summary or any model summary was shown.

Extraction-based Bangla text summarization was again presented by Sarkar (2012a). This is an easy-to-implement approach like the method of Edmundson (1969), with three major steps: (1) preprocessing, (2) sentence ranking, and (3) summary generation. The impact of thematic terms has been investigated, and features like word frequency, sentence length, and sentence position have been utilized for sentence ranking. It was claimed that the system performs better than the LEAD baseline method (in which the first n words of an input article are taken as the summary). The average unigram-based recall score was found to be 0.4122.

Discussion: This method (Sarkar, 2012a) is fully based on an English text summarization method that is almost four decades old (Edmundson, 1969), and it could be upgraded by incorporating modern natural language processing techniques like sentence clustering, redundancy removal, etc. Moreover, in the evaluation, only
one model summary has been used for each test document, but more model summaries could be developed for more sophisticated evaluation results (Haque, Pervin, & Begum, 2016).

In 2012, Sarkar (2012b) proposed another method by tuning each feature of his previous method (Sarkar, 2012a) for better summarization performance. This approach has four major steps: (1) preprocessing, (2) extraction of candidate summary sentences, (3) ranking the candidate summary sentences, and (4) summary generation. It is also based on word frequency, sentence position, and sentence length, similar to (Sarkar, 2012a). In this approach, some threshold points have been adjusted for the position of sentences, the TF*IDF values, and the minimum length of sentences. The impact of each feature on sentence ranking has been specified with experiments.

Discussion: This method (Sarkar, 2012b) has surpassed the LEAD baseline method, a baseline that uses term frequency with sentence location, and the method described in (Sarkar, 2012a). All the features have been tuned here for better performance. However, this method is also based on an old English text summarization procedure (Edmundson, 1969). Moreover, the evaluation has been conducted against only one model summary, where more than one model summary could be used to obtain a more sophisticated evaluation result (Haque, Pervin, & Begum, 2016). This system could also be upgraded by incorporating modern natural language processing techniques, as discussed for the previous method.

In 2013, Efat, Ibrahim, and Kayesh (2013) introduced a method for Bangla text summarization by sentence scoring and ranking. Their system is divided into three segments: (1) pre-processing the test document, (2) sentence scoring, and (3) generating a summary. Sentence scoring depends on
The skeleton of the document consists of the words in the title and headers.

Discussion: It was stated in their paper that the system performs well when the document depends entirely on a particular theme (Efat, Ibrahim, & Kayesh, 2013), so the system could be made more user-friendly by eliminating this dependency. The average accuracy of the proposed method is claimed to be 83.57% against human-generated summaries, which is really a good sign, but they did not give a comparison with any existing method. An experiment was also conducted to measure the contribution of each feature to sentence ranking. Nevertheless, the evaluation result is for a particular theme only, which may not be comparable with other generic text summarization methods.

An abstraction-based Bangla text summarization system was proposed for the first time in 2014 by Kallimani, Srinivasa, and Reddy (2014). They focused on a unified model with attribute-based Information Extraction rules and class-based templates. They claimed the system can be adapted to four Indian languages: Kannada, Hindi, Bangla, and Telugu. The document to be summarized is subjected to preprocessing, namely Parts of Speech (POS) tagging and Named Entity Recognition (NER). A TF/IDF rule-based classifier has also been used to categorize the document, which determines the applicable classes. In this system (Kallimani, Srinivasa, & Reddy, 2014), classes are
blueprints, and the identified attributes are set according to the blueprint. Attributes are primary pieces of information, as follows: NAME, PLACE, DOB (date of birth), DOD (date of demise), and AWARDS. The most significant part of this system is the template-based sentence generation, where templates are generic sentence structures with gaps for crucial pieces of information. The extracted attributes are mapped into the templates to generate summary sentences.

Discussion: It is well known that abstraction-based English text summarization is still at an immature stage (Ye, Chua, Kan & Qiu, 2007), even though research work on English text summarization began in 1958 (Luhn, 1958). In this situation, this method (Kallimani et al., 2014) has been reported for Bangla abstractive summarization. The attribute extraction of this method is notable, as it is required for informative sentence generation. Nevertheless, the utilized templates always create the same sentence structures, which can be monotonous. It is also questionable whether templates are sufficient for all types of sentences in abstraction, so there is scope for improvement in generating refined sentences from the identified attributes. According to their evaluation, the system achieved an average of 86.24% precision, 78.93% recall, and 81.50% F-measure in an intrinsic evaluation. The evaluation conducted here seems to cover attribute extraction only, because the precision and recall values can be measured by matching the important items extracted by the system against the important items that exist in the text.

Research on multi-document text summarization for the Bangla language was accomplished for the first time in 2014 by Uddin, Sultana, and Alam (2014). In this paper, a primary summary is first generated by sentence scoring on the basis of term frequency.
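The template-filling step of the abstractive system above can be illustrated with a toy sketch. The attribute names (NAME, PLACE, DOB, DOD, AWARDS) are the ones reported in the paper, but the template wording and the filled-in example are invented here purely for illustration:

```python
# Toy template-based sentence generation in the style described above.
# The template text itself is a hypothetical English stand-in.
TEMPLATE = "{NAME} was born in {PLACE} on {DOB} and received the {AWARDS}."

def generate_summary_sentence(attributes):
    """Map extracted attributes into a fixed sentence blueprint."""
    return TEMPLATE.format(**attributes)

sentence = generate_summary_sentence({
    "NAME": "Rabindranath Tagore",
    "PLACE": "Calcutta",
    "DOB": "7 May 1861",
    "AWARDS": "Nobel Prize in Literature",
})
print(sentence)
```

Because every document of a class is forced through the same blueprint, the output always has an identical shape, which is exactly the monotony noted in the discussion.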
It has been reported that words are replaced with their common synonym before term frequency calculation so that different words with the same meaning are treated as the same word. The cosine similarity of each sentence to every other sentence of the primary summary is calculated to obtain the relevance between them. A graph-based model is then applied with the A* search algorithm (Aker, Cohn, & Gaizauskas, 2010) on the primary summary to create the final gist. It has been claimed that this method's selection of the starting point of a summary is effective. The performance evaluation was completed against human-generated summaries: the unigram-based recall score was found to be 56%, and the similarity between the manual and system-generated summaries was shown to be 86.60%. The agreement among three human judges is also shown clearly in their paper, revealing that a single sentence is not rated as significant or worthless equally by all judges.

Discussion: It is noticeable that this is the first work on Bangla multi-document text summarization. The method selects the most relevant sentence as the starting point of the summary, but no theoretical or practical reason is stated for this. Even the source of the synonyms for each word used before
<s>term-frequency calculation has not been mentioned. After selecting the final International Journal of Technology DiffusionVolume 11 • Issue 4 • October-December 2020summary sentences, there is no direction for the ordering of the sentences of different sources which is very much necessary to make the text lucid and understandable.Apart from the previous approaches, keyphrase based summarization method outperforms for both Bangla and English text, which was proposed by (Sarkar, 2014). Keyphrases are extracted as a sequence of words from any sentence that contains no punctuation mark and stop words. All the keyphrases are ranked as per their frequency and the sentences are ranked based on position and term frequency. Summary sentences are selected in two phases. In phase-1, candidate summary sentences are chosen that contain top-ranked keyphrases. From the chosen sentences, top-ranked sentences are selected that have the position not more than fifth in place in the document. If phase-1 fails to generate a summary of user-desired length, phase-2 is activated and select more summary sentences based on the sentences’ score from the rest of the sentences.Discussion: This (Sarkar, 2014) is the first task on keyphrase-based sentence extraction for Bangla text summarization. It was claimed that keyphrases can reflect the concept of a document more clearly than words. This method has set the upper limit in the length of keyphrases but no lower limit has been set. So, an experiment can be done here to set the lower limit in the length of keyphrases for better performance. In the evaluation, this method outperforms all the existing methods of Bangla text summarization. However, the same type of method has already been introduced for English (Sarkar, 2013). For sentence scoring, only position and term frequency have been considered that have already been introduced around four decades ago (Edmundson, 1969). 
Today, it can be seen that many significant features have been invented by various researchers for text summarization (Haque, Pervin, & Begum, 2013a, 2013b), so the performance of this research work could be enhanced by adding more features for sentence scoring.

Sentence-clustering-based Bangla news document summarization was published for the first time in 2015 (Haque, Pervin, & Begum, 2015). The authors introduced sentence frequency along with term frequency. Sentences are ranked by taking the algebraic sum of the scores for term frequency, sentence frequency, and numerical figures. Initially, the sentence frequency of each sentence is set to zero (0), and then every sentence is matched against the others. If any sentence is found to contain 60% of the terms of any other sentence, the smaller of the two sentences is removed and the frequency of the larger one is increased. Sentences are clustered according to their cosine similarity ratio, and the top-ranked one-third of sentences is selected from each cluster. It was claimed that clustering helps achieve better coverage of information in the summary.

Discussion: This method (Haque, Pervin, & Begum, 2015) introduced sentence clustering for the first time in Bangla text summarization. Sentence frequency is another contribution of this research work, which assists in redundancy elimination and sentence ranking. However, clustering
by cosine similarity is very conventional, as it works by directly matching terms between two sentences (Yang, Cai, Zhang, & Shi, 2014). This clustering strategy could be updated by utilizing background knowledge from Banglapedia so that two different terms can be matched semantically as well as lexically; in that case, two sentences could fall into the same cluster even though they have low cosine similarity. Again, numerical figures are counted here for sentence ranking, but no strategy is proposed to identify numerical figures when they appear in word form rather than digits. Moreover, the weight of each sentence ranking feature could be set experimentally for better summarization performance. In the performance measurement against human-generated summaries, the F-measure score was found to be 0.632, where only 20 documents were considered.

The well-established keyphrase-based method (Sarkar, 2014) has been enhanced by Haque et al. (Haque, Pervin, & Begum, 2016) for Bangla news document summarization. Here, the existing method (Sarkar, 2014) is scrutinized and the way to improve it is identified. The enhancements incorporate: (i) modifying the keyphrase selection process, (ii) including the first sentence in the summary if it contains any title word, and (iii) counting numerical figures presented in both digits and words for sentence scoring. The evaluation was drawn up by considering 200 documents with 3 summaries each (in total 3 x 200 = 600 summaries) using the ROUGE (Recall Oriented Understudy for Gisting Evaluation) (Lin & Hovy, 2003; “ROUGE 2.0”, 2016) score. In the evaluation, the F-measure score was enhanced from 0.5496 to 0.6166 (ROUGE-1) and from 0.5050 to 0.5830 (ROUGE-2).

Discussion: The method (Haque, Pervin, & Begum, 2016) is an enhancement of the existing keyphrase-based method presented in (Sarkar, 2014).
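The core of the ROUGE-N scores quoted above is clipped n-gram overlap between a candidate summary and a reference summary. A bare-bones sketch is given below; the real ROUGE toolkit adds stemming, stop-word handling, and support for multiple references:

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Recall, precision, and F-measure from clipped n-gram overlap,
    a simplified sketch of ROUGE-N for a single reference summary."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-9)
    return recall, precision, f_measure
```

With n=1 this yields the ROUGE-1 style scores and with n=2 the ROUGE-2 style scores reported in the evaluations above.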
It is remarkable that the new method significantly outperforms the existing methods. It is worth mentioning that this research work utilized the ROUGE package (“ROUGE 2.0”, 2016) for the first time for evaluating a Bangla text summarization system. It counted numerical figures in word form for sentence scoring, which, based on our study, was not considered by any other method. Nevertheless, the text form of a three-digit numerical figure contains more than one word in the Bangla language. For example, “123 – ” is written as “one hundred twenty-three – ”. In this situation, one numerical figure can be counted twice (“one hundred” and “twenty-three”), and no mechanism has been included here to handle this issue. Again, background knowledge of keyphrases (from Banglapedia) could be considered during sentence ranking to upgrade performance. Another significant point that has not been covered is dangling pronoun resolution: if a sentence containing a pronoun is extracted but the sentence containing the noun that the pronoun refers to is not included in the summary, the summary will be ambiguous.

Later on, another method was presented by Haque, Pervin, and Begum (2017a), where pronoun replacement is accomplished for the first time to minimize the
dangling pronoun problem in the summary. After replacing pronouns with the corresponding nouns, sentences are ranked by considering (i) term frequency, (ii) sentence frequency, (iii) numerical figures, and (iv) title words. Dependency parsing has been introduced here for general and special tagging of unknown words based on the tags of known words. The first sentence is always included in the summary if it contains any title word. The ROUGE evaluation results show that the method outperforms the four latest existing methods (Sarkar, 2012a, 2012b, 2014; Efat et al., 2013).

Discussion: In this method (Haque, Pervin, & Begum, 2017a), pronoun replacement by the corresponding noun is utilized for the first time. Numerical figures are considered here in both digit and word form. Their system is a rule-based system that utilizes a hidden Markov model and a Markov chain model, and it is claimed that 3,000 Bangla news documents were analyzed to derive the rules. Along with parts-of-speech tagging, they introduced special tagging for acronyms, repeated words, occupations, names of humans and places, etc. Dependency parsing is another notable feature for boosting the tagging procedure. However, most of the rules used here for dependency parsing, pronoun replacement, and special tagging have no grammatical reference, which is the principal concern with this paper. Though they identify the full human name and recall the full name from a part of the name, there can be a high false-positive rate in accurately treating a word as part of a name. So, it can be stated that there is significant scope for improvement of this method.

A heuristic approach to Bengali text summarization has been proposed by Abujar, Hasan, Shahin, and Hossain (2017). They claim to derive some rules for Bangla text analysis.
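The pronoun-replacement idea from (Haque, Pervin, & Begum, 2017a) can be illustrated with a toy sketch using English stand-ins; the actual system relies on rule-based tagging, a hidden Markov model, and dependency parsing rather than this naive most-recent-name heuristic:

```python
PRONOUNS = {"he", "she", "they", "it"}  # English stand-ins for Bangla pronouns

def replace_dangling_pronouns(sentences, names):
    """Substitute a sentence-initial pronoun with the most recently seen
    name, so an extracted sentence stays readable on its own."""
    last_name = None
    resolved = []
    for sent in sentences:
        words = sent.split()
        if words and words[0].lower() in PRONOUNS and last_name:
            words[0] = last_name
        for w in words:
            if w.strip(".,") in names:
                last_name = w.strip(".,")
        resolved.append(" ".join(words))
    return resolved
```

After this substitution, the second sentence of a pair like "Rahim won the award. He thanked everyone." can be extracted into a summary without leaving a dangling pronoun.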
Three phases are accomplished here: (i) preprocessing with linguistic analysis, (ii) prime sentence (the main leading sentence) identification by word and sentence analysis, and (iii) final processing for the betterment of summary generation. A sentence analogy matrix is utilized, and sentence imitation is considered in order to omit redundant sentences. They calculate the effective rate of words by considering the repetition distance. The first and last sentences of each paragraph are treated as significant for sentence scoring. They claim that, through the proposed rules and models, the final processing features can generate a better-quality summary from Bangla text. The evaluation was done against human-generated summaries for only three different texts, where the performance appears almost similar to that of humans, without stating the actual performance as a numerical figure.

Discussion: In the paper (Abujar et al., 2017), the relations between words and sentences are revealed, and the prime sentence is selected, with some other steps, to develop a better summarization system. The identification of words' effect rates is deemed significant, but no justification is provided for the range of the effective
<s>rate. Cue words have been considered though positive or negative cue words could be differentiated for the improvement. Imitating of sentences is taken care of so that redundancy can be minimized but a similar feature has already been proposed in (Haque, Pervin, & Begum, 2015) as sentence frequency which has not been mentioned. The identification of prime sentences can have a false-positive rate which deserves a statistical analysis. Furthermore, the method can be tested with a publicly available dataset (“Dataset”, 2016) rather than human-generated summaries only. However, it has been explicitly mentioned in the paper that there are lots of improvements required for an enhanced summarizer.Ghosh, Shahariar, and Khan (2018) proposed a Rule-based extractive summarization system by utilizing 12 features in 2018. According to their statement, major contributions are: (i) applying graph-based sentence scoring features, (ii) introducing some features for the first time as like aggregate similarity, bushy path, keyword in the sentence, presence of inverted comma and special symbol, (iii) removing redundant information from the summary. The first sentence, based on position, has been emphasized for summary generation and the importance is downgraded to the second, third and so on. The Cue-words and title words have also been taken into account for important sentence identification.Discussion: In this method (Ghosh, Shahariar, & Khan, 2018), they have stated that 12 features have been utilized. It is appreciated that they have brought some new features for Bangla text summarization and outperformed all the existing methods in evaluation. The evaluation has been turned with a published dataset (Haque, Pervin, & Begum, 2015) with ROUGE evaluation tools. The comparison has been shown with the 5 latest existing methods. However, all the features have been equally considered without depicting any analytical result for each feature individually. Haque et al. 
(Haque, Pervin, & Begum, 2017a) showed that the weight of every feature should not be the same. Moreover, there is no partial implementation of the method through which the contribution of each feature could be individually distinguished. Numerical figures written in digits are counted, while figures written in words are ignored. Furthermore, the significant issue of dangling pronouns (Haque, Pervin, & Begum, 2017b) is not addressed in this work.

Sikder, Hossain, and Robi (2019) presented a Bangla text summarization method combining mathematical and Bangla grammatical rules. They claim to have introduced the first extraction method that includes a grammatical view, which is a path toward abstraction. According to the paper, the main contributions are sentence relevancy, meaning analysis, and the joining and elimination of odd sentences. After preprocessing, sentences are ranked by term frequency, sentence position, and sentence similarity, and the top-ranked 70% of sentences are selected as the primary summary. From the primary summary, sentence joining is performed using Bangla grammatical rules, whereby two or more sentences are transformed into a single sentence. While joining, the nearest related sentences are identified for each
sentence of the primary summary, and the structure of all related sentences is distinguished. Finally, simplified sentences are generated for each sentence and placed in the appropriate positions. The method is evaluated against a human-generated summary for six different documents.

Discussion: It is appreciated that the method (Sikder, Hossain, & Robi, 2019) considers Bangla grammatical rules along with mathematical rules and introduces a path toward abstraction-based summarization, defining ways of constructing new sentences from related consecutive sentences. The paper claims the first step in Bangla text abstraction, whereas abstraction-based Bangla text summarization was already presented in 2014 (Kallimani et al., 2014). Although the method is proposed for Bangla, the authors claim it can easily be extended to other languages. Such an extension seems implausible, because there are clearly explained, significant grammatical differences between Bangla and English (Haque, Pervin, & Begum, 2017a). Moreover, no analytical justification is provided for the sentence-position score, under which the first sentence receives top importance and subsequent sentences are gradually downgraded. Sentence joining is an appreciated step, but an analytical review is needed to show its impact. Finally, the evaluation could use more documents instead of six only.

Some noticeable points about Bangla text summarization methodologies are as follows:
• Most of the utilized features have been taken from existing English text summarization;
• There is no common dataset publicly available for evaluating Bangla text summarization systems.
In this regard, we have generated and uploaded a dataset ("Dataset", 2016) that can be used by any upcoming method. This dataset has already been used by some research works, and we hope it will help researchers evaluate their methods in future;
• A semantic knowledge base can be implemented to help create a proficient Bangla text summarization system;
• A lexical dictionary like WordNet (Miller, 1995) in English can be developed for Bangla text.

EXPERIMENT WITH DIFFERENT FEATURES

The experiment generates a summary from a single input, a Bangla news document, and analyzes the impact of different features. In this analysis, each generated summary is compared with three model summaries for each of 200 news documents, and the reported results are the averages of these comparisons. Precision, Recall, and F-measure are used here, as they have long served as important evaluation metrics in the information retrieval field. If A denotes the set of sentences retrieved by the summarizer and B the set of sentences that are relevant in the target set, Precision, Recall, and F-measure are computed by the following equations:

Precision (P) = |A ∩ B| / |A| (1)

Recall (R) = |A ∩ B| / |B| (2)

F-measure = (2 × P × R) / (P + R) (3)
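Equations (1)-(3) can be expressed directly in code. The following Python snippet is an illustrative sketch only (the experiment itself was implemented in PHP and evaluated with the ROUGE package); the function names `prf` and `averaged_scores` are ours, and sentences are matched as exact strings rather than by ROUGE n-gram overlap.

```python
def prf(retrieved, relevant):
    """Precision, Recall, F-measure for one (system, model) summary pair."""
    a, b = set(retrieved), set(relevant)
    overlap = len(a & b)                        # |A intersect B|
    p = overlap / len(a) if a else 0.0          # Eq. (1)
    r = overlap / len(b) if b else 0.0          # Eq. (2)
    f = 2 * p * r / (p + r) if p + r else 0.0   # Eq. (3)
    return p, r, f

def averaged_scores(system_summary, model_summaries):
    """Average P/R/F of one system summary against several model summaries."""
    scores = [prf(system_summary, m) for m in model_summaries]
    n = len(scores)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))
```

In the reported experiment, this averaging is done per document over its three model summaries, and the final figures are further averaged over the 200 documents.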
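The sentence-frequency feature with redundancy elimination, which keeps the longer of two sentences sharing at least 60% of their terms (as described for Haque, Pervin, & Begum, 2015), can likewise be sketched. This is an assumed implementation, not the authors' code: `term_overlap` and `eliminate_redundancy` are hypothetical names, and similarity is approximated here as term-set overlap relative to the shorter sentence.

```python
def term_overlap(s1, s2):
    """Fraction of the shorter sentence's terms also present in the longer one."""
    t1, t2 = set(s1.split()), set(s2.split())
    shorter = min(t1, t2, key=len)
    longer = t2 if shorter is t1 else t1
    return len(shorter & longer) / len(shorter) if shorter else 0.0

def eliminate_redundancy(sentences, threshold=0.6):
    """Keep the longer of any two sentences that overlap >= threshold;
    the survivor's frequency score is incremented for each match."""
    kept, freq = [], []
    for s in sentences:
        for i, k in enumerate(kept):
            if term_overlap(s, k) >= threshold:
                if len(s) > len(k):      # keep the longer sentence
                    kept[i] = s
                freq[i] += 1             # survivor's frequency increases
                break
        else:
            kept.append(s)
            freq.append(0)
    return list(zip(kept, freq))
```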
To show the impact of each feature on summary quality, the following features were selected:

1. Pronoun replacement by the corresponding noun, to minimize the number of dangling pronouns;
2. Sentence ranking by:
   a. Term frequency-inverse document frequency calculation;
   b. Sentence frequency measurement with redundancy elimination;
   c. Counting numerical data written in digits;
   d. Counting numerical data written in words;
   e. Computing the title-word score;
3. Considering the first sentence specially if it contains any title word;
4. Adjustment of the coefficients of all attributes listed in point (2) above.

The development code was written in the PHP web-based programming language, using the 363 Bangla stop words from the Indian Statistical Institute website ("Stop Words", 2016). The simulation ran on a laptop with an Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz, 8.00 GB RAM (7.60 GB usable), and Windows 7 Professional 64-bit. The impact was estimated as Precision, Recall, and F-measure using Equations (1), (2), and (3), respectively, on a publicly available dataset ("Dataset", 2016) of 200 news documents with three summaries each (600 summaries in total). Each system-generated summary is compared with the three model summaries of its document, and the average Precision, Recall, and F-measure are computed with the ROUGE ("ROUGE 2.0", 2016) automatic evaluation package. The features were implemented as described in the following papers:

1. Sentence Frequency Calculation: if two or more sentences are found with 60% similarity, the longer sentence is kept and the other is deleted, as in (Haque, Pervin, & Begum, 2015);
2.
Replacing Pronoun by Corresponding Noun: each pronoun is replaced by its corresponding noun so that the related noun and pronoun are treated as the same word, which also affects the word-frequency calculation; implemented with the rule-based pronoun replacement of (Haque, Pervin, & Begum, 2017b);
3. Count Numerical Figure From Digits: numerical figures written in digits are counted with the pattern recognition addressed in (Haque, Pervin, & Begum, 2016);
4. Count Numerical Figure From Words and Digits: numerical figures written in either words or digits are counted with the pattern recognition addressed in (Haque, Pervin, & Begum, 2016);
5. Considering Title Words: the title-word score has been used for sentence ranking in several methods (Sarkar, 2012a; Haque et al., 2013a, 2013b; Islam & Masum, 2014; Efat, Ibrahim, & Kayesh, 2013);
6. Coefficients Adjustment: sentence ranking in several Bangla text summarization methods uses parameters such as numerical figure, sentence frequency, title word, and term frequency; since the impacts of these parameters are not equal (Sarkar, 2012b; Haque, Pervin, & Begum, 2015), the coefficient of each parameter is adjusted for better
performance;
7. Considering the First Sentence Specially: the first sentence of every document is considered important (Haque, Pervin, & Begum, 2015), and we follow the same practice.

In Figure 1, the above features are added one by one, and the features used at each step include all features of the previous step(s).

Figure 1. Step-by-step improvement of performance for including each feature

After generating summaries using only term frequency, the F-measure score was 0.4124 on the same dataset ("Dataset", 2016). As each feature was incorporated, the F-measure rose as shown in Table 1.

Table 1. Percentage of performance improvement

SN# | Feature | Improvement
1 | Sentence frequency calculation | 6.71%
2 | Replacing pronoun by corresponding noun | 9.66%
3 | Count numerical figure from digits | 6.15%
4 | Count numerical figure from words and digits | 5.11%
5 | Considering title words | 4.96%
6 | Coefficients adjustment | 2.47%
7 | Considering the first sentence specially | 4.47%

COMPARISON AMONG VARIOUS APPROACHES

Fourteen approaches have been discussed in the previous sections with their pros and cons. Table 2 presents a comparison among these approaches based on their incorporated features and evaluation results.

Table 2.
Comparison among fourteen approaches of Bangla text summarization

1. Islam and Masum (2014)
   Features: (i) term frequency; (ii) useful word list with important and unimportant word lists.
   Remarks: This method has a keyword search module for summary generation, where keywords are selected on the basis of tf-idf and the lists of useful, important, and unimportant words.
   Evaluation: No evaluation has been drawn.

2. Uddin and Khan (2007)
   Features: (i) location method; (ii) cue method; (iii) title method; (iv) term frequency.
   Remarks: This research work has shown that some features of English text summarization can be used for Bangla text.
   Evaluation: Rated 8.4 on a 0-10 scale by human professionals, with 40% extraction.

3. Sarkar (2012a)
   Features: (i) term frequency; (ii) sentence length; (iii) sentence position.
   Remarks: The impact of thematic terms has been investigated, and some statistical measures have been incorporated for sentence scoring.
   Evaluation: Unigram-based recall score is 0.4122.

4. Sarkar (2012b)
   Features: (i) term frequency; (ii) sentence length; (iii) sentence position.
   Remarks: Claimed to use the features more effectively for news document summarization than the previous method (Sarkar, 2012a); threshold points are adjusted for sentence position, TF-IDF values, and minimum length when selecting summary sentences.
   Evaluation: Precision, Recall, and F-measure claimed as 0.3659, 0.5064, and 0.4169, respectively.

5. Efat et al.
(2013)
   Features: (i) term frequency; (ii) sentence position; (iii) skeleton of document.
   Remarks: The system is divided into three segments: pre-processing the test document, sentence scoring, and summarization based on sentence scores.
   Evaluation: Average accuracy of 83.57% against human-generated summaries.

6. Kallimani et al. (2014)
   Features: (i) parts-of-speech tagging; (ii) named entity recognition; (iii) utilizing sentence templates; (iv) abstraction.
   Remarks: The input document is categorized to apply specific classes. Some attributes are extracted from
the document and mapped to sentence templates for summary-sentence generation.
   Evaluation: An average of 86.24% precision, 78.93% recall, and 81.5% F-measure in intrinsic evaluation.

7. Uddin et al. (2014)
   Features: (i) term frequency; (ii) cosine similarity among sentences; (iii) A* search algorithm (Aker et al., 2010); (iv) multi-document text summarization.
   Remarks: A multi-document text summarization system. A primary summary is first generated by sentence scoring; a graph-based model with the A* search algorithm (Aker et al., 2010) is then applied to the primary summary to create the final gist.
   Evaluation: Unigram-based recall score claimed as 56%.

8. Sarkar (2014)
   Features: (i) keyphrase extraction; (ii) sentence position; (iii) term frequency.
   Remarks: A keyphrase-based sentence extraction method for both Bangla and English documents, where keyphrases are sequences of words without stop words or punctuation marks. A two-phase approach is applied: the first phase selects sentences on the basis of top-ranked keyphrases and sentence scores; the second phase is activated if a summary cannot be created in the first phase, selecting more sentences by score.
   Evaluation: Claimed to outperform existing Bangla methods (Sarkar, 2012a, 2012b); F-measure of 0.4242 for Bangla text summarization.

9. Haque et al. (2015)
   Features: (i) term frequency; (ii) sentence frequency; (iii) counting numerical figures; (iv) sentence clustering.
   Remarks: Sentences are ranked using term frequency, sentence frequency, and numerical-figure counting. Sentence frequency is initially set to zero for each sentence; then, if any sentence is found to contain 60% of the terms of another, the smaller sentence is removed and the frequency of the larger one is increased. Sentence clustering is utilized to carry diversified information into the summary.
After clustering, one-third of the top-ranked sentences are selected.
   Evaluation: Precision, Recall, and F-measure claimed as 0.608, 0.664, and 0.632, respectively.

10. Haque et al. (2016)
   Features: (i) setting a minimum length for keyphrases; (ii) considering the first sentence; (iii) counting numerical figures from both digits and words.
   Remarks: An enhancement of the existing keyphrase-based method of (Sarkar, 2014). The enhancements include: (i) setting a minimum keyphrase length, (ii) considering the first sentence specially, and (iii) counting numerical figures from words and digits for sentence scoring.
   Evaluation: ROUGE-1 F-measure of 0.6166 (versus 0.5495 for the previous method (Sarkar, 2014)) on the same dataset mentioned in (Haque, Pervin, & Begum, 2016).

11. Haque et al. (2017)
   Features: (i) term frequency; (ii) sentence frequency; (iii) counting numerical figures in both numeric and word forms; (iv) title words; (v) general and special tagging; (vi) dependency parsing; (vii) replacement of pronouns.
   Remarks: Two significant contributions: (i) pronoun replacement to solve the dangling-pronoun issue, and (ii) dependency parsing to enhance the tagging procedure. After replacing pronouns by corresponding nouns, sentences are ranked with special tagging so that important
sentences rank higher. Moreover, the first sentence is always included in the summary if it contains any title word.
   Evaluation: F-measure scores of 0.6003 (ROUGE-1) and 0.5708 (ROUGE-2) on a publicly available dataset ("Dataset", 2016). Compared with four existing methods (Sarkar, 2012a, 2012b; Efat et al., 2013; Sarkar, 2014), this method outperformed the others.

12. Abujar et al. (2017)
   Features: (i) term frequency; (ii) word and sentence analysis; (iii) numerical value identification; (iv) sentence position and length; (v) title words; (vi) cue words; (vii) word effect rate; (viii) prime sentence identification; (ix) aggregate similarity measurement; (x) detection of sentence imitation.
   Remarks: The authors claim to derive some rules of Bangla text analysis. After preprocessing with linguistic analysis, the prime sentence is identified by word and sentence analysis. The effect rate of words is calculated by considering repeated distance, and aggregate similarity is measured to eliminate redundancy. Sentence position, length, cue words, numerical values, term frequency, and title words are considered for sentence ranking.
   Evaluation: Evaluated against human-generated summaries for only three texts, with performance reported as close to human without actual numerical figures. No comparison against existing methods.

13. Ghosh et al. (2018)
   Features: (i) aggregate similarity; (ii) bushy path; (iii) term frequency-inverse sentence frequency; (iv) keywords; (v) sentence position; (vi) title words; (vii) cue words; (viii) numerical values; (ix) inverted comma; (x) special symbol; (xi) date format; (xii) URL/email address.
   Remarks: A rule-based extractive text summarization method that has utilized 12 features for sentence ranking.
Major contributions of this method includes: (i) applying graph based sentence scoring features, (ii) introducing some features for the first time as like aggregate similarity, bushy path, keyword in sentence, presence of inverted comma and special symbol, (iii) removing redundant information from summary.F-measure scores have been found as 0.6276 for ROUGE-1 evaluation result with a publicly available dataset from (“Dataset”, 2016). Comparison has been turned with the five existing methods (Sarkar, 2012a; 2012b; Efat et al., 2013; Sarkar, 2014; Haque et al., 2017) where this method has outperformed others.International Journal of Technology DiffusionVolume 11 • Issue 4 • October-December 2020CONCLUSIONIn this paper, fourteen approaches of Bangla text summarization have been described where thirteen methods are for the single document and one is for multiple documents. Two methods are there for abstraction based text summarization. The trend in the research work of Bangla text summarization has been tried to explore in brief. A comparison table has also been drawn to view the similarity and differences among them. The strength and weaknesses of all the methods have been discussed along with the scope of improvement. It has been indicated with the reference that most of the incorporated features in various existing methods of Bangla text summarization were collected from the methods of English text but with a different angle, because the structure of Bangla is</s>
different from English. Even so, this small body of work raises hope for more sophisticated methodologies soon. It is also expected that this review will help the next generation understand the foundations of research in Bangla text summarization and find direction for future work. Our future work is an impact analysis of different features to identify how well each works for different patterns of documents in Bangla text summarization.

Table 2 (continued)

14. Sikder et al. (2019)
   Features: (i) term frequency; (ii) sentence relevancy; (iii) position; (iv) Bangla grammatical rules; (v) primary summary generation; (vi) sentence simplification rules; (vii) sentence joining and linking; (viii) abstraction.
   Remarks: This method considers Bangla grammatical rules along with mathematical rules and introduces a path toward abstraction-based summarization. The stated main contributions are sentence relevancy, meaning analysis, and joining and eliminating odd sentences. After sentence ranking, the top-ranked 70% of sentences are selected as the primary summary; from these, sentences are joined, redundancy is eliminated, and sentences are simplified to generate the final summary.
   Evaluation: Evaluated against human-generated summaries for six documents only; results are depicted in a graph without the actual numerical values.

ACKNOWLEDGMENT

This research work is funded by a Fellowship Scholarship from the Information and Communication Technology Division, Government of the People's Republic of Bangladesh. There is also valuable support from the Central Bank of Bangladesh.

REFERENCES

Abujar, S., Hasan, M., Shahin, M. S. I., & Hossain, S. A. (2017). A Heuristic Approach of Text Summarization for Bengali Documentation.
8th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 1-8. doi:10.1109/ICCCNT.2017.8204166

Ai, D., Zheng, Y., & Zhang, D. (2010). Automatic text summarization based on latent semantic indexing. Journal of Artificial Life and Robotics, 15(1), 25–29. doi:10.1007/s10015-010-0759-x

Aker, A., Cohn, T., & Gaizauskas, R. (2010). Multi-document summarization using A* search and discriminative training. Conference on Empirical Methods in Natural Language Processing, 482-491.

Banglapedia. (2003). National Encyclopedia of Bangladesh. Asiatic Society of Bangladesh.

Baxendale, P. B. (1958). Machine-made index for technical literature - An experiment. IBM Journal of Research and Development, 2(4), 354–361. doi:10.1147/rd.24.0354

Chowdhury, M., Khalil, I., & Chowdhury, M. H. (2000). Bangla Vasar Byakaran. Ideal Publication.

Dataset for evaluating Bangla text summarization system. (n.d.). Bangla Natural Language Processing Community. Retrieved January 2016 from http://bnlpc.org/research.php

Dhanya, P. M., & Jathavedan, M. (2013). Comparative study of text summarization in Indian languages. International Journal of Computer Applications, 75(6), 17-21.

Edmundson, H. P. (1969). New methods in automatic extracting. Journal of the Association for Computing Machinery, 16(2), 264–285. doi:10.1145/321510.321519

Efat, M. I. A., Ibrahim, M., & Kayesh, H. (2013). Automated Bangla text summarization by sentence scoring and ranking. In International Conference on Informatics, Electronics & Vision (ICIEV). IEEE. doi:10.1109/ICIEV.2013.6572686

Ferreira, R., & Souza, L. D. (2014). A multi-document summarization system based on statistics and linguistic treatment. Expert Systems with Applications, 41(13), 5780–5787. doi:10.1016/j.eswa.2014.03.023

Ghosh, P., Shahariar, R., & Khan, M. (2018). A Rule Based Extractive
Text Summarization Technique for Bangla News Documents. International Journal of Modern Education and Computer Science, 10(12), 44–53. doi:10.5815/ijmecs.2018.12.06

Gupta, V. (2013). A survey of text summarizers for Indian languages and comparison of their performance. Journal of Emerging Technologies in Web Intelligence, 5(4), 361–366. doi:10.4304/jetwi.5.4.361-366

Gupta, V., & Lehal, G. S. (2010). A survey of text summarization extractive techniques. Journal of Emerging Technologies in Web Intelligence, 2(3), 258–268. doi:10.4304/jetwi.2.3.258-268

Haque, M. M., Pervin, S., & Begum, Z. (2013a). Literature review of automatic multiple documents text summarization. International Journal of Innovation and Applied Studies, 3(1), 121–129.

Haque, M. M., Pervin, S., & Begum, Z. (2013b). Literature review of automatic single document text summarization using NLP. International Journal of Innovation and Applied Studies, 3(3), 857–865.

Haque, M. M., Pervin, S., & Begum, Z. (2015). Automatic Bengali news documents summarization by introducing sentence frequency and clustering. In 18th International Conference on Computer and Information Technology (ICCIT) (pp. 156–160). doi:10.1109/ICCITechn.2015.7488060

Haque, M. M., Pervin, S., & Begum, Z. (2016). Enhancement of keyphrase-based approach of automatic Bangla text summarization. TENCON Conference. doi:10.1109/TENCON.2016.7847955

Haque, M. M., Pervin, S., & Begum, Z. (2017a). An innovative approach of Bangla text summarization by introducing pronoun replacement and improved sentence ranking. Journal of Information Processing Systems, 13(4), 752–777. doi:10.3745/JIPS.04.0038

Haque, M. M., Pervin, S., & Begum, Z. (2017b). Rule based replacement of pronoun by corresponding noun for Bangla news documents. International Journal of Technology Diffusion, 8(2), 26–42.
doi:10.4018/IJTD.2017040102

Hovy, E. (2005). Automated text summarization. In R. Mitkov (Ed.), The Oxford Handbook of Computational Linguistics (pp. 583–598). Oxford University Press.

Islam, M. T., & Masum, S. M. A. (2004). Bhasa: A corpus-based information retrieval and summariser for Bengali text. Proceedings of the 7th International Conference on Computer and Information Technology.

Kallimani, J. S., Srinivasa, K. G., & Reddy, E. B. (2014). A comprehensive analysis of guided abstractive text summarization. International Journal of Computer Science Issues, 11(6).

Karim, M. A., Kaykobad, M., & Murshed, M. (2013). Technical Challenges and Design Issues in Bangla Language Processing. IGI Global. doi:10.4018/978-1-4666-3970-6

Kumar, Y. J., & Salim, N. (2012). Automatic multi document summarization approaches. Journal of Computational Science, 8(1), 133–140. doi:10.3844/jcssp.2012.133.140

Lin, C., & Hovy, E. (2003). Automatic evaluation of summaries using n-gram co-occurrence statistics. Proceedings of the Human Language Technology Conference 2003 (HLT-NAACL-2003). doi:10.3115/1073445.1073465

Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), 159–165.
doi:10.1147/rd.22.0159

Mani, I., Klein, G., House, D., Hirschman, L., Firmin, T., & Sundheim, B. (2002). SUMMAC: A text summarization evaluation. Natural Language Engineering, 8(1), 43–68. doi:10.1017/S1351324901002741

Miller, G. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41. doi:10.1145/219717.219748

Nenkova, A., & McKeown, K. (2012). A survey of text summarization techniques. In Mining Text Data (pp. 43–76). Springer. doi:10.1007/978-1-4614-3223-4_3

ROUGE 2.0. (n.d.). Java package for evaluation of summarization tasks with updated ROUGE measures. Retrieved from http://kavita-ganesan.com/content/rouge-2.0

Saggion, H., & Poibeau, T. (2013). Automatic text summarization: Past, present and future. In Multi-source, Multilingual Information Extraction and Summarization (pp. 3–21). Berlin: Springer-Verlag. doi:10.1007/978-3-642-28569-1_1

Sarkar, K. (2012a). Bengali text summarization by sentence extraction.
In Proceedings of the International Conference on Business and Information Management (pp. 233-245). NIT Durgapur.

Sarkar, K. (2012b). An approach to summarizing Bengali news documents. Proceedings of the International Conference on Advances in Computing, Communications and Informatics, 857-862. doi:10.1145/2345396.2345535

Sarkar, K. (2013). Automatic single document text summarization using key concepts in documents. Journal of Information Processing Systems, 9(4), 602–620. doi:10.3745/JIPS.2013.9.4.602

Sarkar, K. (2014). A keyphrase-based approach to text summarization for English and Bengali documents. International Journal of Technology Diffusion, 5(2), 28–38. doi:10.4018/ijtd.2014040103

Second most spoken languages around the world. (n.d.). Retrieved August 20, 2015, from http://graduate.olivet.edu/news-events/news/second-most-spoken-languages-around-world

Sikder, R., Hossain, M. M., & Robi, R. H. (2019). Automatic text summarization for Bengali language including grammatical analysis. International Journal of Scientific & Technology Research, 8(6), 288–292.

Uddin, M. A., Sultana, K. Z., & Alam, M. A. (2014). A multi-document text summarization for Bengali language. In The 9th International Forum on Strategic Technology (IFOST). Chittagong University of Engineering & Technology (CUET).

Uddin, M. N., & Khan, S. A. (2007). A study on text summarization techniques and implement few of them for Bangla language. 10th International Conference on Computer and Information Technology, 1-4. doi:10.1109/ICCITECHN.2007.4579374

Vector space retrieval model. (n.d.).
Retrieved May 10, 2016, from http://www.ccs.neu.edu/home/jaa/CSG339.06F/Lectures/vector.pdf

Stop Words. (2016). List of stop words for Bengali language. Indian Statistical Institute. Retrieved from http://www.isical.ac.in

Yang, L., Cai, X., Zhang, Y., & Shi, P. (2014). Enhancing sentence-level clustering with ranking-based clustering framework for theme-based summarization. Information Sciences, 37-50.

Md. Majharul Haque completed his BSc (Hons) in CSE and MS in IT, and earned his doctorate from the CSE Department of the University of Dhaka. He now works as Assistant Systems Analyst (Deputy Director) in the Central Bank of Bangladesh.

Anowar Hossain is a passionate software professional with a spur to accept new challenges, always eager to learn new mechanisms to make his contribution more meaningful.

Ye, S., Chua, T., Kan, M., & Qiu, L. (2007). Document concept lattice for text understanding and summarization.
Journal of Information Process Management, Elsevier, 43(6), 1643–1662. doi:10.1016/j.ipm.2007.03.010Zaman, N. U. (2008). Big Picture Seminer Series. University of Rochester. Retrieved December 29, 2015, from https://www.cs.rochester.edu/u/naushad/survey/BigPicture-URCS-NZ-Bangla.pdfhttp://www.isical.ac.inhttp://www.isical.ac.inhttp://dx.doi.org/10.1016/j.ipm.2007.03.010https://www.cs.rochester.edu/u/naushad/survey/BigPicture-URCS-NZ-Bangla.pdf</s>
&enrichSource=Y292ZXJQYWdlOzM0NDMzNzk4NjtBUzo5Mzg1MDY3MDg4NTY4MzNAMTYwMDc2ODg0NDIwMg%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Dhaka_University_of_Engineering_Technology?enrichId=rgreq-7e07316b4c621f7aa6dfe9fcf7066f9d-XXX&enrichSource=Y292ZXJQYWdlOzM0NDMzNzk4NjtBUzo5Mzg1MDY3MDg4NTY4MzNAMTYwMDc2ODg0NDIwMg%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Ss_Mahtab?enrichId=rgreq-7e07316b4c621f7aa6dfe9fcf7066f9d-XXX&enrichSource=Y292ZXJQYWdlOzM0NDMzNzk4NjtBUzo5Mzg1MDY3MDg4NTY4MzNAMTYwMDc2ODg0NDIwMg%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Busrat_Jahan?enrichId=rgreq-7e07316b4c621f7aa6dfe9fcf7066f9d-XXX&enrichSource=Y292ZXJQYWdlOzM0NDMzNzk4NjtBUzo5Mzg1MDY3MDg4NTY4MzNAMTYwMDc2ODg0NDIwMg%3D%3D&el=1_x_10&_esc=publicationCoverPdfAn Automated Bengali Text Summarization Technique Using Lexicon Based Approach Busrat Jahan1, Sheikh Shahparan Mahtab, Md. Faizul Huq Arif, Ismail Siddiqi Emon4, Sharmin Akter Milu5, Md. Julfiker Raju6 1Department of CSE, Feni University, Feni, Bangladesh Email: hossenbipasa980@gmail.com 2Department of EESE,Universiti Kebangsaan Malaysia , Bangi, Selangor,Malaysia Email: mahtabshahzad@gmail.com 3Department of ICT(DoICT),ICT Division, Bangladesh Email: arifict27@gmail.com 4Department of CSE, Feni University, Feni, Bangladesh Email: emonsahriar0@gmail.com 5Department of CSTE, Noakhali Science and Technology University, Noakhali, Bangladesh Email: sharminmilu7@gmail.com 6Department of CSE, Feni University, Feni, Bangladesh Email: julfikerar@gmail.com {Corresponding Author: Sheikh Shahparan Mahtab2* Email: mahtabshahzad@gmail.com} Abstract.There is enough resources for English to process and obtain summarize documents. But this thing is not directly applicable for Bengali Language as there is lots of complexity in Bengali, which is not same to English in the context of Grammar and sentence structure. 
Moreover, work on Bengali is harder because no established tool exists to support such research, yet the need is real: about 260 million (26 crore) people use the language. We therefore propose a new approach to Bengali document summarization. The system preprocesses the input document, tags each word, replaces pronouns, and ranks sentences, in that order. Pronoun replacement is included to reduce the rate of dangling pronouns in the output summary. After pronoun replacement, sentences are ranked by sentence frequency, by numerical figures (in both digit and word form), and by the document title: a sentence containing a word that also appears in the title is weighted accordingly. The similarity between pairs of sentences is checked so that one of two near-duplicate sentences can be removed, which reduces redundancy. Numerical figures also carry weight, so they are identified as well. Words from over 3000 newspaper and book documents were trained according to the grammar, and two documents were then run through the designed system to evaluate its effectiveness. The evaluation yielded a Recall of 0.70 (70%), a Precision of 0.82 (82%), and an F-score of 0.74 (74%).

Keywords: Text Summarizer, BTS, Bengali, NLP, Python, Machine Learning, POS Tagging.

1 Introduction

Text summarization is the process of producing a condensed version of a text or document. Many summarization tools exist for English, and some work has also been done on automated Bengali text or document summarization, but from an application standpoint the existing tools are not very suitable. Summaries are produced in two ways: the</s>
<s>extractive and the abstractive approach. Most summarization methods for Bengali text are extractive [3]. In an automated text summarization process, a text is given to the computer, and the computer returns a non-redundant extract or abstract of the original text(s). Text abstraction is the process of producing an abstract or an extract by selecting a significant portion of the information from one or more texts [1-3]. An extract thus conveys the gist of the original, though extraction sometimes loses information, and these methods cannot compose a single plain text from related hierarchical texts. Extractive summarization involves less complexity than abstractive summarization. Grammatical rules can be combined with mathematical rules for sentence construction to reduce avoidable errors; alternatively, a new plain text can be created from multiple texts, which reduces the size of the summary [4]. Rafel et al. state that an extractive summarizer satisfies all the basic requirements; such a method has three stages: text analysis, sentence ranking/scoring, and summarization [5].

2 Literature Review

This section surveys summarization of single Bangla documents. Research on Bangla text summarization began several years ago, and most previous work in the domain was based on sentence selection. A survey of different text summarization techniques is given in [3]; the authors analyzed various methods and implemented an extraction-based Bangla text summarizer. The method proposed by K. Sarkar [4] provides a summary of a text without requiring the full text to be read. Its main steps are (i) preprocessing, (ii) sentence scoring/ranking, and (iii) summary generation.
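The three-step extractive scheme just outlined (preprocessing, sentence scoring/ranking, summary generation) can be sketched in Python. Everything below, including the `summarize` function, its stopword handling, and the simple term-frequency-plus-positional score, is an illustrative assumption, not the authors' implementation:

```python
import re
from collections import Counter

def summarize(text, stopwords=frozenset(), k=2):
    """Rank sentences by mean term frequency plus a positional bonus,
    then return the top-k sentences in document order. An illustrative
    sketch of the preprocess / score / generate pipeline."""
    # Preprocessing: split into sentences (including the Bangla danda
    # '\u0964') and lowercase word tokens, dropping stopwords.
    sentences = [s.strip() for s in re.split(r'[.!?\u0964]', text) if s.strip()]
    tokenize = lambda s: [w for w in re.findall(r'\w+', s.lower())
                          if w not in stopwords]
    tf = Counter(w for s in sentences for w in tokenize(s))

    # Scoring: mean term frequency + positional value (earlier is better).
    def score(i, s):
        words = tokenize(s)
        freq = sum(tf[w] for w in words) / len(words) if words else 0.0
        positional = 1.0 / (i + 1)   # first sentence weighted most
        return freq + positional

    # Summary generation: pick the k best, keep original document order.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```

A frequency-dominant opening sentence therefore tends to survive into the summary, mirroring the TF and positional-value (PV) features discussed in this section.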
The method also uses term frequency (TF), inverse document frequency (IDF), and a Positional Value (PV). The method presented by M. M. Haque et al. [5] summarizes Bangla documents with an extraction-based technique in four major steps: (i) preprocessing, (ii) sentence scoring/ranking, (iii) sentence clustering, and (iv) summary generation. M. I. A. Efat et al. [6] proposed an extraction-based summarization method that operates on Bangla documents and can summarize a single document; it has two major steps: (i) preprocessing and (ii) sentence scoring/ranking with summarization. The method of A. Das and S. Bandyopadhyay [7] identifies sentiment in the text, combines it, and finally produces the summary. They used a sentiment model to restore and integrate sentiment; the integration is based on theme clustering (K-means) and document-level theme relational graph algorithms, and the summary sentences are finally selected with the standard PageRank algorithm used in data retrieval.

3 Suggested Method

To tag words successfully, we employ two tagging systems: a general tagging system and a special tagging system. The special tagging system refines and updates the general tags.

3.1 General Tagging

Every word is made</s>
<s>to tag (like noun, pronoun, adjective, verb, preposition, etc.) using a lexicon database [1] and SentiWordNet [2]. Both resources contain a limited number of predefined words. With the lexicon database, words can be tagged as „JJ‟ (Adjective), „NP‟ (Proper noun), „VM‟ (Verb), „NC‟ (Common Noun), „PPR‟ (Pronoun), etc. SentiWordNet, on the other hand, lists words tagged as „a‟ (Adjective), „n‟ (Noun), „r‟ (Adverb), „v‟ (Verb), or „u‟ (Unknown). Using these predefined word lists, we experimented on 200 Bangla news documents and found that 70% of the words could be tagged. Bangla words, especially verbs, are particularly interesting [3]. Although we use word stemming to identify the root form of a word, not all inflected verbs can be stemmed; in fact, verb identification is very difficult because Bangla has many suffixes. For example, depending on tense and person, the English word "do" becomes "doing", "did", or "does", but the corresponding Bangla word has far more forms. In the present continuous tense, for instance, the word “ ” (kor - do) takes three main forms depending on person: “ ” (doing) for the first person, “ ” (doing) for the second person, and “ ” (doing) for the third person. The verb forms for the different Bangla words meaning "you" also differ: “ ” (you are doing), “ ” (you are doing), and “ ” (you are doing) are all present continuous tense, second person.
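This person- and tense-dependent inflection is why the tagger falls back to a suffix check: a word ending in a known verb suffix is tagged as a verb. A minimal sketch under stated assumptions follows; the romanized suffixes stand in for the Bangla suffix strings (which the paper lists in Bangla script), and `tag_if_verb` and its `lexicon` argument are illustrative names, not the authors' API:

```python
# Romanized stand-ins for the Bangla verb suffixes the paper checks
# (e.g. -itechhis, -techhis, -itis, -ile, -ibi). Sorted longest-first
# so that the longest suffix always wins the match.
VERB_SUFFIXES = sorted(
    ["itechhis", "techhis", "itis", "ile", "ibi"],
    key=len, reverse=True,
)

def tag_if_verb(word, lexicon=None):
    """Return the lexicon tag if the word is known; otherwise tag it
    'VM' (verb) when it ends in a known verb suffix; otherwise return
    None and leave the word for the later special-tagging step."""
    if lexicon and word in lexicon:
        return lexicon[word]
    for suf in VERB_SUFFIXES:
        # Require the stem to be non-empty so a bare suffix is not a verb.
        if word.endswith(suf) and len(word) > len(suf):
            return "VM"
    return None
```

Checking the lexicon first and using suffixes only as a fallback matches the paper's ordering: general tagging from predefined word lists, then the suffix list as a final check.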
Thus the word “ ” (do) may take more than forty inflected forms: “ ” (do), “ ” (doing), “ ” (did), and so on through every combination of tense and person. Verb identification is nevertheless very important for language processing, because the verb is the main word of a</s>
<s>sentence. The complexity of the Bangla verb therefore cannot be compared with English. The following list of suffixes is used for a final check: “ ” (itechhis), “ ” (techhis), “ ” (itis), “ ” (ile), “ ” (ibi), etc. If a word carries one of these suffixes, it is tagged as a verb. This improved the word-tagging result from 68.12% (before using the suffix list [4]) to 70% (after using it). The preliminary tags obtained in this step may be updated in the next steps, where certain words are specifically tagged as acronym, named entity, occupation, etc. [17-20].

3.2 Special Tagging

After general tagging, special tagging is applied to identify acronyms, elementary forms, numerical figures, repetitive words, and names of occupations, organizations, and places.

1. Examining for English acronyms: An acronym is a word formed from the initials of other words, such as “ ” (UNO), “ ” (OIC), or “ ” (USA). Every English letter can be written in Bangla (A as “ ”, B as “ ”, C as “ ”, D as “ ”, …, W as “ ”, X as “ ”, Y as “ ”, Z as “ ”), so a word such as “ ” (UNO) can be split and matched letter by letter against this table: “ ” against “ ” (U), “ ” against “ ” (O), and so on. The letter spellings are sorted in descending order of string length, with W (“ ”) first and A (“ ”) last, and matching always proceeds in this order to ensure the longest match: “ ” (M) must not match “ ” (A) when it should match “ ” (M). This experiment shows a 98% success rate.

2. Studying for the Bangla elementary tag: Bangla letters separated by spaces, such as “ ” (A K M) or “ ” (A B M), are tagged with the Bangla elementary tag.
Our research shows that the accuracy of the elementary-tag result is 100%.

3. Studying for recurrent words: Recurrent words are a special form of word combination in which the same word appears twice consecutively, for example „„ ‟‟ (thanda thanda - cold cold), “ ” (boro boro - big big), or “ ” (choto choto - small small). Some words are only partially repeated, such as “ ” (khawa dawa - eating). We found 100% accuracy in identifying recurrent/repetitive words.

4. Studying for numerical digits: Three conditions are examined to recognize numerical expressions written in digits or words: a) The first part of the expression is a digit, 0 for ( ), 1 for ( ), 2 for ( ), …, 9 for ( ), or a number word such as “ ”</s>
<s>(one), “ ” (two), “ ” (three), “ ” (four), up to “ ” (ninety-nine). The decimal point (.) is also considered when examining numerical forms in digits. b) The next part (if any) is a scale word such as “ ” (hundred) or “ ” (thousand). c) Finally, the expression may carry a suffix such as „ ‟ (this), “ ” (this), or " " (en). In experiments on our sample test documents, 100% of numerical forms were found, in both digit and word representations.

5. Studying for names of occupations: Occupation is a significant clue for identifying human named entities: when a word is recognized as an occupation, the next few words can be considered as a candidate named entity. We retrieved entries for occupations in Bangladesh from a table, such as “ ” (shikkhok - teacher) and “ ” (sangbadik - journalist), collected from different online sources; every word in the text is matched against this table, and matches are tagged as occupation. Compound forms are also covered, so “ ” (shikkhok - teacher) extends to “ ” (prodhan shikkhok - head master), and so on. This study identifies 96% of occupations.

6. Studying for names of organizations: Organization names are an important factor, and any type of word may be an element of one. Our analysis uses two cues: a) the full name of the organization followed by its acronym in parentheses, for example “ ( )” (Durniti Domon Commission (DUDOK) - Anti-Corruption Commission (ACC)); b) an organization name whose last part contains certain words such as “ ” (limited), “ ” (biddaloy - school), or “ ” (montronaloy - ministry). For cue (b), whenever such a word appears in the text, the three words preceding it are immediately checked.
If those words are found to be a noun, a named entity, or any blocked word, they are treated together as the organization name. We found that organization names are correctly accepted on the basis of cue (b) 85% of the time.

7. Studying for names of places: A table of the place names of Bangladesh holds 800 entries covering divisions, districts, upazilas, and municipalities, with divisions at the top level, districts at the second, and upazilas or municipalities at the third. In addition, we analyzed the names of 230 countries and their capitals. In this way, about 91% of place names were identified in our experiments.

4 Experimental Results

[Sample Bangla input document (title and text) and the summary generated for it]

Figure 1. Sentence Scoring of Sample Document
Figure 2. Mean Deviance of Sample Document

4.1 Co-selection measures

In co-selection measures, the principal evaluation metrics are [12]: (i) Precision (P): It is the number of sentences occurring in both system</s>